ERIC Educational Resources Information Center
Lane, Kathleen Lynne; Oakes, Wendy Peia; Jenkins, Abbie; Menzies, Holly Mariah; Kalberg, Jemma Robertson
2014-01-01
Comprehensive, integrated, three-tiered models are context specific and developed by school-site teams according to the core values held by the school community. In this article, the authors provide a step-by-step, team-based process for designing comprehensive, integrated, three-tiered models of prevention that integrate academic, behavioral, and…
Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fernando
2004-01-01
The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or of the system (or process). The step-up and step-down features, as well as the sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g., a step change). The short-time Fourier transform of the signal is taken using a Hamming window, with three window widths. The DC value is removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution. The results indicate distinct shapes corresponding to each process feature.
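To make the described pipeline concrete, here is a minimal Python sketch (an illustration, not the authors' code) of a Hamming-windowed short-time Fourier transform with per-window DC removal at three window widths; the sampling rate, time constant, and feature amplitudes are invented:

```python
# Step + drift + spike test signal, then STFT with a Hamming window.
import numpy as np
from scipy.signal import stft

fs = 100.0                            # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
tau = 2.0                             # RC time constant (s), assumed
x = 1 - np.exp(-t / tau)              # step-up (charging)
x += 0.002 * t                        # drift
x[int(12 * fs)] += 0.5                # spike injected during drift

for nperseg in (64, 128, 256):        # three window widths
    f, tt, Z = stft(x, fs=fs, window="hamming", nperseg=nperseg,
                    detrend="constant")   # removes the DC value per window
    print(nperseg, Z.shape)           # frequency-by-time grid for a 3D plot
```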
The HPT Model Applied to a Kayak Company's Registration Process
ERIC Educational Resources Information Center
Martin, Florence; Hall, Herman A., IV; Blakely, Amanda; Gayford, Matthew C.; Gunter, Erin
2009-01-01
This case study describes the step-by-step application of the traditional human performance technology (HPT) model at a premier kayak company located on the coast of North Carolina. The HPT model was applied to address lost revenues related to three specific business issues: misinformed customers, dissatisfied customers, and guides not showing up…
An overview of three main types of simulation approach (explanatory, abstraction, and estimation) is presented, along with a discussion of their capabilities, limitations, and the steps required for their validation. A process model being developed through the Forest Response Prog...
Marshall, Teresa A; Marchini, Leonardo; Cowen, Howard; Hartshorn, Jennifer E; Holloway, Julie A; Straub-Morarend, Cheryl L; Gratton, David; Solow, Catherine M; Colangelo, Nicholas; Johnsen, David C
2017-08-01
Critical thinking skills are essential for the successful dentist, yet few explicit skillsets in critical thinking have been developed and published in peer-reviewed literature. The aims of this article are to 1) offer an assessable critical thinking teaching model with the expert's thought process as the outcome, learning guide, and assessment instrument and 2) offer three critical thinking skillsets following this model: for geriatric risk assessment, technology decision making, and situation analysis/reflections. For the objective component, the student demonstrates delivery of each step in the thought process. For the subjective component, the student is judged to have grasped the principles as applied to the patient or case. This article describes the framework and the results of pilot tests in which students in one year at this school used the model in the three areas, earning scores of 90% or above on the assessments. The model was thus judged to be successful for students to demonstrate critical thinking skillsets in the course settings. Students consistently delivered each step of the thought process and were nearly as consistent in grasping the principles behind each step. As more critical thinking skillsets are implemented, a reinforcing network develops.
NASA Astrophysics Data System (ADS)
Nguyen, Duy
2012-07-01
Digital Elevation Models (DEMs) are used in many applications in the earth sciences, such as topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, seismic source modeling, etc. In recent years, a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques has evolved. InSAR has been shown to be an established technique for generating high-quality DEMs from spaceborne and airborne data, with advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data and compares a satellite SAR estimate of surface elevation with a DEM from a topographic map. The operational steps are performed in three major stages: data search, data processing, and product validation. The data processing stage is further divided into five steps: pre-processing, co-registration, interferogram generation, phase unwrapping, and geocoding. The data processing steps have been tested with ERS-1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results of applying the described processing steps to a real data set are presented.
Foss, A.; Cree, I.; Dolin, P.; Hungerford, J.
1999-01-01
BACKGROUND/AIM—There has been no consistent pattern reported on how mortality for uveal melanoma varies with age. This information can be useful to model the complexity of the disease. The authors have examined ocular cancer trends, as an indirect measure for uveal melanoma mortality, to see how rates vary with age and to compare the results with their other studies on predicting metastatic disease. METHODS—Age-specific mortality was examined for England and Wales, the USA, and Canada. A log-log model was fitted to the data. The slopes of the log-log plots were used as a measure of disease complexity and compared with the results of previous work on predicting metastatic disease. RESULTS—The log-log model provided a good fit for the US and Canadian data, but the observed rates deviated for England and Wales among people over the age of 65 years. The log-log model for the mortality data suggests that the underlying process depends upon four rate limiting steps, while a similar model for the incidence data suggests between three and four rate limiting steps. Further analysis of previous data on predicting metastatic disease on the basis of tumour size and blood vessel density would indicate a single rate limiting step between developing the primary tumour and developing metastatic disease. CONCLUSIONS—There is significant underreporting or underdiagnosis of ocular melanoma for England and Wales in those over the age of 65 years. In those under the age of 65, a model is presented for ocular melanoma oncogenesis requiring three rate limiting steps to develop the primary tumour and a fourth rate limiting step to develop metastatic disease. The three steps in the generation of the primary tumour involve two key processes—namely, growth and angiogenesis within the primary tumour. The step from development of the primary to development of metastatic disease is likely to involve a single rate limiting process. PMID:10216060
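For readers unfamiliar with the log-log (multistage, Armitage-Doll type) reading of such data, a minimal sketch follows; the age-specific rates are invented so that the fitted slope of about 3 corresponds to roughly four rate-limiting steps:

```python
import numpy as np

age = np.array([40, 45, 50, 55, 60, 65])            # age-band midpoints (years)
rate = np.array([0.6, 0.85, 1.2, 1.6, 2.1, 2.6])    # deaths per 100,000 (invented)

# Power-law fit: log(rate) vs log(age); slope k-1 is read as k rate-limiting steps.
slope, intercept = np.polyfit(np.log(age), np.log(rate), 1)
print(f"log-log slope = {slope:.2f} -> ~{slope + 1:.0f} rate-limiting steps")
```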
Electron correlations and pre-collision in the re-collision picture of high harmonic generation
NASA Astrophysics Data System (ADS)
Mašín, Zdeněk; Harvey, Alex G.; Spanner, Michael; Patchkovskii, Serguei; Ivanov, Misha; Smirnova, Olga
2018-07-01
We discuss the seminal three-step model and the re-collision picture in the context of high harmonic generation in molecules. In particular, we stress the importance of multi-electron correlation during the first and the third of the three steps of the process: (1) the strong-field ionization and (3) the recombination. We point out how an accurate account of multi-electron correlations during the third recombination step allows one to gauge the importance of pre-collision: the term coined by Eberly (n.d. private communication) to describe unusual pathways during the first, ionization, step.
Volume Diffusion Growth Kinetics and Step Geometry in Crystal Growth
NASA Technical Reports Server (NTRS)
Mazuruk, Konstantin; Ramachandran, Narayanan
1998-01-01
The role of step geometry in the two-dimensional stationary volume diffusion process used in crystal growth kinetics models is investigated. Three different interface shapes are used in this comparative study: a) a planar interface, b) a train of equidistant hemispherical bumps, and c) a train of right-angled steps. The ratio of the supersaturation to the diffusive flux at the step position is used as a control parameter. The value of this parameter can vary by as much as 50% between geometries. An approximate analytical formula is derived for the right-angled step geometry. In addition to kinetic models, this formula can be utilized in macrostep growth models. Finally, numerical modeling of the diffusive and convective transport for equidistant steps is conducted. In particular, the role of fluid flow resulting from the advancement of steps and its contribution to the transport of species to the steps is investigated.
Koken, Juline A.; Naar-King, Sylvie; Umasa, Sanya; Parsons, Jeffrey T.; Saengcharnchai, Pichai; Phanuphak, Praphan; Rongkavilit, Chokechai
2013-01-01
The provision of culturally relevant yet evidence-based interventions has become crucial to global HIV prevention and treatment efforts. In Thailand, where treatment for HIV has become widely available, medication adherence and risk behaviors remain an issue for Thai youth living with HIV. Previous research on motivational interviewing (MI) has proven effective in promoting medication adherence and HIV risk reduction in the United States. However, to test the efficacy of MI in the Thai context a feasible method for monitoring treatment fidelity must be implemented. This article describes a collaborative three-step process model for implementing the MI Treatment Integrity (MITI) across cultures while identifying linguistic issues that the English-originated MITI was not designed to detect as part of a larger intervention for Thai youth living with HIV. Step 1 describes the training of the Thai MITI coder, Step 2 describes identifying cultural and linguistic issues unique to the Thai context, and Step 3 describes an MITI booster training and incorporation of the MITI feedback into supervision and team discussion. Throughout the process the research team collaborated to implement the MITI while creating additional ways to evaluate in-session processes that the MITI is not designed to detect. The feasibility of using the MITI as a measure of treatment fidelity for MI delivered in the Thai linguistic and cultural context is discussed. PMID:22228776
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
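The reordered matrix-vector product can be sketched as follows (an illustration in SciPy, not Strandén and Lidauer's implementation): the coefficient matrix A = X'R^-1 X is never formed; its product with a vector is computed in three vector steps inside a Jacobi-preconditioned conjugate gradient solver. Dimensions, data, and the ridge term are invented.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
X = sprandom(2000, 300, density=0.01, format="csr", random_state=0)
r_inv = 1.0 / (1.0 + rng.random(2000))     # diagonal of R^-1, assumed
lam = 0.1                                  # ridge term keeps A positive definite

def matvec(v):
    t = X @ v                    # step 1: t = X v
    t = r_inv * t                # step 2: t = R^-1 t
    return X.T @ t + lam * v     # step 3: X' t (plus regularization)

A = LinearOperator((300, 300), matvec=matvec)
diag_A = X.power(2).T @ r_inv + lam        # diagonal of A, for Jacobi preconditioning
M = LinearOperator((300, 300), matvec=lambda v: v / diag_A)
b = X.T @ (r_inv * rng.random(2000))
x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"info = {info}")
```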
Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl
2017-12-01
In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisan methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operation conditions. The constructed computational fluid dynamics model was firstly validated through comparison with the data measurements of a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that under the same solar irradiation area of exposition and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient compared to the one-step solar air heating collector. This is because the air exposition with the surface of the collector plate for the three-step solar air heating collector former device was twice than the one-step solar air heating collector. Copyright © 2017 Elsevier Ltd. All rights reserved.
A step-by-step methodology for enterprise interoperability projects
NASA Astrophysics Data System (ADS)
Chalmeta, Ricardo; Pazos, Verónica
2015-05-01
Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to help enterprises achieve interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling, Architecture and Platform, and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, such as business, process, human resources, technology, knowledge and semantics.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
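Step (3) of this workflow, parameter calibration, can be sketched on a toy one-reaction system (not the valine/leucine network; the rate law, data, and optimizer settings are stand-ins):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, s, vmax, km):
    return [-vmax * s[0] / (km + s[0])]    # irreversible Michaelis-Menten rate law

t_obs = np.linspace(0.0, 10.0, 12)
true = (1.2, 0.8)                          # "unknown" (vmax, km) used to fake data
ref = solve_ivp(rhs, (0.0, 10.0), [5.0], t_eval=t_obs, args=true)
s_obs = ref.y[0] + 0.05 * np.random.default_rng(1).normal(size=t_obs.size)

def residuals(p):
    sim = solve_ivp(rhs, (0.0, 10.0), [5.0], t_eval=t_obs, args=tuple(p))
    return sim.y[0] - s_obs                # misfit between simulation and data

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=(1e-6, 10.0))
print("estimated (vmax, km):", fit.x)
```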
A kinetic study of struvite precipitation recycling technology with NaOH/Mg(OH)2 addition.
Yu, Rongtai; Ren, Hongqiang; Wang, Yanru; Ding, Lili; Geng, Jingji; Xu, Ke; Zhang, Yan
2013-09-01
Struvite precipitation recycling technology has received wide attention for removing ammonium and phosphate from wastewater. Past studies, however, focused on process efficiency rather than kinetics. A kinetic study is essential for the design and optimization of struvite precipitation recycling technology in practice. The kinetics of struvite with NaOH/Mg(OH)2 addition were studied by thermogravimetric analysis at three heating rates (5, 10, and 20 °C/min), using the Friedman and Ozawa-Flynn-Wall methods, respectively. The degradation of struvite with NaOH/Mg(OH)2 addition proceeded in three steps, and the stripping of ammonia from struvite occurred mainly in the first step. In the first step, the activation energy was about 70 kJ/mol, gradually declining as the reaction progressed. Model-fitting studies revealed the proper mechanism function for the struvite decomposition process with NaOH/Mg(OH)2 addition: f(α) = α^a(1 − α)^n, a Prout-Tompkins nth-order (Bna) model. Copyright © 2013 Elsevier Ltd. All rights reserved.
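The Friedman method mentioned above regresses ln(dα/dt) on 1/T at fixed conversion across the heating rates; here is a minimal sketch with invented TGA readings, chosen so the estimate lands near the ~70 kJ/mol reported for the first step:

```python
import numpy as np

R = 8.314   # J/(mol K)
# At fixed conversion alpha = 0.2: temperature and rate for the 5, 10, and
# 20 C/min runs (invented numbers, not the paper's measurements).
T = np.array([355.0, 362.0, 369.5])            # K
dadt = np.array([4.1e-4, 6.5e-4, 1.04e-3])     # dalpha/dt, 1/s

# Friedman: ln(dalpha/dt) = const - Ea/(R*T), so the slope vs 1/T gives -Ea/R.
slope, _ = np.polyfit(1.0 / T, np.log(dadt), 1)
print(f"Ea ~ {-slope * R / 1000:.0f} kJ/mol at alpha = 0.2")
```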
Network Modeling Reveals Steps in Angiotensin Peptide Processing
Schwacke, John H.; Spainhour, John Christian G.; Ierardi, Jessalyn L.; Chaves, Jose M.; Arthur, John M.; Janech, Michael G.; Velez, Juan Carlos Q.
2015-01-01
New insights into the intrarenal renin-angiotensin system (RAS) have modified our traditional view of the system. However, many finer details of this network of peptides and associated peptidases remain unclear. We hypothesized that a computational systems biology approach, applied to peptidomic data, could help to unravel the network of enzymatic conversions. We built and refined a Bayesian network model and a dynamic systems model starting from a skeleton created with established elements of the RAS and further developed it with archived MALDI-TOF mass spectra from experiments conducted in mouse podocytes exposed to exogenous angiotensin (Ang) substrates. The model-building process suggested previously unrecognized steps, three of which were confirmed in vitro, including the conversion of Ang(2-10) to Ang(2-7) by neprilysin (NEP), and Ang(1-9) to Ang(2-9) and Ang(1-7) to Ang(2-7) by aminopeptidase A (APA). These data suggest a wider role of NEP and APA in glomerular formation of bioactive Ang peptides and/or shunting their formation. Other steps were also suggested by the model and supporting evidence for those steps was evaluated using model-comparison methods. Our results demonstrate that systems biology methods applied to peptidomic data are effective in identifying novel steps in the Ang peptide processing network, and these findings improve our understanding of the glomerular RAS. PMID:23283355
A three-talk model for shared decision making: multistage consultation process.
Elwyn, Glyn; Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy
2017-11-06
Objectives To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design Multistage consultation process. Setting Key informant group, communities of interest, and survey of clinical specialties. Participants 19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on "team talk," "option talk," and "decision talk," to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Using a contextualized sensemaking model for interaction design: A case study of tumor contouring.
Aselmaa, Anet; van Herk, Marcel; Laprie, Anne; Nestle, Ursula; Götz, Irina; Wiedenmann, Nicole; Schimek-Jasch, Tanja; Picaud, Francois; Syrykh, Charlotte; Cagetti, Leonel V; Jolnerovski, Maria; Song, Yu; Goossens, Richard H M
2017-01-01
Sensemaking theories help designers understand the cognitive processes of a user when he/she performs a complicated task. This paper introduces a two-step approach of incorporating sensemaking support within the design of health information systems by: (1) modeling the sensemaking process of physicians while performing a task, and (2) identifying software interaction design requirements that support sensemaking based on this model. The two-step approach is presented based on a case study of the tumor contouring clinical task for radiotherapy planning. In the first step of the approach, a contextualized sensemaking model was developed to describe the sensemaking process based on the goal, the workflow and the context of the task. In the second step, based on a research software prototype, an experiment was conducted in which eight physicians each performed three contouring tasks. Four types of navigation interactions and five types of interaction sequence patterns were identified by analyzing the interaction log data gathered from those twenty-four cases. Further in-depth study of each of the navigation interactions and interaction sequence patterns in relation to the contextualized sensemaking model revealed five main areas for design improvements to increase sensemaking support. Outcomes of the case study indicate that the proposed two-step approach was beneficial for gaining a deeper understanding of the sensemaking process during the task, as well as for identifying design requirements for better sensemaking support. Copyright © 2016. Published by Elsevier Inc.
Shaw, Tim; Barnet, Stewart; Mcgregor, Deborah; Avery, Jennifer
2015-01-01
Online learning is a primary delivery method for continuing health education programs. It is critical that programs have curriculum objectives linked to educational models that support learning. Using a proven educational modelling process ensures that curriculum objectives are met and a solid basis for learning and assessment is achieved. The aim was to develop an educational design model that produces an educationally sound program development plan for use by anyone involved in online course development. We describe the development of a generic educational model designed for continuing health education programs. The Knowledge, Process, Practice (KPP) model is founded on recognised educational theory and online education practice. This paper presents a step-by-step guide to using this model for program development that ensures reliable learning and evaluation. The model supports a three-step approach, KPP, based on learning outcomes and supporting appropriate assessment activities. It provides a program structure for online or blended learning that is explicit, educationally defensible, and supports multiple assessment points for health professionals. The KPP model is based on best-practice educational design using a structure that can be adapted for a variety of online or flexibly delivered postgraduate medical education programs.
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); the decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
Self-Disclosure and Satisfaction in Marriage: The Relation Examined.
ERIC Educational Resources Information Center
Jorgensen, Stephen R.; Gaudy, Janis C.
1980-01-01
In tests of three models of self-disclosure and satisfaction in marriage, only the linear model achieved substantial support. Communication about relatively personal and intimate matters constitutes an important step in the process of need and goal fulfillment in marriage. (Author)
An intraorganizational model for developing and spreading quality improvement innovations
Kellogg, Katherine C.; Gainer, Lindsay A.; Allen, Adrienne S.; O'Sullivan, Tatum; Singer, Sara J.
2017-01-01
Background: Recent policy reforms encourage quality improvement (QI) innovations in primary care, but practitioners lack clear guidance regarding spread inside organizations. Purpose: We designed this study to identify how large organizations can facilitate intraorganizational spread of QI innovations. Methodology/Approach: We conducted ethnographic observation and interviews in a large, multispecialty, community-based medical group that implemented three QI innovations across 10 primary care sites using a new method for intraorganizational process development and spread. We compared quantitative outcomes achieved through the group’s traditional versus new method, created a process model describing the steps in the new method, and identified barriers and facilitators at each step. Findings: The medical group achieved substantial improvement using its new method of intraorganizational process development and spread of QI innovations: standard work for rooming and depression screening, vaccine error rates and order compliance, and Pap smear error rates. Our model details nine critical steps for successful intraorganizational process development (set priorities, assess the current state, develop the new process, and measure and refine) and spread (develop support, disseminate information, facilitate peer-to-peer training, reinforce, and learn and adapt). Our results highlight the importance of utilizing preexisting organizational structures such as established communication channels, standardized roles, common workflows, formal authority, and performance measurement and feedback systems when developing and spreading QI processes inside an organization. In particular, we detail how formal process advocate positions in each site for each role can facilitate the spread of new processes. Practice Implications: Successful intraorganizational spread is possible and sustainable. Developing and spreading new QI processes across sites inside an organization requires creating a shared understanding of the necessary process steps, considering the barriers that may arise at each step, and leveraging preexisting organizational structures to facilitate intraorganizational process development and spread. PMID:27428788
An Emerging Theoretical Model of Music Therapy Student Development.
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subjects used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
The Development of a Model of Culturally Responsive Science and Mathematics Teaching
ERIC Educational Resources Information Center
Hernandez, Cecilia M.; Morales, Amanda R.; Shroyer, M. Gail
2013-01-01
This qualitative theoretical study was conducted in response to the current need for an inclusive and comprehensive model to guide the preparation and assessment of teacher candidates for culturally responsive teaching. The process of developing a model of culturally responsive teaching involved three steps: a comprehensive review of the…
Stop Saying No: Start Empowering Copyright Role Models
ERIC Educational Resources Information Center
Disclafani, Carrie Bertling; Hall, Renee
2012-01-01
The Excelsior College Library is turning fearful faculty members into empowered copyright role models. Geared towards institutions operating without a copyright policy or department, this article outlines a three-step process for fostering faculty collaboration surrounding copyright practices: (1) Give faculty and course developers the tools and…
Method for modeling social care processes for national information exchange.
Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit
2012-01-01
Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.
Three-Step Validation of Exercise Behavior Processes of Change in an Adolescent Sample
ERIC Educational Resources Information Center
Rhodes, Ryan E.; Berry, Tanya; Naylor, Patti-Jean; Higgins, S. Joan Wharf
2004-01-01
Though the processes of change are conceived as the core constructs of the transtheoretical model (TTM), few researchers have examined their construct validity in the physical activity domain. Further, only 1 study was designed to investigate the processes of change in an adolescent sample. The purpose of this study was to examine the exercise…
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2013-07-01
The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families. Their modelling experience differed largely. The prediction exercise was organized in three steps: (1) for the 1st prediction modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available for a priori predictions in ungauged catchments). They did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modellers' decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3, when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd step, the progress in prediction quality could be evaluated in relation to individual modelling experience and costs of added information. We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion coefficient (D), partition coefficient (K_p,f) and convective mass transfer coefficient (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap, were also performed to acquire a better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
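A sketch of the two-step idea follows (coarse SSE grid over D, K_p,f and h, then least-squares refinement of all three simultaneously). A layered method-of-lines model stands in for the paper's analytical solution of Crank's equations; geometry, parameter ranges, and data are invented:

```python
import numpy as np
from itertools import product
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

Lf, nx = 1e-4, 12          # film thickness (m) and number of layers -- assumed
Vk = 2e-3                  # food volume per contact area (m) -- assumed
dx = Lf / nx
t_obs = np.linspace(0.5, 10.0, 8) * 86400      # sampling times (s)

def rhs(t, y, D, K, h):
    c, cf = y[:-1], y[-1]
    J = -D * np.diff(c) / dx                   # diffusive flux between layers
    Jout = h * (c[-1] / K - cf)                # convective flux into the food
    dc = (np.r_[0.0, J] - np.r_[J, Jout]) / dx # sealed face on the left
    return np.r_[dc, Jout / Vk]

def model(theta):
    y0 = np.r_[np.ones(nx), 0.0]               # uniform initial migrant load
    sol = solve_ivp(rhs, (0.0, t_obs[-1]), y0, t_eval=t_obs,
                    args=tuple(theta), method="BDF")
    return sol.y[-1] * Vk / Lf                 # fraction of initial mass migrated

data = model((8e-14, 5.0, 2e-7)) + np.random.default_rng(3).normal(0, 0.002, 8)

# Step 1: coarse grid search for the SSE-minimizing (D, K_p,f, h) combination.
grid = product((2e-14, 8e-14, 3e-13), (2.0, 5.0, 12.0), (5e-8, 2e-7, 8e-7))
best = min(grid, key=lambda th: np.sum((model(th) - data) ** 2))

# Step 2: least-squares refinement of all three parameters from the grid optimum.
fit = least_squares(lambda th: model(th) - data, x0=best, x_scale=np.abs(best),
                    bounds=([1e-15, 0.1, 1e-9], [1e-11, 100.0, 1e-5]))
print("grid start:", best, " refined:", fit.x)
```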
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. Dividing a complete machine learning process into three steps (data pre-treatment, feature extraction and dimension reduction, and system modeling), this paper provides a review of the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies. PMID:20191110
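The three steps map naturally onto a pipeline; here is a minimal scikit-learn sketch on generic synthetic data (not drawn from the review):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           random_state=0)           # stand-in for sensor spectra
pipe = make_pipeline(StandardScaler(),               # step 1: data pre-treatment
                     PCA(n_components=8),            # step 2: dimension reduction
                     SVC(kernel="rbf"))              # step 3: system modeling
print(cross_val_score(pipe, X, y, cv=5).mean())      # cross-validated accuracy
```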
NASA Astrophysics Data System (ADS)
Vanderborght, Jan; Priesack, Eckart
2017-04-01
The Soil Model Development and Intercomparison Panel (SoilMIP) is an initiative of the International Soil Modeling Consortium. Its mission is to foster the further development of soil models that can predict soil functions and their changes (i) due to soil use and land management and (ii) due to external impacts of climate change and pollution. Since soil functions and soil threats are diverse but linked with each other, the overall aim is to develop holistic models that represent the key functions of the soil system and the links between them. These models should be scaled up and integrated in terrestrial system models that describe the feedbacks between processes in the soil and the other terrestrial compartments. We propose and illustrate a few steps that could be taken to achieve these goals. A first step is the development of scenarios that compare simulations by models that predict the same or different soil services. Scenarios can be considered at three different levels of comparison: scenarios that compare the numerics (accuracy but also speed) of models, scenarios that compare the effect of differences in process descriptions, and scenarios that compare simulations with experimental data. A second step involves the derivation of metrics or summary statistics that effectively compare model simulations and disentangle parameterization from model concept differences. These metrics can be used to evaluate how more complex model simulations can be represented by simpler models using an appropriate parameterization. A third step relates to the parameterization of models. Application of simulation models implies that appropriate model parameters have to be defined for a range of environmental conditions and locations. Spatial modelling approaches are used to derive parameter distributions. Considering that soils and their properties emerge from the interaction between physical, chemical and biological processes, the combination of spatial models with process models would lead to consistent parameter distributions and correlations and could potentially represent self-organizing processes in soils and landscapes.
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described, and diagnosis models that include process faults are presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTM) operating in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
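The idea behind on-line parameter estimation for fault detection can be sketched on a toy first-order plant (not the engine model of the paper): a recursive-least-squares estimate of an actuator gain is compared against its nominal value, and a persistent deviation is flagged.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b_nom = 0.9, 1.0                 # plant: y[k+1] = a*y[k] + b*u[k] + noise
theta = np.array([0.5, 0.5])        # RLS estimate of (a, b)
P = np.eye(2) * 100.0               # estimate covariance
lam = 0.98                          # forgetting factor

y = 0.0
for k in range(400):
    b = b_nom if k < 200 else 0.6   # actuator degradation injected at k = 200
    u = rng.normal()
    y_next = a * y + b * u + 0.01 * rng.normal()
    phi = np.array([y, u])          # regressor
    err = y_next - phi @ theta      # one-step prediction error
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * err      # standard RLS update
    P = (P - np.outer(gain, phi @ P)) / lam
    if k > 50 and abs(theta[1] - b_nom) > 0.2:
        print(f"k={k}: estimated actuator gain {theta[1]:.2f} -> fault flagged")
        break
    y = y_next
```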
NASA Technical Reports Server (NTRS)
Williams, R. M.; Ryan, M. A.; Saipetch, C.; LeDuc, H. G.
1996-01-01
The exchange current observed at porous metal electrodes on sodium or potassium beta-alumina solid electrolytes in alkali metal vapor is quantitatively modeled with a multi-step process, in good agreement with experimental results.
Demonstration of the feasibility of automated silicon solar cell fabrication
NASA Technical Reports Server (NTRS)
Taylor, W. E.; Schwartz, F. M.
1975-01-01
A study effort was undertaken to determine the process steps and design requirements of an automated silicon solar cell production facility. The key process steps were identified, and a laboratory model was conceptually designed to demonstrate the feasibility of automating the silicon solar cell fabrication process. A detailed laboratory model was designed to demonstrate those functions most critical to the question of whether solar cell fabrication can feasibly be automated. The study and conceptual design have established the technical feasibility of automating the solar cell manufacturing process to produce low-cost solar cells with improved performance. Estimates predict an automated process throughput of 21,973 kilograms of silicon a year on a three-shift, 49-week basis, producing 4,747,000 hexagonal cells (38 mm/side), a total of 3,373 kilowatts, at an estimated manufacturing cost of $0.866 per cell or $1.22 per watt.
NASA Astrophysics Data System (ADS)
Boyle, Liza
Dust accumulation, or soiling, on solar energy harvesting systems can cause significant losses that reduce the power output of the system, increase its pay-back time, and reduce confidence in solar energy overall. Developing a method of estimating soiling losses could greatly improve estimates of solar energy system outputs, improve operation and maintenance of solar systems, and improve siting of solar energy systems. This dissertation aims to develop a soiling model by collecting ambient soiling data as well as other environmental data and fitting a model to these data. In general, a process-level approach is taken to estimating soiling. First, a comparison is made between the mass of deposited particulates and transmission loss. Transmission loss is the reduction in light that a solar system would see due to soiling, and mass accumulation represents the level of soiling in the system. This experiment is first conducted at two sites in the Front Range of Colorado and then expanded to three additional sites. Second, mass accumulation is examined as a function of airborne particulate matter (PM) concentrations, airborne size distributions, and meteorological data. In-depth analysis of this process step is done at the first two sites in Colorado, and a more general analysis is done at the three additional sites. This step is identified as the less understood of the two, but the results still allow a general soiling model to be developed. Third, these two process steps are combined, and their spatial variability is examined. The three additional sites (an additional site in the Front Range of Colorado, a site in Albuquerque, New Mexico, and a site in Cocoa, Florida) represent a much more spatially and climatically diverse set of locations than the original two sites and provide a much broader sample space in which to develop the combined soiling model. Finally, a few additional parameters (precipitation, micro-meteorology, and some sampling artifacts) are briefly examined to provide broader context for these results and to help future researchers understand the strengths and weaknesses of this dissertation and the results presented within.
D. Todd Jones-Farrand; Todd M. Fearer; Wayne E. Thogmartin; Frank R. Thompson; Mark D. Nelson; John M. Tirpak
2011-01-01
Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and...
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed, four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
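A minimal sketch of the POD step on which such a ROM is built: collect snapshots of the spatially discretized state, take an SVD, and keep the leading modes as a projection basis. The snapshot matrix below is random placeholder data, not the PSA model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 120))   # hypothetical snapshots, one state per column

# Center the snapshots, then extract POD modes via the SVD.
U, s, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                        full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% of the energy
Phi = U[:, :r]                                # POD basis

# Reduced state: a = Phi^T x. Projecting the discretized PDE system onto
# Phi yields the much smaller DAE system used per operating step.
print(f"kept {r} of {X.shape[1]} modes")
```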
Development of a definition, classification system, and model for cultural geology
NASA Astrophysics Data System (ADS)
Mitchell, Lloyd W., III
The concept for this study is based upon a personal interest by the author, an American Indian, in promoting cultural perspectives in undergraduate college teaching and learning environments. Most academicians recognize that merged fields can enhance undergraduate curricula. However, conflict may occur when instructors attempt to merge social science fields such as history or philosophy with geoscience fields such as mining and geomorphology. For example, ideologies of Earth structures derived from scientific methodologies may conflict with historical and spiritual understandings of Earth structures held by American Indians. Specifically, this study addresses the problem of how to combine cultural studies with the geosciences into a new merged academic discipline called cultural geology. This study further attempts to develop the merged field of cultural geology using an approach consisting of three research foci: a definition, a classification system, and a model. Literature reviews were conducted for all three foci. Additionally, to better understand merged fields, a literature review was conducted specifically for academic fields that merged social and physical sciences. Methodologies concentrated on the three research foci: definition, classification system, and model. The definition was derived via a two-step process. The first step, developing keyword hierarchical ranking structures, was followed by creating and analyzing semantic word meaning lists. The classification system was developed by reviewing 102 classification systems and incorporating selected components into a system framework. The cultural geology model was also created utilizing a two-step process. A literature review of scientific models was conducted. Then, the definition and classification system were incorporated into a model felt to reflect the realm of cultural geology. A course syllabus was then developed that incorporated the resulting definition, classification system, and model. This study concludes that cultural geology can be introduced as a merged discipline by using a three-foci framework consisting of a definition, classification system, and model. Additionally, this study reveals that cultural beliefs, attitudes, and behaviors can be incorporated into a geology course during the curriculum development process, using an approach known as 'learner-centered'. This study further concludes that cultural beliefs, derived from class members, are an important source of curriculum materials.
ERIC Educational Resources Information Center
Reese, Simon R.
2015-01-01
This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…
Functional-to-form mapping for assembly design automation
NASA Astrophysics Data System (ADS)
Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.
2017-11-01
Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model, and the two-step mapping mechanism. The function-to-form mapping is divided into two steps: the first-step mapping, from function to behavior, and the second-step mapping, from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively but quite difficult to automate fully, so manual, semi-automatic, and automatic mapping, together with interactive modification of the mapping model, are studied. The F-F mapping process for a mechanical hand is illustrated to verify the design methodology.
Carol Clausen
2004-01-01
In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...
Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.
Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran
2007-08-01
The main objectives were to examine the fracture mechanism and fracture process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the R-T(2D) code, the fracture mechanism and process of a three-unit yttria-tetragonal zirconia polycrystal ceramic (Y-TZP) FPD framework were simulated under static loading. In addition, the fracture pattern obtained using the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The results revealed that the framework fracture pattern obtained using the numerical simulation agreed with that observed in the previous laboratory test. Quasi-photoelastic stress fringe patterns and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed both step-by-step and step-in-step. Based on the findings in the current study, the R-T(2D) code seems suitable for use as a complement to other tests and clinical observations in studying stress distribution, fracture mechanisms and fracture processes in ceramic FPD frameworks.
Do, Thao Thi; Van Hooghten, Rob; Van den Mooter, Guy
2017-04-15
The aggregation of three different cyclodextrins (CDs): 2-hydroxypropyl-β-cyclodextrin (HP-β-CD), 2-hydroxypropyl-γ-cyclodextrin (HP-γ-CD) and sulfobutylether-β-cyclodextrin (SBE-β-CD) was studied. The critical aggregation concentration (cac) of these three CDs is quite similar and is situated at ca. 2% (m/v). There was only a small difference in the cac values determined by DLS and ¹H NMR. DLS measurements revealed that CDs in solution have three size populations, one of which is that of a single CD molecule. The size of aggregates determined by TEM appears to be similar to the size of the aggregates in the second size distribution determined by DLS. Isodesmic and K2-K self-assembly models were used for studying the aggregation process of HP-β-CD, HP-γ-CD and SBE-β-CD. The results showed that the aggregation process of these CDs is a cooperative one, where the first step of aggregation is less favorable than the next steps. The determined thermodynamic parameters showed that the aggregation process of all three CDs is spontaneous and exothermic and is driven by an increase of the entropy of the environment. Copyright © 2017 Elsevier B.V. All rights reserved.
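A minimal sketch of the isodesmic (equal-K) model referenced above, which relates total concentration to free monomer via C_tot = c1/(1 - K*c1)^2; in the cooperative K2-K variant the first step has a smaller constant K2 < K. Both values below are illustrative, not fitted values from the paper:

```python
from scipy.optimize import brentq

# Isodesmic self-assembly: every aggregation step shares one constant K.
K = 50.0      # M^-1, hypothetical equilibrium constant
C_tot = 0.02  # M, total CD concentration (illustrative)

f = lambda c1: c1 / (1.0 - K * c1) ** 2 - C_tot
c1 = brentq(f, 1e-12, (1.0 - 1e-9) / K)   # solve for free monomer (K*c1 < 1)
print(f"fraction of CD present as free monomer: {c1 / C_tot:.2f}")
```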
NASA Astrophysics Data System (ADS)
Valle-Hernández, Julio; Romero-Paredes, Hernando; Pacheco-Reyes, Alejandro
2017-06-01
In this paper, the simulation of steam hydrolysis for hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 to a lower-valence cerium oxide at high temperature, where concentrated solar energy is used as the source of heat, and of the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. The modeling of the endothermic reduction step was presented at SolarPACES 2015. This work shows the modeling of the exothermic step: the hydrolysis of the cerium(III) oxide to form H2 and the corresponding initial cerium oxide, carried out at lower temperature inside the solar reactor. For this model, three sections of the pipe where the reaction occurs were considered: the steam inlet, the porous medium, and the outlet of the hydrogen produced. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and allow the fluid dynamics and heat transfer within the pipe to be obtained. This work is part of the project "Solar Fuels and Industrial Processes" from the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).
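For reference, the two steps of the ceria cycle described above are commonly written as follows (standard stoichiometry with Ce2O3 as the lower-valence oxide; this work models the second, exothermic step):

```latex
\begin{align}
  2\,\mathrm{CeO_2} &\xrightarrow{\ \text{solar heat}\ }
      \mathrm{Ce_2O_3} + \tfrac{1}{2}\,\mathrm{O_2}
      && \text{(endothermic reduction)}\\
  \mathrm{Ce_2O_3} + \mathrm{H_2O} &\longrightarrow
      2\,\mathrm{CeO_2} + \mathrm{H_2}
      && \text{(exothermic hydrolysis)}
\end{align}
```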
Analysis, design, fabrication, and performance of three-dimensional braided composites
NASA Astrophysics Data System (ADS)
Kostar, Timothy D.
1998-11-01
Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2013-01-01
Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17.2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2.0 and 3.5 and between 0.02 and 0.1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
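A minimal sketch of the step I thermal-time treatment described above: accumulate thermal time above the base temperature and map it to the percentage of sensitive seeds with a three-parameter Gompertz curve. Only Tb = 17.2 °C comes from the abstract; the Gompertz parameters are illustrative:

```python
import numpy as np

T_B = 17.2  # base temperature (deg C) for sensitivity induction (from the abstract)

def thermal_time(temp_c, days):
    """Accumulated thermal time (deg C * day) above the base temperature."""
    return max(temp_c - T_B, 0.0) * days

def gompertz(theta, a=100.0, b=5.0, c=0.01):
    """Three-parameter Gompertz curve: % of seeds made sensitive (illustrative)."""
    return a * np.exp(-b * np.exp(-c * theta))

for T in (20.0, 30.0, 40.0):
    print(f"{T} C: {gompertz(thermal_time(T, 60)):5.1f} % sensitive after 60 d")
```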
Graphical modeling and query language for hospitals.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of a hospital's processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing the user to specify the loosely-defined sequence of steps to be performed in the clinical process. The main advantages of PQL lie in two areas - usability and efficiency: 1) the view of data through the "glasses" of a familiar process; 2) simple and easy-to-perceive means of setting filtering conditions that require no more expertise than using spreadsheet applications; 3) a dynamic response to each step in the construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) means of filtering and data retrieval that execute queries in O(n) time in the size of the dataset. We plan to continue this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool with physicians from several hospitals in Latvia, working with real data from these hospitals. Our third step is to develop an efficient implementation of the query language.
Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes
NASA Astrophysics Data System (ADS)
Octova, A.; Sule, R.
2018-04-01
Travel-time cross-hole seismic tomography is applied to describe the structure of the subsurface. Sources are placed in one borehole and receivers in another. First-arrival travel times recorded by each receiver are used as the input data to the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters, with configurations of 24 and 45 receivers. The second step is applying the inversion process to field data from five borehole pairs. The last step is testing the quality of the tomograms with a resolution test. Data processing using the FAST software produces a well-defined shape that resembles the initial synthetic model when 45 receivers are used. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In resolution tests using a checkerboard, anomalies as small as 2 m x 2 m can still be identified. Travel-time cross-hole seismic tomography analysis shows this method is very good at describing subsurface structure and layer boundaries; the size and position of anomalies can be recognized and interpreted easily.
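A minimal sketch of the linearized travel-time inversion underlying such a study, with straight rays on a small 2-D grid between two boreholes; the geometry and velocities are made up, and FAST itself uses more sophisticated ray tracing:

```python
import numpy as np

nx, nz = 10, 10
s_true = np.full((nz, nx), 1 / 1500.0)   # slowness (s/m), background velocity
s_true[4:6, 4:6] = 1 / 900.0             # low-velocity anomaly (e.g. a cavity)

def ray_row(src_z, rec_z, dx=1.0):
    """Path length of a straight source->receiver ray in each grid cell."""
    row = np.zeros(nx * nz)
    for i in range(nx):                              # march across columns
        z = src_z + (rec_z - src_z) * (i + 0.5) / nx
        row[int(z) * nx + i] = np.hypot(dx, (rec_z - src_z) / nx)
    return row

G = np.array([ray_row(sz, rz) for sz in range(nz) for rz in range(nz)])
t = G @ s_true.ravel()                               # synthetic first arrivals
s_est, *_ = np.linalg.lstsq(G, t, rcond=None)        # damped least squares
print("recovered anomaly slowness:", s_est.reshape(nz, nx)[4:6, 4:6].mean())
```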
Process Simulation of Aluminium Sheet Metal Deep Drawing at Elevated Temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winklhofer, Johannes; Trattnig, Gernot; Lind, Christoph
Lightweight design is essential for an economic and environmentally friendly vehicle. Aluminium sheet metal is well known for its ability to improve the strength to weight ratio of lightweight structures. One disadvantage of aluminium is that it is less formable than steel. Therefore complex part geometries can only be realized by expensive multi-step production processes. One method for overcoming this disadvantage is deep drawing at elevated temperatures. In this way the formability of aluminium sheet metal can be improved significantly, and the number of necessary production steps can thereby be reduced. This paper introduces deep drawing of aluminium sheet metal at elevated temperatures, a corresponding simulation method, a characteristic process and its optimization. The temperature and strain rate dependent material properties of a 5xxx series alloy and their modelling are discussed. A three dimensional thermomechanically coupled finite element deep drawing simulation model and its validation are presented. Based on the validated simulation model an optimised process strategy regarding formability, time and cost is introduced.
Study on formation of step bunching on 6H-SiC (0001) surface by kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Yuan; Chen, Xuejiang; Su, Juan
2016-05-01
The formation and evolution of step bunching during step-flow growth of 6H-SiC (0001) surfaces were studied by a three-dimensional kinetic Monte Carlo (KMC) method and compared with an analytic model based on the theory of Burton-Cabrera-Frank (BCF). In the KMC model the crystal lattice was represented by a structured mesh which fixed the positions of atoms and interatomic bonding. The events considered in the model were adatom adsorption and diffusion on the terrace, and adatom attachment, detachment and interlayer transport at the step edges. In addition, the effects of Ehrlich-Schwoebel (ES) barriers at downward step edges and incorporation barriers at upward step edges were also considered. In order to obtain more detailed information on the behavior of atoms at the crystal surface, silicon and carbon atoms were treated as the minimal diffusing species. KMC simulation results showed that multiple-height steps were formed on vicinal surfaces oriented toward the [11̄00] or [112̄0] directions, and the formation mechanism of the step bunching was then analyzed. Finally, to further analyze the formation process of step bunching, a one-dimensional BCF analytic model with ES and incorporation barriers was used and solved numerically. In the BCF model, periodic boundary conditions (PBC) were applied, and the parameters corresponded to those used in the KMC model. The evolution character of the step bunching was consistent with the results obtained by KMC simulation.
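A minimal 1-D sketch of the instability that produces bunching in such models: if steps collect adatoms more readily from one adjacent terrace than the other (an ES-type asymmetry), a near-uniform terrace-width distribution broadens into bunches. All parameters are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 60, 0.01
k_front, k_back = 0.2, 0.8    # back terrace feeds steps more; promotes bunching

w = 1.0 + 0.01 * rng.standard_normal(n)   # terrace widths, near-uniform start
for _ in range(4000):
    v = k_front * w + k_back * np.roll(w, 1)   # velocity of each step
    w += dt * (np.roll(v, -1) - v)             # dw_i/dt = v_{i+1} - v_i (PBC)
    w = np.clip(w, 1e-6, None)                 # steps are not allowed to cross

print("terrace-width spread (bunching indicator):", round(float(w.std()), 3))
```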
Reflective Process in Play Therapy: A Practical Model for Supervising Counseling Students
ERIC Educational Resources Information Center
Allen, Virginia B.; Folger, Wendy A.; Pehrsson, Dale-Elizabeth
2007-01-01
Counselor educators and other supervisors, who work with graduate student counseling interns utilizing Play Therapy, should be educated, grounded, and trained in theory, supervision, and techniques specific to Play Therapy. Unfortunately, this is often not the case. Therefore, a three-step model was created to assist those who do not have specific…
Computational mate choice: theory and empirical evidence.
Castellano, Sergio; Cadeddu, Giorgia; Cermelli, Paolo
2012-06-01
The present review is based on the thesis that mate choice results from information-processing mechanisms governed by computational rules and that, to understand how females choose their mates, we should identify which are the sources of information and how they are used to make decisions. We describe mate choice as a three-step computational process and for each step we present theories and review empirical evidence. The first step is a perceptual process. It describes the acquisition of evidence, that is, how females use multiple cues and signals to assign an attractiveness value to prospective mates (the preference function hypothesis). The second step is a decisional process. It describes the construction of the decision variable (DV), which integrates evidence (private information by direct assessment), priors (public information), and value (perceived utility) of prospective mates into a quantity that is used by a decision rule (DR) to produce a choice. We make the assumption that females are optimal Bayesian decision makers and we derive a formal model of DV that can explain the effects of preference functions, mate copying, social context, and females' state and condition on the patterns of mate choice. The third step of mating decision is a deliberative process that depends on the DRs. We identify two main categories of DRs (absolute and comparative rules), and review the normative models of mate sampling tactics associated with them. We highlight the limits of the normative approach and present a class of computational models (sequential-sampling models) that are based on the assumption that DVs accumulate noisy evidence over time until a decision threshold is reached. These models force us to rethink the dichotomy between comparative and absolute decision rules, between discrimination and recognition, and even between rational and irrational choice. Since they have a robust biological basis, we think they may represent a useful theoretical tool for behavioural ecologists interested in integrating proximate and ultimate causes of mate choice. Copyright © 2012 Elsevier B.V. All rights reserved.
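A minimal sketch of the sequential-sampling idea described above: a decision variable accumulates noisy evidence about a prospective mate until it crosses an accept or reject threshold, yielding both a choice and a decision time. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_decision(drift, noise=1.0, threshold=10.0, dt=0.01):
    """Accumulate noisy evidence until the decision variable hits a bound."""
    dv, t = 0.0, 0.0
    while abs(dv) < threshold:
        dv += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("accept" if dv > 0 else "reject"), t

choices = [sample_decision(drift=1.5) for _ in range(200)]
p_accept = np.mean([c == "accept" for c, _ in choices])
mean_rt = np.mean([t for _, t in choices])
print(f"P(accept) = {p_accept:.2f}, mean decision time = {mean_rt:.1f}")
```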
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetic analysis of Li-Zn ferrite synthesis was studied using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step reaction scheme consisting of sequential reaction steps. It was established that the best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were found and discussed.
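For reference, the two diffusion-limited model functions named above, in their usual integral form g(alpha) = k*t; the rate constant below is illustrative, not the fitted value:

```python
def g_jander(alpha):
    """Jander three-dimensional diffusion model (D3)."""
    return (1.0 - (1.0 - alpha) ** (1.0 / 3.0)) ** 2

def g_ginstling_brounshtein(alpha):
    """Ginstling-Brounshtein three-dimensional diffusion model (D4)."""
    return 1.0 - 2.0 * alpha / 3.0 - (1.0 - alpha) ** (2.0 / 3.0)

# Given an assumed rate constant k, the time to reach conversion alpha in
# each regime is t = g(alpha) / k.
k = 1e-3  # min^-1, hypothetical
for alpha in (0.1, 0.5, 0.9):
    print(alpha, g_jander(alpha) / k, g_ginstling_brounshtein(alpha) / k)
```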
NASA Astrophysics Data System (ADS)
Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof
2018-05-01
In this paper, the numerical analysis and computer simulation of the deep drawing process is presented. An incremental model of the process in updated Lagrangian formulation, with regard for geometrical and physical nonlinearity, has been evaluated by variational and finite element methods. Frederic Barlat's model, which takes into consideration the anisotropy of materials in three principal and six tangent directions, has been used. The application developed in the Ansys/LS-Dyna program allows a complex step-by-step analysis and prediction of the shape, dimensions, and stress and strain state of the drawpiece. The paper presents the influence of selected anisotropy parameters in Barlat's model on the drawpiece, including its height, sheet thickness and maximum drawing force. The important factors determining the proper formation of the drawpiece and the ways of determining them have been described.
Modeling the process leading to abortion: an application to French survey data.
Rossier, Clémentine; Michelot, François; Bajos, Nathalie
2007-09-01
In this study, we model women's recourse to induced abortion as resulting from a process that starts with sexual intercourse and contraceptive use (or nonuse), continues with the occurrence of an unintended pregnancy, and ends with the woman's decision to terminate the pregnancy and her access to abortion services. Our model includes two often-neglected proximate determinants of abortion: sexual practices and access to abortion services. We relate three sociodemographic characteristics--women's educational level, their relationship status, and their age--step by step to the stages of the abortion process. We apply our framework using data from the COCON survey, a national survey on reproductive health conducted in France in 2000. Our model shows that sociodemographic variables may have opposite impacts as the abortion process unfolds. For example, women's educational level can be positively linked to the probability of practicing contraception but negatively linked to the propensity to carry the unintended pregnancy to term. This conceptual framework brings together knowledge that is currently dispersed in the literature and helps to identify the source of abortion-rate differentials.
Three-dimensional modelling of slope stability using the Local Factor of Safety concept
NASA Astrophysics Data System (ADS)
Moradi, Shirin; Huisman, Sander; Beck, Martin; Vereecken, Harry; Class, Holger
2017-04-01
Slope stability is governed by coupled hydrological and mechanical processes. It depends on the effective stress, which in turn depends on the weight of the soil and the matric potential. Therefore, changes in water content and matric potential associated with infiltration will affect slope stability. Most available models describing these coupled hydro-mechanical processes rely on a one- or two-dimensional representation of hydrological and mechanical properties and processes, which obviously is a strong simplification in many applications. Therefore, the aim of this work is to develop a three-dimensional hydro-mechanical model that is able to capture the effect of spatial and temporal variability of both mechanical and hydrological parameters on slope stability. For this, we rely on DuMux, a free and open-source simulator for flow and transport processes in porous media that facilitates coupling of different model approaches and offers flexibility for model development. We use the Richards equation to model unsaturated water flow. The simulated water content and matric potential distribution is used to calculate the effective stress. We only consider linear elasticity and solve for statically admissible fields of stress and displacement without invoking failure or the redistribution of post-failure stress or displacement. The Local Factor of Safety concept is used to evaluate slope stability in order to overcome some of the main limitations of commonly used methods based on limit equilibrium considerations. In a first step, we compared our model implementation with a 2D benchmark model implemented in COMSOL Multiphysics. In a second step, we present in-silico experiments with the newly developed 3D model to show the effect of slope morphology, spatial variability in hydraulic and mechanical material properties, and spatially variable soil depth on simulated slope stability. It is expected that this improved physically-based three-dimensional hydro-mechanical model will provide more reliable slope instability predictions in complex situations.
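A minimal sketch of a pointwise Local Factor of Safety evaluation of the kind described above: the ratio of available Coulomb shear strength to the current maximum shear stress, with a Bishop-type suction contribution to effective stress. All parameter values are illustrative, and the paper couples this to a Richards-equation solver rather than to fixed stresses:

```python
import numpy as np

def local_factor_of_safety(sigma1, sigma3, psi, c=5e3, phi_deg=30.0,
                           gamma_w=9810.0, chi=0.5):
    """sigma1, sigma3: total principal stresses (Pa); psi: matric head (m,
    negative when unsaturated); chi: Bishop effective-stress parameter
    (assumed constant here)."""
    suction_stress = chi * gamma_w * psi          # negative psi adds strength
    s1, s3 = sigma1 - suction_stress, sigma3 - suction_stress
    phi = np.radians(phi_deg)
    tau = (s1 - s3) / 2.0                         # current max shear stress
    tau_star = c * np.cos(phi) + (s1 + s3) / 2.0 * np.sin(phi)  # strength
    return tau_star / tau                         # LFS > 1: locally stable

print(local_factor_of_safety(8e4, 3e4, psi=-1.0))
```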
Formation of three-dimensional fetal myocardial tissue cultures from rat for long-term cultivation.
Just, Lothar; Kürsten, Anne; Borth-Bruhns, Thomas; Lindenmaier, Werner; Rohde, Manfred; Dittmar, Kurt; Bader, Augustinus
2006-08-01
Three-dimensional cardiomyocyte cultures offer new possibilities for the analysis of cardiac cell differentiation, spatial cellular arrangement, and time-specific gene expression in a tissue-like environment. We present a new method for generating homogeneous and robust cardiomyocyte tissue cultures with good long-term viability. Ventricular heart cells prepared from fetal rats at embryonic day 13 were cultured in a scaffold-free two-step process. To optimize the cell culture model, several digestion protocols and culture conditions were tested. After digestion of the fetal cardiac ventricles, the resultant suspension of isolated cardiocytes was shaken to initialize cell aggregate formation. In the second step, these three-dimensional cell aggregates were transferred onto a microporous membrane to allow further microstructure formation. Autonomously beating cultures possessed more than 25 cell layers and a homogeneous distribution of cardiomyocytes without central necrosis after 8 weeks in vitro. The cardiomyocytes showed contractile elements, desmosomes, and gap junctions, as analyzed by immunohistochemistry and electron microscopy. The beat frequency could be modulated by adrenergic agonists and antagonists. Adenoviral green fluorescent protein transfer into cardiomyocytes was possible and highly effective. This three-dimensional tissue model proved to be useful for studying cell-cell interactions and cell differentiation processes in a three-dimensional cell arrangement.
Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V
2015-01-01
Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747
NASA Astrophysics Data System (ADS)
Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda
2018-05-01
Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g. ring rolling), casting and heat treatments. When machining these kinds of components, distortions arise due to the redistribution of the residual stresses built up by the foregoing process history of the material. If distortions are excessive, they can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. In this paper, the problem of distortions in large thin-walled aeroengine components made of nickel superalloys has been addressed. In order to estimate distortions on inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis has been developed for a real industrial test case. The whole process history has been taken into account by developing FEM models of the ring rolling process and the heat treatments. Three different ring rolling strategies have been studied, and the combination of related parameters which allows the best dimensional accuracy to be obtained has been found. Furthermore, grain size evolution and recrystallization phenomena during the manufacturing process have been numerically investigated using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions have been simulated by boolean trimming: one-step and multi-step analyses have been performed. The multi-step procedure has made it possible to choose the material removal sequence that minimizes machining distortions.
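For reference, the JMAK recrystallization law used for the microstructure evolution above, in its common form X = 1 - exp(-k t^n); k and n here are illustrative, not the fitted values:

```python
import numpy as np

def jmak_fraction(t, k=0.05, n=2.0):
    """JMAK law: recrystallized volume fraction after time t (s)."""
    return 1.0 - np.exp(-k * t ** n)

for t in (1.0, 5.0, 10.0):
    print(f"t = {t:4.1f} s -> recrystallized fraction X = {jmak_fraction(t):.2f}")
```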
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from the basic optical defocus principle, an optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
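A minimal sketch of the fitting idea: estimate the in-focus motor position from three (position, sharpness) samples. A quadratic in the reciprocal sharpness is used here as a stand-in for the paper's optical defocus fitting model:

```python
import numpy as np

def estimate_focus(positions, sharpness):
    """Fit 1/sharpness vs position with a parabola; its vertex approximates
    the in-focus motor position (stand-in for the ODFM model)."""
    a, b, _ = np.polyfit(positions, 1.0 / np.asarray(sharpness), 2)
    return -b / (2.0 * a)

pos = np.array([10.0, 20.0, 30.0])      # stepping-motor positions (made up)
sharp = np.array([0.21, 0.48, 0.30])    # contrast focus measures (made up)
print("estimated in-focus position:", round(estimate_focus(pos, sharp), 1))
```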
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the COSMO atmospheric model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root-mean-square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
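A minimal sketch of the three disaggregation steps for a single 2-D field; the deterministic rule and the noise model are illustrative stand-ins for the trained rules of the paper:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)
coarse = rng.standard_normal((10, 10))      # coarse atmospheric field
elevation = rng.standard_normal((80, 80))   # high-res surface predictor

# Step 1: smooth interpolation to the target grid (quadratic spline).
fine = zoom(coarse, 8, order=2)

# Step 2: 'deterministic' correction from a high-resolution surface
# variable (hypothetical linear rule, e.g. a lapse-rate-like adjustment).
fine += 0.1 * (elevation - elevation.mean())

# Step 3: autoregressive noise to restore unresolved subgrid variability.
noise = rng.standard_normal(fine.shape)
for axis in (0, 1):                          # crude AR(1)-style smoothing
    noise = 0.7 * np.roll(noise, 1, axis=axis) + 0.3 * noise
fine += 0.05 * noise
print(fine.shape)                            # (80, 80)
```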
ERIC Educational Resources Information Center
Shin, Dong Sun; Jang, Hae Gwon; Hwang, Sung Bae; Har, Dong-Hwan; Moon, Young Lae; Chung, Min Suk
2013-01-01
In the Visible Korean project, serially sectioned images of the pelvis were made from a female cadaver. Outlines of significant structures in the sectioned images were drawn and stacked to build surface models. To improve the accessibility and informational content of these data, a five-step process was designed and implemented. First, 154 pelvic…
The contribution of temporary storage and executive processes to category learning.
Wang, Tengfei; Ren, Xuezhu; Schweizer, Karl
2015-09-01
Three distinctly different working memory processes, temporary storage, mental shifting and inhibition, were proposed to account for individual differences in category learning. A sample of 213 participants completed a classic category learning task and two working memory tasks that were experimentally manipulated for tapping specific working memory processes. Fixed-links models were used to decompose data of the category learning task into two independent components representing basic performance and improvement in performance in category learning. Processes of working memory were also represented by fixed-links models. In a next step, the three working memory processes were linked to components of category learning. Results from modeling analyses indicated that temporary storage had a significant effect on basic performance and shifting had a moderate effect on improvement in performance. In contrast, inhibition showed no effect on any component of the category learning task. These results suggest that temporary storage and the shifting process play different roles in the course of acquiring new categories. Copyright © 2015 Elsevier B.V. All rights reserved.
Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fernando
2003-01-01
In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step-up and step-down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected together with a trailing set of data collected previously. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC is removed from each set of data, and then the data are passed through a window, followed by calculation of spectra for each set. In order to extract features, the signal power, peak, and spectrum are plotted vs. time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
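A minimal sketch of the described feature-extraction loop: slide a trailing window over the signal, remove the DC component, apply a window function, and track spectral features over time. The signal below is a simplified stand-in for the RC-network model:

```python
import numpy as np

fs, tau = 100.0, 2.0                       # sample rate (Hz), RC time constant (s)
t = np.arange(0, 12 * tau, 1 / fs)
signal = 1 - np.exp(-t / tau)              # step-up (charging) segment
signal += 0.01 * np.random.default_rng(4).standard_normal(t.size)

win_len = int(2 * tau * fs)                # trailing window: two time constants
window = np.hamming(win_len)
peaks, power = [], []
for i in range(win_len, t.size):
    seg = signal[i - win_len:i]
    seg = (seg - seg.mean()) * window      # DC removal, then windowing
    spec = np.abs(np.fft.rfft(seg))
    peaks.append(spec.max())               # spectral peak feature
    power.append(np.sum(seg ** 2))         # signal power feature

print(len(peaks), "feature samples; final peak =", round(peaks[-1], 3))
```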
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding the raw data generation and conversion steps from the user. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework, which allows developers to integrate additional third-party programs into MolIDE. http://dunbrack.fccc.edu/molide/molide.php rl_dunbrack@fccc.edu.
Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov
2014-01-01
Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between the models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient was considered constant. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones, and this model better described the dynamics of the adsorption process of the dye in the fixed-bed column. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
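A minimal sketch of the parameter-estimation step: fit a breakthrough-curve model to C/C0 data with the downhill simplex (Nelder-Mead) method. The logistic-shaped model and the synthetic data below are stand-ins for the paper's mass-transfer models:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 300, 31)                           # time (min)
c_obs = 1 / (1 + np.exp(-0.05 * (t - 150)))           # synthetic C/C0 data
c_obs += 0.02 * np.random.default_rng(5).standard_normal(t.size)

def sse(params):
    """Sum of squared errors between model and observed breakthrough."""
    k, t50 = params
    c_model = 1 / (1 + np.exp(-k * (t - t50)))
    return np.sum((c_model - c_obs) ** 2)

fit = minimize(sse, x0=[0.01, 100.0], method="Nelder-Mead")
print("fitted k, t50:", fit.x.round(3))
```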
A physiologically-based pharmacokinetic (PBPK) model is being developed to estimate the dosimetry of toluene in rats inhaling the VOC under various experimental conditions. The effects of physical activity are currently being estimated utilizing a three-step process. First, we d...
Park, Jae-Min; Jang, Se Jin; Lee, Sang-Ick; Lee, Won-Jun
2018-03-14
We designed cyclosilazane-type silicon precursors and proposed a three-step plasma-enhanced atomic layer deposition (PEALD) process to prepare silicon nitride films with high quality and excellent step coverage. The cyclosilazane-type precursor, 1,3-di-isopropylamino-2,4-dimethylcyclosilazane (CSN-2), has a closed ring structure for good thermal stability and high reactivity. CSN-2 showed thermal stability up to 450 °C and a sufficient vapor pressure of 4 Torr at 60 °C. The energy for the chemisorption of CSN-2 on the undercoordinated silicon nitride surface, as calculated by the density functional theory method, was -7.38 eV. The PEALD process window was between 200 and 500 °C, with a growth rate of 0.43 Å/cycle. The best film quality was obtained at 500 °C, with hydrogen impurity of ∼7 atom %, oxygen impurity less than 2 atom %, a low wet etching rate, and excellent step coverage of ∼95%. At 300 °C and lower temperatures, the wet etching rate was high, especially at the lower sidewall of the trench pattern. We introduced the three-step PEALD process to improve the film quality and the step coverage on the lower sidewall. The sequence of the three-step PEALD process consists of the CSN-2 feeding step, the NH3/N2 plasma step, and the N2 plasma step. The H radicals in the NH3/N2 plasma efficiently remove the ligands from the precursor, and the N2 plasma after the NH3 plasma removes the surface hydrogen atoms to activate the adsorption of the precursor. The films deposited at 300 °C using the novel precursor and the three-step PEALD process showed a significantly improved step coverage of ∼95% and an excellent wet etching resistance at the lower sidewall, which is only twice as high as that of the blanket film prepared by low-pressure chemical vapor deposition.
Sumner, Walton; Xu, Jin Zhong
2002-01-01
The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.
McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J
2017-09-01
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
Guiding gate-etch process development using 3D surface reaction modeling for 7nm and beyond
NASA Astrophysics Data System (ADS)
Dunn, Derren; Sporre, John R.; Deshpande, Vaibhav; Oulmane, Mohamed; Gull, Ronald; Ventzek, Peter; Ranjan, Alok
2017-03-01
Increasingly, advanced process nodes such as 7nm (N7) are fundamentally 3D and require stringent control of critical dimensions over high aspect ratio features. Process integration in these nodes requires a deep understanding of complex physical mechanisms to control critical dimensions from lithography through final etch. Polysilicon gate etch processes are critical steps in several device architectures for advanced nodes that rely on self-aligned patterning approaches to gate definition. These processes are required to meet several key metrics: (a) vertical etch profiles over high aspect ratios; (b) clean gate sidewalls free of etch process residue; (c) minimal erosion of liner oxide films protecting key architectural elements such as fins; and (d) residue-free corners at gate interfaces with critical device elements. In this study, we explore how hybrid modeling approaches can be used to model a multi-step finFET polysilicon gate etch process. Initial parts of the patterning process through hardmask assembly are modeled using process emulation. Important aspects of gate definition are then modeled using a particle Monte Carlo (PMC) feature scale model that incorporates surface chemical reactions.1 When necessary, species and energy flux inputs to the PMC model are derived from simulations of the etch chamber. The modeled polysilicon gate etch process consists of several steps, including a hard mask breakthrough step (BT), main feature etch steps (ME), and over-etch steps (OE) that control gate profiles at the gate-fin interface. An additional constraint on this etch flow is that fin spacer oxides must be left intact after the final profile tuning steps. A natural optimization required of these processes is to maximize vertical gate profiles while minimizing erosion of fin spacer films.2
The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns
NASA Astrophysics Data System (ADS)
Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo
Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.
Wang, Alice; Lewus, Rachael; Rathore, Anurag S
2006-05-05
Recovery of therapeutic protein from high cell density yeast fermentations at commercial scale is a challenging task. In this study, we investigate and compare three different harvest approaches, namely centrifugation followed by depth filtration, centrifugation followed by filter-aid enhanced depth filtration, and microfiltration. This is achieved by presenting a case study involving recovery of a therapeutic protein from Pichia pastoris fermentation broth. The focus of this study is on performance of the depth filtration and the microfiltration steps. The experimental data has been fitted to the conventional models for cake filtration to evaluate specific cake resistance and cake compressibility. In the case of microfiltration, the experimental data agrees well with the flux predicted by the shear-induced diffusion model. It is shown that, under optimal conditions, all three options can deliver the desired product recovery (>80%), harvest time (<15 h, including a sequential concentration/diafiltration step), and clarification (<6 NTU). However, the three options differ in terms of process development time required, capital cost, consumable cost, scalability and process robustness. It is recommended that these be kept under consideration when making a final decision on a harvesting approach.
Performance improvement CME for quality: challenges inherent to the process.
Vakani, Farhan Saeed; O'Beirne, Ronan
2015-01-01
The purpose of this paper is to discuss the real-world challenges of a three-staged Performance Improvement Continuing Medical Education (PI-CME) model, an innovative and promising approach for future CME, in order to inform providers so that they can think, prepare and act proactively. In this discussion, the challenges associated with adopting the American Medical Association's three-staged PI-CME model are reported. Not many institutions in the USA are using a three-staged performance improvement model and then customizing it to their own healthcare context for a specific targeted audience. Those that do integrate traditional CME methods with performance and quality initiatives, linking them with CME credits. Overall, the US health system is interested in a structured PI-CME model with the potential to improve physicians' practice behaviors. Given the dearth of evidence on applying this structured performance-improvement methodology to the design of CME activities, and the lack of clarity on the challenges that learners and providers encounter in the process, this paper establishes the all-important first step of laying out the set of challenges for a three-staged PI-CME model.
AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku
2014-05-27
The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multinomial logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.
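The best model described above averaged the class probabilities of multinomial logistic regression and voting feature intervals (VFI) classifiers, the latter being a learner popularized by the Weka toolkit. As a hedged sketch of the same soft-voting idea in scikit-learn, the example below discretizes features and averages probabilities from two base classifiers, with Gaussian naive Bayes standing in for VFI; the data are synthetic, not the study's.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Synthetic stand-in for a derivation dataset (not the CHF data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

# Discretize continuous variables, then soft-vote (average probabilities)
# over two base classifiers, loosely mirroring the paper's voting scheme.
model = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="quantile"),
    VotingClassifier(
        estimators=[("mlr", LogisticRegression(max_iter=1000)),
                    ("nb", GaussianNB())],   # stand-in for the VFI learner
        voting="soft",
    ),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated c-statistic: {auc.mean():.3f}")
```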
Evaluation of TOPLATS on three Mediterranean catchments
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-08-01
Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability under different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasting methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both the Morris and Sobol methods showed very similar results, identifying the Brooks-Corey pore size distribution index (B), bubbling pressure (ψc) and hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between the calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of cal/val period selection was implemented, improving model results.
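For readers unfamiliar with Morris screening, the sketch below hand-rolls elementary effects for a toy three-parameter model standing in for TOPLATS; the parameter bounds, toy response and trajectory count are illustrative assumptions (dedicated packages such as SALib implement both the Morris and Sobol methods).

```python
import numpy as np

def morris_elementary_effects(model, bounds, r=20, delta=0.25, seed=1):
    """Crude Morris screening: r one-at-a-time perturbations per parameter,
    returning mu* (mean |elementary effect|). `bounds` has shape (k, 2)."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in unit cube
        y0 = model(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                          # perturb one parameter
            y1 = model(bounds[:, 0] + xp * (bounds[:, 1] - bounds[:, 0]))
            ee[i, j] = (y1 - y0) / delta
    return np.abs(ee).mean(axis=0)

# Toy stand-in for TOPLATS: a scalar response to three soil parameters
# (Brooks-Corey index B, bubbling pressure psi_c, conductivity decay f).
def toy_model(p):
    B, psi_c, f = p
    return np.exp(-f) * B / (1.0 + psi_c)

bounds = np.array([[0.1, 0.8], [0.05, 0.5], [0.5, 5.0]])
mu_star = morris_elementary_effects(toy_model, bounds)
print(dict(zip(["B", "psi_c", "f"], mu_star)))
```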
Mashari, Azad; Montealegre-Gallegos, Mario; Knio, Ziyad; Yeh, Lu; Jeganathan, Jelliffe; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2016-12-01
Three-dimensional (3D) printing is a rapidly evolving technology with several potential applications in the diagnosis and management of cardiac disease. Recently, 3D printing (i.e. rapid prototyping) derived from 3D transesophageal echocardiography (TEE) has become possible. Due to the multiple steps involved and the specific equipment required for each step, it might be difficult to start implementing echocardiography-derived 3D printing in a clinical setting. In this review, we provide an overview of this process, including its logistics and organization of tools and materials, 3D TEE image acquisition strategies, data export, format conversion, segmentation, and printing. Generation of patient-specific models of cardiac anatomy from echocardiographic data is a feasible, practical application of 3D printing technology. © 2016 The authors.
Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta
2014-01-01
Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 3D imaging can initially be a complex and daunting process. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GSTs) fabricated with them. A step-by-step method of interpreting and ordering a GST to simplify the process of surgical planning and implant placement is discussed.
NASA Technical Reports Server (NTRS)
Milligan, James R.; Dutton, James E.
1993-01-01
In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.
Model-Based Engineering Design for Trade Space Exploration throughout the Design Cycle
NASA Technical Reports Server (NTRS)
Lamassoure, Elisabeth S.; Wall, Stephen D.; Easter, Robert W.
2004-01-01
This paper presents ongoing work to standardize model-based system engineering as a complement to point design development in the conceptual design phase of deep space missions. It summarizes the first two steps towards practical application of this capability within the framework of concurrent engineering design teams and their customers. The first step is the standard generation of system sensitivity models as the output of concurrent engineering design sessions, representing the local trade space around a point design. A review of the chosen model development process, and the results of three case study examples, demonstrate that a simple update to the concurrent engineering design process can easily capture sensitivities to key requirements. It can serve as a valuable tool to analyze design drivers and uncover breakpoints in the design. The second step is the development of rough-order-of-magnitude, broad-range-of-validity design models for rapid exploration of the trade space, before selection of a point design. At least one case study demonstrated the feasibility of generating such models in a concurrent engineering session. The experiment indicated that such a capability could yield valid system-level conclusions for a trade space composed of understood elements. Ongoing efforts are assessing the practicality of developing end-to-end system-level design models for use before even convening the first concurrent engineering session, starting with modeling an end-to-end Mars architecture.
Toth, Tibor Istvan; Grabowska, Martyna; Schmidt, Joachim; Büschges, Ansgar; Daun-Gruhn, Silvia
2013-01-01
Stop and start of stepping are two basic actions of the musculo-skeletal system of a leg. Although they are basic phenomena, they require the coordinated activities of the leg muscles. However, little is known of the details of how these activities are generated by the interactions between the local neuronal networks controlling the fast and slow muscle fibres at the individual leg joints. In the present work, we aim at uncovering some of those details using a suitable neuro-mechanical model. It is an extension of the model in the accompanying paper and now includes all three antagonistic muscle pairs of the main joints of an insect leg, together with their dedicated neuronal control, as well as common inhibitory motoneurons and the residual stiffness of the slow muscles. This model enabled us to study putative processes of intra-leg coordination during stop and start of stepping. We also made use of the effects of sensory signals encoding the position and velocity of the leg joints. Where experimental observations are available, the corresponding simulation results are in good agreement with them. Our model makes detailed predictions as to the coordination processes of the individual muscle systems both at stop and start of stepping. In particular, it reveals a possible role of the slow muscle fibres at stop in accelerating the convergence of the leg to its steady-state position. These findings lend our model physiological relevance and can therefore be used to elucidate details of the stop and start of stepping in insects, and perhaps in other animals, too. PMID:24278108
How To Create and Conduct a Memory Enhancement Program.
ERIC Educational Resources Information Center
Meyer, Genevieve R.; Ober-Reynolds, Sharman
This report describes Memory Enhancement Group workshops which have been conducted at the Senior Health and Peer Counseling Center in Santa Monica, California and gives basic data regarding outcomes of the workshops. It provides a model of memory as a three-step process of registration or becoming aware, consolidation, and retrieval. It presents…
NASA Astrophysics Data System (ADS)
Yang, Jun; Wang, Ze-Xin; Lu, Sheng; Lv, Wei-gang; Jiang, Xi-zhi; Sun, Lei
2017-03-01
The micro-arc oxidation (MAO) process was conducted on ZK60 Mg alloy under two- and three-step voltage-increasing modes using a DC pulse electrical source. The effect of each mode on the current-time response during the MAO process and on the coating characteristics was analysed and discussed systematically. The microstructure, thickness and corrosion resistance of the MAO coatings were evaluated by scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), super-depth-of-field microscopy and electrochemical impedance spectroscopy (EIS). The results indicate that the two- and three-step voltage-increasing modes can improve the weak spark discharges with insufficient breakdown strength that occur in the later period of the MAO process. Owing to the higher voltage and voltage increment, the coating formed under the two-step voltage-increasing mode, with a maximum thickness of about 20.20 μm, shows the best corrosion resistance. In addition, the coating fabricated under the three-step voltage-increasing mode is smoother, with better corrosion resistance, owing to the lower amplitude of the voltage increase.
3D road marking reconstruction from street-level calibrated stereo pairs
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
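A minimal sketch of the logic behind the second (filtering) step described above: candidate strips are kept only if their measured dimensions match the known road-marking specifications within a tolerance. The specification values, tolerance and candidate data below are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical geometric filter for reconstructed 3D strip candidates.
SPEC = {"zebra":  {"width": 0.50, "length": 4.00},   # metres (assumed specs)
        "dashed": {"width": 0.15, "length": 3.00}}

def plausible(candidate, kind, tol=0.20):
    """candidate: dict with measured 'width' and 'length' in metres.
    Keep it only if both dimensions are within tol (relative) of the spec."""
    spec = SPEC[kind]
    return (abs(candidate["width"] - spec["width"]) <= tol * spec["width"]
            and abs(candidate["length"] - spec["length"]) <= tol * spec["length"])

candidates = [{"width": 0.52, "length": 3.90},   # plausible zebra strip
              {"width": 0.30, "length": 1.20}]   # clutter from other edges
kept = [c for c in candidates if plausible(c, "zebra")]
print(kept)   # only the first candidate survives the filter
```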
FEA Simulation of Free-Bending - a Preforming Step in the Hydroforming Process Chain
NASA Astrophysics Data System (ADS)
Beulich, N.; Craighero, P.; Volk, W.
2017-09-01
High-strength steel and aluminum alloys are essential for developing innovative, lightweight space frame concepts. The intended design is built from car body parts with high geometrical complexity and reduced material thickness. Over the past few years, many complex car body parts have been produced using hydroforming. To increase the accuracy of hydroforming for prospective car concepts, the virtual manufacturing of forming steps becomes more important. As part of process digitalization, it is necessary to develop a simulation model for the hydroforming process chain. The preforming of longitudinally welded tubes is therefore implemented by three-dimensional free-bending. This technique can reproduce complex deflection curves in combination with innovative low-thickness material designs for hydroforming processes. As a first step towards the complete process simulation, this paper deals with the development of a finite element simulation model for the free-bending process with six degrees of freedom. A mandrel built from spherical segments connected by a steel rope is located inside the tube to prevent geometrical instability. Critical parameters for the result of the bending process are evaluated and optimized. The simulation model is verified by surface measurements of a two-dimensional bending test.
Organic thin film transistor with a simplified planar structure
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yu, Jungsheng; Zhong, Jian; Jiang, Yadong
2009-05-01
An organic thin film transistor (OTFT) with a simplified planar structure is described. The gate electrode and the source/drain electrodes of the OTFT are processed in one planar structure. These three electrodes are deposited on the glass substrate by DC sputtering using a Cr/Ni target, and electrode layouts of different width-to-length ratios are then made simultaneously by photolithography. Only one deposition step and one photolithography step are needed, whereas the conventional process takes at least two deposition steps and two photolithography steps: metal is first prepared on the other side of the glass substrate and the gate electrode is formed by photolithography, and the source/drain electrodes are then prepared by deposition and photolithography on the side with the insulation layer. Compared to the conventional OTFT process, the process in this work is simplified. After the three electrodes are prepared, the insulation layer is made by spin coating, using polyimide as the insulating material. A small-molecule material, pentacene, is evaporated onto the insulation layer by vacuum deposition as the active layer. The whole OTFT process needs only three steps. A semi-automatic probe stage is used to connect the three electrodes to the probes of the test instrument. A charge carrier mobility of 0.3 cm2/V s and an on/off current ratio of 10^5 are obtained for OTFTs on glass substrates. The planar-structure OTFT made with this simplified process can simplify device fabrication and reduce cost.
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits; that is, they show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
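Steps 2 and 3 of the model rest on standard statistical process control. A minimal sketch, assuming an individuals (XmR) chart with illustrative monthly HAI rates: the mean moving range estimates sigma, and points outside the three-sigma limits flag the special-cause variation that must be controlled before benchmarking.

```python
import numpy as np

# Illustrative monthly HAI rates (per 1,000 device-days), not hospital data.
rates = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 5.9, 3.2, 2.7, 3.1, 3.0])

center = rates.mean()
moving_range = np.abs(np.diff(rates)).mean()
sigma = moving_range / 1.128            # d2 constant for subgroups of size 2
ucl = center + 3.0 * sigma
lcl = max(center - 3.0 * sigma, 0.0)    # a rate cannot be negative

special_cause = np.where((rates > ucl) | (rates < lcl))[0]
print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("special-cause months:", special_cause)  # benchmark only if this is empty
```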
Designing a model of patient tracking system for natural disaster in Iran
Tavakoli, Nahid; Yarmohammadian, Mohammad H.; Safdari, Reza; Keyvanara, Mahmoud
2017-01-01
CONTEXT: Disaster patient tracking consists of identifying and registering patients, recording data on their medical conditions, setting priorities for evacuation of the scene, and locating patients from the scene to health care centers and on until completion of treatment and discharge. AIM: The aim of this study was to design a model of a patient tracking system for natural disasters in Iran. MATERIALS AND METHODS: This applied study was conducted in two steps in 2016. First, data on the disaster patient tracking systems used in selected countries were collected from printed and electronic library references and then compared. Next, a preliminary model of a disaster patient tracking system was developed from these systems and validated by the Delphi technique and a focus group. The data of the first step were analyzed by content analysis and those of the second step by descriptive statistics. RESULTS: Analysis of the comments of key informants, consisting of national experts, across three Delphi rounds yielded three themes (content, function, and technology), ten subthemes, and 127 components, with a consensus rate of over 75%, for a disaster patient tracking system for Iran. CONCLUSION: In Iran, there is no comprehensive process for managing data on disaster patients. A patient tracking system can be considered a humanitarian and effective measure to improve the process of identifying, caring for, evacuating, and transferring patients, as well as documenting and following up their medical conditions and locations from the scene until completion of treatment. PMID:28852666
PRIMO: An Interactive Homology Modeling Pipeline.
Hatherley, Rowan; Brown, David K; Glenister, Michael; Tastan Bishop, Özlem
2016-01-01
The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO's automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/.
Shawyer, Frances; Enticott, Joanne C; Brophy, Lisa; Bruxner, Annie; Fossey, Ellie; Inder, Brett; Julian, John; Kakuma, Ritsuko; Weller, Penelope; Wilson-Evered, Elisabeth; Edan, Vrinda; Slade, Mike; Meadows, Graham N
2017-05-08
Recovery features strongly in Australian mental health policy; however, evidence is limited for the efficacy of recovery-oriented practice at the service level. This paper describes the Principles Unite Local Services Assisting Recovery (PULSAR) Specialist Care trial protocol for a recovery-oriented practice training intervention delivered to specialist mental health services staff. The primary aim is to evaluate whether adult consumers accessing services where staff have received the intervention report superior recovery outcomes compared to adult consumers accessing services where staff have not yet received the intervention. A qualitative sub-study aims to examine staff and consumer views on implementing recovery-oriented practice. A process evaluation sub-study aims to articulate important explanatory variables affecting the intervention's rollout and outcomes. The mixed-methods design incorporates a two-step stepped-wedge cluster randomized controlled trial (cRCT) examining cross-sectional data from three phases, and nested qualitative and process evaluation sub-studies. Participating specialist mental health care services in Melbourne, Victoria are divided into 14 clusters, with half randomly allocated to receive the staff training in year one and half in year two. Research participants are consumers aged 18-75 years who attended the cluster within a previous three-month period either at baseline, 12 months (step 1) or 24 months (step 2). In the two nested sub-studies, participation extends to cluster staff. The primary outcome is the Questionnaire about the Process of Recovery, collected from 756 consumers (252 each at baseline, step 1 and step 2). Secondary and other outcomes measuring well-being, service satisfaction and health economic impact are collected from a subset of 252 consumers (63 at baseline; 126 at step 1; 63 at step 2) via interviews. Interview-based longitudinal data are also collected 12 months apart from 88 consumers with a psychotic disorder diagnosis (44 at baseline and step 1; 44 at step 1 and step 2). cRCT data will be analyzed using multilevel mixed-effects modelling to account for clustering and some repeated measures, supplemented by thematic analysis of qualitative interview data. The process evaluation will draw on qualitative, quantitative and documentary data. Findings will provide an evidence base for the continued transformation of Australian mental health service frameworks toward recovery. Australian and New Zealand Clinical Trial Registry: ACTRN12614000957695. Date registered: 8 September 2014.
Dosta, J; Galí, A; Benabdallah El-Hadj, T; Macé, S; Mata-Alvarez, J
2007-08-01
The aim of this study was the operation and model description of a sequencing batch reactor (SBR) for biological nitrogen removal (BNR) from reject water (800-900 mg NH(4)(+)-N L(-1)) from a municipal wastewater treatment plant (WWTP). The SBR was operated with three cycles per day, temperature 30 degrees C, SRT 11 days and HRT 1 day. During the operational cycle, three alternating oxic/anoxic periods were used to avoid alkalinity restrictions. Oxygen supply and the working pH range were controlled to achieve BNR via nitrite, which makes the process more economical. Under steady state conditions, a total nitrogen removal of 0.87 kg N (m(3)day)(-1) was reached. A four-step nitrogen removal model was developed to describe the process. This model extends the IWA activated sludge models to give a more detailed description of the nitrogen elimination processes and their inhibitions. A closed intermittent-flow respirometer was set up to estimate the most relevant model parameters. Once calibrated, the model reproduced experimental data accurately.
Kinetically governed polymorphism of d(G₄T₄G₃) quadruplexes in K+ solutions.
Prislan, Iztok; Lah, Jurij; Milanic, Matija; Vesnaver, Gorazd
2011-03-01
It has been generally recognized that understanding the molecular basis of some important cellular processes is hampered by the lack of knowledge of the forces that drive spontaneous formation/disruption of G-quadruplex structures in guanine-rich DNA sequences. According to numerous biophysical and structural studies, G-quadruplexes may occur in the presence of K(+) and Na(+) ions as polymorphic structures formed in kinetically governed processes. The kinetic models reported to describe this polymorphism should be considered inappropriate since, as a rule, they include bimolecular single-step associations characterized by negative activation energies. In contrast, our approach to studying the polymorphic behavior of G-quadruplexes is based on model mechanisms that involve only elementary folding/unfolding transitions and structural conversion steps that are characterized by positive activation energies. Here, we investigate the complex polymorphism of d(G(4)T(4)G(3)) quadruplexes in K(+) solutions. On the basis of DSC, circular dichroism and UV spectroscopy, and polyacrylamide gel electrophoresis experiments, we propose a kinetic model that successfully describes the observed thermally induced conformational transitions of d(G(4)T(4)G(3)) quadruplexes in terms of single-step reactions that involve, besides single strands, one tetramolecular and three bimolecular quadruplex structures.
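To make the activation-energy point concrete, the sketch below integrates a single-step strand/quadruplex interconversion with Arrhenius rate constants whose activation energies are both positive, over a DSC-like heating ramp; all rate parameters are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

# Single-step strand <-> quadruplex interconversion with positive
# activation energies for both directions (all values illustrative).
R = 8.314                      # gas constant, J/(mol K)
A_f, Ea_f = 1.0e6, 50.0e3      # formation pre-factor (1/s), activation energy (J/mol)
A_u, Ea_u = 3.0e14, 105.0e3    # unfolding pre-factor and activation energy

def k(A, Ea, T):
    return A * np.exp(-Ea / (R * T))

dt, ramp = 1.0, 0.5 / 60.0     # 1 s time step, 0.5 K/min heating rate
T, q = 293.15, 1.0             # start at 20 C, fully folded
trace = []
while T < 363.15:
    kf, ku = k(A_f, Ea_f, T), k(A_u, Ea_u, T)
    q += dt * (kf * (1.0 - q) - ku * q)   # forward Euler step
    q = min(max(q, 0.0), 1.0)
    trace.append((T, q))
    T += ramp * dt

Tm = next(T for T, q in trace if q < 0.5)  # apparent melting temperature
print(f"apparent Tm at 0.5 K/min: {Tm - 273.15:.1f} C")
```

Both rate constants grow with temperature here; melting emerges from their crossover (Ea_u > Ea_f) rather than from any negative activation energy, and rerunning at a faster ramp can shift the apparent Tm, the hallmark of a kinetically governed transition.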
NASA Astrophysics Data System (ADS)
Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.
2005-12-01
The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly, and rapid recovery from it were required. The recovery centered around a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.
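A minimal sketch of the modular architecture described above, with each recovery step behind one narrow interface so steps can be developed, tested and swapped independently; the class names and record format are illustrative, not the HIRDLS flight software.

```python
from abc import ABC, abstractmethod

class RecoveryStep(ABC):
    """One narrow interface per recovery step; steps communicate only
    through the returned radiance record, so they cannot interfere."""
    @abstractmethod
    def apply(self, record: dict) -> dict: ...

class RemoveOscillation(RecoveryStep):
    def apply(self, record):
        record["signal"] = [s - record.get("oscillation", 0.0)
                            for s in record["signal"]]
        return record

class RemoveKaptonContamination(RecoveryStep):
    def apply(self, record):
        record["signal"] = [s - record.get("kapton", 0.0)
                            for s in record["signal"]]
        return record

class CorrectObscuration(RecoveryStep):
    def apply(self, record):
        gain = 1.0 / (1.0 - record.get("blocked_fraction", 0.0))
        record["signal"] = [s * gain for s in record["signal"]]
        return record

def run_pipeline(record, steps):
    for step in steps:          # strict step ordering, no cross-talk
        record = step.apply(record)
    return record

record = {"signal": [1.0, 1.1, 0.9], "oscillation": 0.05,
          "kapton": 0.10, "blocked_fraction": 0.4}
print(run_pipeline(record, [RemoveOscillation(),
                            RemoveKaptonContamination(),
                            CorrectObscuration()]))
```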
Effective virus inactivation and removal by steps of Biotest Pharmaceuticals IGIV production process
Dichtelmüller, Herbert O.; Flechsig, Eckhard; Sananes, Frank; Kretschmar, Michael; Dougherty, Christopher J.
2012-01-01
The virus validation of three steps of the Biotest Pharmaceuticals IGIV production process is described here. The steps validated are precipitation and removal of fraction III of the cold ethanol fractionation process, solvent/detergent treatment, and 35 nm virus filtration. Virus validation was performed under combined worst-case conditions. These validated steps achieve sufficient virus inactivation/removal, resulting in a virus-safe product. PMID:24371563
ERIC Educational Resources Information Center
Stille, J. K.
1981-01-01
Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
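The high-molecular-weight requirement mentioned above follows from the Carothers equation: the number-average degree of polymerization Xn = 1/(1 - p) diverges only as the extent of reaction p approaches 1. The short sketch below evaluates this relation, the weight-average analogue, and the Flory most-probable distribution for linear step-growth polymers.

```python
# Carothers relation and Flory most-probable distribution for linear
# step-growth polymers; p is the extent of reaction of the functional groups.
for p in (0.90, 0.99, 0.999):
    Xn = 1.0 / (1.0 - p)              # number-average degree of polymerization
    Xw = (1.0 + p) / (1.0 - p)        # weight-average degree of polymerization
    print(f"p = {p}: Xn = {Xn:7.1f}, Xw = {Xw:7.1f}, PDI = {Xw / Xn:.3f}")

p = 0.99
w = lambda x: x * (1.0 - p) ** 2 * p ** (x - 1)   # weight fraction of x-mers
print(f"weight fraction of 100-mers at p = 0.99: {w(100):.4f}")
```

Note how the dispersity Xw/Xn = 1 + p approaches 2 at high conversion, the classical signature of linear step-growth polymerization.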
Reduced Order Models for Dynamic Behavior of Elastomer Damping Devices
NASA Astrophysics Data System (ADS)
Morin, B.; Legay, A.; Deü, J.-F.
2016-09-01
In the context of passive damping, various mechanical systems from the space industry use elastomer components (shock absorbers, silent blocks, flexible joints...). The material of these devices has frequency, temperature and amplitude dependent characteristics. The associated numerical models, using viscoelastic and hyperelastic constitutive behaviour, may become computationally too expensive during a design process. The aim of this work is to propose efficient reduced viscoelastic models of rubber devices. The first step is to choose an accurate material model that represent the viscoelasticity. The second step is to reduce the rubber device finite element model to a super-element that keeps the frequency dependence. This reduced model is first built by taking into account the fact that the device's interfaces are much more rigid than the rubber core. To make use of this difference, kinematical constraints enforce the rigid body motion of these interfaces reducing the rubber device model to twelve dofs only on the interfaces (three rotations and three translations per face). Then, the superelement is built by using a component mode synthesis method. As an application, the dynamic behavior of a structure supported by four hourglass shaped rubber devices under harmonic loads is analysed to show the efficiency of the proposed approach.
ERIC Educational Resources Information Center
Koken, Juline A.; Naar-King, Sylvie; Umasa, Sanya; Parsons, Jeffrey T.; Saengcharnchai, Pichai; Phanuphak, Praphan; Rongkavilit, Chokechai
2012-01-01
The provision of culturally relevant yet evidence-based interventions has become crucial to global HIV prevention and treatment efforts. In Thailand, where treatment for HIV has become widely available, medication adherence and risk behaviors remain an issue for Thai youth living with HIV. Previous research on motivational interviewing (MI) has…
Isosaari, Pirjo; Marjavaara, Pieti; Lehmus, Eila
2010-10-15
Removal of Cu, Cr and As from utility poles treated with chromated copper arsenate (CCA) was investigated using different one- to three-step combinations of oxalic acid extraction and electrokinetic treatment. The experiments were carried out at room temperature, using 0.8% oxalic acid and 30 V (200 V/m) of direct current (DC) or direct and alternating current in combination (DC/AC). A six-hour extraction alone removed only 15%, 11% and 28% of Cu, Cr and As from wood chips, respectively, and a 7-day electrokinetic treatment alone removed 57%, 0% and 17%. The best combination for all the metals was a three-step process consisting of pre-extraction, electrokinetics and post-extraction steps, yielding removals of 67% for Cu, 64% for Cr and 81% for As. Oxalic acid extraction prior to electrokinetic treatment was deleterious to the further removal of Cu, but it was necessary for Cr and As removal. Chemical equilibrium modelling was used to explain the differences in the behaviour of Cu, Cr and As. Owing to the dissimilar nature of these metals, it appeared that even more process sequences and/or stricter control of the process conditions would be needed to obtain the >99% removals required for safe recycling of the purified wood material. Copyright © 2010 Elsevier B.V. All rights reserved.
2012-01-01
Background: While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopy data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results: We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that the use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions: We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest-based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
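As a hedged sketch of the first (patch classification) step, the example below trains a random forest on raw 7x7 pixel patches of a synthetic image and produces the per-pixel mitochondria probability map that the contour-pair and level-set steps would then refine; it is illustrative, not the Cytoseg implementation, and is evaluated in-sample for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
patch = 7                                      # 7x7 raw-pixel patches as features

def extract_patches(img, labels):
    X, y, r = [], [], patch // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            X.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            y.append(labels[i, j])
    return np.array(X), np.array(y)

# Synthetic stand-in for an EM slice: a brighter blob plays "mitochondrion".
img = rng.normal(size=(40, 40))
labels = np.zeros((40, 40), dtype=int)
labels[10:25, 10:25] = 1
img[10:25, 10:25] += 1.5

X, y = extract_patches(img, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prob = clf.predict_proba(X)[:, 1]              # per-pixel probability map
print(f"mean P(mito) inside: {prob[y == 1].mean():.2f}, "
      f"outside: {prob[y == 0].mean():.2f}")
```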
Santarelli, M; Barra, S; Sagnelli, F; Zitella, P
2012-11-01
The paper deals with the energy analysis and optimization of a complete biomass-to-electricity energy pathway, from raw biomass to the production of renewable electricity. The first step (biomass-to-biogas) is based on a real pilot plant located in Environment Park S.p.A. (Torino, Italy) with three main stages ((1) impregnation; (2) steam explosion; (3) enzymatic hydrolysis), completed by a two-step anaerobic fermentation. For the second step (biogas-to-electricity), the paper considers two technologies: internal combustion engines (ICE) and a stack of solid oxide fuel cells (SOFC). First, the complete pathway was modeled and validated against experimental data. The model was then used for an analysis and optimization of the complete thermo-chemical and biological process, with the objective of maximizing the energy balance at minimum consumption. The comparison between ICE and SOFC shows the better performance of the integrated plants based on SOFC. Copyright © 2012 Elsevier Ltd. All rights reserved.
Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R
2018-03-01
The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect the quality of boar semen doses. The production process during one day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature at each step in semen processing using the specially programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates) and in three steps in five AI centers (n = 79 ejaculates). The results indicated greater semen quality with one-step isothermal dilution than with multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0% and 47.1 ± 27.1% for one-step, two-step and three-step dilution, respectively; P < .05). There was a marked advantage of one-step isothermal dilution in terms of time management, preservation suitability, stability and stress resistance. One-step dilution resulted in significantly shorter holding times of raw ejaculates and reduced the risk of mistakes due to the lower number of processing steps. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.
The research on construction and application of machining process knowledge base
NASA Astrophysics Data System (ADS)
Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai
2018-03-01
In order to realize the application of knowledge in machining process design, from the perspective of knowledge use in computer-aided process planning (CAPP), a hierarchical knowledge classification structure is established according to the characteristics of the mechanical engineering field. Machining process knowledge is expressed in a structured way by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to these representations of machining process knowledge. In this paper, the definition and classification of machining process knowledge, the knowledge models, and the application flow of knowledge-based process design are given, and the main steps of the machine tool selection decision are carried out as an example application using the knowledge base.
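A minimal sketch of a production-rule knowledge base in the spirit described above, with machining features as objects and rules mapping feature characteristics to process steps; the classes, rules and tolerance values are illustrative assumptions, not the paper's knowledge base.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str            # e.g. "hole", "slot"
    diameter_mm: float
    tolerance_um: float

# Production rules: (condition over a feature, recommended process chain).
# Ordered from most to least specific; the first match wins.
RULES = [
    (lambda f: f.kind == "hole" and f.tolerance_um <= 10, "drill -> ream -> hone"),
    (lambda f: f.kind == "hole" and f.tolerance_um <= 50, "drill -> ream"),
    (lambda f: f.kind == "hole",                          "drill"),
    (lambda f: f.kind == "slot",                          "end mill"),
]

def infer(feature):
    """Forward-chain over the rule base and return the first matching action."""
    for condition, action in RULES:
        if condition(feature):
            return action
    return "no applicable rule"

print(infer(Feature("hole", diameter_mm=8.0, tolerance_um=8)))
# -> drill -> ream -> hone
```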
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples using both flight and simulated data.
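A minimal sketch of element (1), reduced to a scalar Kalman filter recovering a state from error-corrupted measurements; the rotorcraft application uses the full multivariate filter-smoother, and the system parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, a, q, r = 200, 0.98, 0.05, 0.5    # steps, state transition, noise variances

# Simulate a scalar linear system and noisy sensor.
x_true = np.zeros(n)
z = np.zeros(n)
for j in range(1, n):
    x_true[j] = a * x_true[j - 1] + rng.normal(scale=np.sqrt(q))
    z[j] = x_true[j] + rng.normal(scale=np.sqrt(r))

# Scalar Kalman filter: predict, then correct with the Kalman gain.
x_hat, P = 0.0, 1.0
est = np.zeros(n)
for j in range(1, n):
    x_pred, P_pred = a * x_hat, a * a * P + q     # time update (predict)
    K = P_pred / (P_pred + r)                     # Kalman gain
    x_hat = x_pred + K * (z[j] - x_pred)          # measurement update
    P = (1.0 - K) * P_pred
    est[j] = x_hat

print(f"RMSE raw measurements: {np.sqrt(np.mean((z - x_true) ** 2)):.3f}")
print(f"RMSE filtered:         {np.sqrt(np.mean((est - x_true) ** 2)):.3f}")
```

A smoother would add a backward pass over the stored predictions, further reducing the error, which is what makes the filter-smoother attractive for post-flight data processing.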
Adapting water treatment design and operations to the impacts of global climate change
NASA Astrophysics Data System (ADS)
Clark, Robert M.; Li, Zhiwei; Buchberger, Steven G.
2011-12-01
It is anticipated that global climate change will adversely impact source water quality in many areas of the United States and will therefore potentially impact the design and operation of current and future water treatment systems. The USEPA has initiated an effort called the Water Resources Adaptation Program (WRAP), which is intended to develop tools and techniques that can assess the impact of global climate change on urban drinking water and wastewater infrastructure. A three-step approach for assessing climate change impacts on water treatment operation and design is being pursued in this effort. The first step is the stochastic characterization of source water quality, the second step is the application of the USEPA Water Treatment Plant (WTP) model, and the third step is the application of cost algorithms to provide a metric for assessing the cost impact of climate change. A model has been validated using data collected from Cincinnati's Richard Miller Water Treatment Plant for the USEPA Information Collection Rule (ICR) database. An analysis of the water treatment processes in response to assumed perturbations in raw water quality identified TOC, pH, and bromide as the three most important parameters affecting the performance of the Miller WTP. The Miller Plant was simulated using the EPA WTP model to examine the impact of these parameters on selected regulated water quality parameters. Uncertainty in influent water quality was analyzed to estimate the risk of violating drinking water maximum contaminant levels (MCLs). Water quality changes in the Ohio River were projected for 2050 using Monte Carlo simulation, and the WTP model was used to evaluate the effects of water quality changes on design and operation. Results indicate that the existing Miller WTP might not meet Safe Drinking Water Act MCL requirements under certain extreme future conditions. However, it was found that the risk of MCL violations under future conditions could be controlled by enhancing existing WTP design and operation or by process retrofitting and modification.
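A minimal sketch of the three-step idea: sample projected source-water quality, push it through a highly simplified formation response standing in for the EPA WTP model, and estimate the probability of exceeding an MCL. All distributions and coefficients below are illustrative assumptions, not Ohio River projections or WTP-model output.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Step 1: stochastic source-water quality (illustrative distributions).
toc = rng.lognormal(mean=np.log(2.5), sigma=0.35, size=n)      # mg/L
bromide = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)  # mg/L
ph = rng.normal(7.8, 0.2, size=n)

# Step 2: toy TTHM formation response (ug/L), increasing in TOC, bromide
# and pH; a stand-in for the full WTP treatment model.
tthm = 12.0 * toc + 180.0 * bromide + 6.0 * (ph - 7.0) + rng.normal(0, 5, n)

# Step 3: risk metric against the regulatory limit.
MCL_TTHM = 80.0   # ug/L, total trihalomethanes MCL
print(f"P(TTHM > MCL) = {np.mean(tthm > MCL_TTHM):.3%}")
```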
Evaluating the cost effectiveness of environmental projects: Case studies in aerospace and defense
NASA Technical Reports Server (NTRS)
Shunk, James F.
1995-01-01
Using the replacement technology of high pressure waterjet decoating systems as an example, a simple methodology is presented for developing a cost effectiveness model. The model uses a four-step process to formulate an economic justification designed for presentation to decision makers as an assessment of the value of the replacement technology over conventional methods. Three case studies from major U.S. and international airlines are used to illustrate the methodology and resulting model. Tax and depreciation impacts are also presented as potential additions to the model.
Nuclear fusion during yeast mating occurs by a three-step pathway.
Melloy, Patricia; Shen, Shu; White, Erin; McIntosh, J Richard; Rose, Mark D
2007-11-19
In Saccharomyces cerevisiae, mating culminates in nuclear fusion to produce a diploid zygote. Two models for nuclear fusion have been proposed: a one-step model in which the outer and inner nuclear membranes and the spindle pole bodies (SPBs) fuse simultaneously and a three-step model in which the three events occur separately. To differentiate between these models, we used electron tomography and time-lapse light microscopy of early stage wild-type zygotes. We observe two distinct SPBs in approximately 80% of zygotes that contain fused nuclei, whereas we only see fused or partially fused SPBs in zygotes in which the site of nuclear envelope (NE) fusion is already dilated. This demonstrates that SPB fusion occurs after NE fusion. Time-lapse microscopy of zygotes containing fluorescent protein tags that localize to either the NE lumen or the nucleoplasm demonstrates that outer membrane fusion precedes inner membrane fusion. We conclude that nuclear fusion occurs by a three-step pathway.
Huotilainen, Eero; Jaanimets, Risto; Valášek, Jiří; Marcián, Petr; Salmi, Mika; Tuomi, Jukka; Mäkitie, Antti; Wolff, Jan
2014-07-01
The process of fabricating physical medical skull models requires many steps, each of which is a potential source of geometric error. The aim of this study was to demonstrate the inaccuracies and differences caused by DICOM to STL conversion in additively manufactured medical skull models. Three different institutes were requested to perform an automatic reconstruction from an identical DICOM data set of a patient undergoing tumour surgery into an STL file format using their software of preference. The acquired digitized STL data sets were assessed and compared and subsequently used to fabricate physical medical skull models. The three fabricated skull models were then scanned, and differences in the model geometries were assessed using established CAD inspection software methods. A large variation was noted in the size and anatomical geometry of the three physical skull models fabricated from a single DICOM data set. A medical skull model of the same individual can vary markedly depending on the DICOM to STL conversion software and the technical parameters used. Clinicians should be aware of this inaccuracy in certain applications. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Quantitative microbiological risk assessment in food industry: Theory and practical application.
Membré, Jeanne-Marie; Boué, Géraldine
2018-04-01
The objective of this article is to provide scientific background as well as practical hints and tips to guide risk assessors and modelers who want to develop a quantitative microbiological risk assessment (MRA) in an industrial context. MRA aims at determining the public health risk associated with biological hazards in a food. Its implementation in industry makes it possible to compare the efficiency of different risk reduction measures, and more precisely of different operational settings, by predicting their effect on the final model output. The first stage in MRA is to clearly define the purpose and scope with stakeholders, risk assessors and modelers. Then, a probabilistic model is developed; schematically, this includes three important phases. Firstly, the model structure has to be defined, i.e. the connections between the different operational processing steps. An important step in the food industry is thermal processing, which leads to microbial inactivation. Growth of heat-treated surviving microorganisms and/or post-process contamination during the storage phase is also important to take into account. Secondly, mathematical equations are determined to estimate the change of microbial load after each processing step. This phase includes the construction of model inputs by collecting data or eliciting experts. Finally, the model outputs are obtained by simulation procedures; they have to be interpreted and communicated to the targeted stakeholders. In this latter phase, tools such as what-if scenarios provide essential added value. These different MRA phases are illustrated through two examples covering important issues in industry. The first covers process optimization in a food safety context; the second covers shelf-life determination in a food quality context. Although both contexts require the same methodology, they do not have the same endpoint: the endpoint extends to human health in the foie gras case study (a safety application) but only to the food portion in the brioche case study (a quality application). Copyright © 2017 Elsevier Ltd. All rights reserved.
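A minimal sketch of the three modeling phases in a Monte Carlo setting: an initial contamination distribution, a change-of-load equation for a thermal inactivation step, growth or recontamination during storage, and a what-if scenario on the hold time. All distributions and the end-point criterion are illustrative assumptions, not values from the case studies.

```python
import numpy as np

# Phase 1: model structure = initial contamination -> thermal step -> storage.
# Phase 2: change-of-load equations per step. Phase 3: simulated output.
rng = np.random.default_rng(42)
n = 50_000

load = rng.normal(loc=3.0, scale=0.5, size=n)   # initial load, log10 CFU/g

# Thermal inactivation: log reduction = hold time / D-value.
D = rng.normal(loc=1.2, scale=0.1, size=n)      # D-value, min (assumed)
t_hold = 3.0                                    # hold time, min (assumed)
load -= t_hold / D

# Growth of survivors and/or post-process contamination during storage.
load += rng.uniform(0.0, 1.5, size=n)           # log10 increase over shelf life

limit = 2.0                                     # end-point criterion, log10 CFU/g
print(f"P(final load > {limit}) = {np.mean(load > limit):.2%}")

# What-if scenario: extend the thermal hold time by one minute.
print(f"with a 4 min hold: {np.mean((load - 1.0 / D) > limit):.2%}")
```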
Comparison study on mechanical properties of single-step and three-step artificial aging on duralium
NASA Astrophysics Data System (ADS)
Tsamroh, Dewi Izzatus; Puspitasari, Poppy; Andoko, Sasongko, M. Ilman N.; Yazirin, Cepi
2017-09-01
Duralium is a non-ferrous alloy widely used in industry because of properties such as low weight, high ductility, and corrosion resistance. This study aimed to determine the mechanical properties of duralium after single-step and three-step artificial aging processes. The mechanical properties discussed in this study are toughness, tensile strength, and microstructure. The toughness value was 0.082 J/mm2 after single-step artificial aging and 0.0721 J/mm2 after three-step artificial aging. The tensile strength of duralium was 32.36 kgf/mm2 after single-step artificial aging and 32.70 kgf/mm2 after three-step artificial aging. Micrographs of single-step aged duralium show that the precipitate (θ), indicated by black spots, did not spread evenly, which increases the toughness of the material, while micrographs of three-step aged duralium show precipitate (θ) spread more evenly than in the single-step aged material.
Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C.
2015-01-01
Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real world tasks. In this study, we took advantage of existing practice data from five simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by FAA proficiency ratings). We developed a new STEP (Simultaneous Time Effects on Practice) model to: (1) model the simultaneous effects of practice and interval on performance of the five flights, and (2) examine the effects of selected covariates (age, flight expertise, and three composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intra-individual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with either practice or interval. Results indicate that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real world tasks. PMID:26280383
Chapter 10. Developing a habitat monitoring program: three examples from national forest planning
Michael I. Goldstein; Lowell H. Suring; Christina D. Vojta; Mary M. Rowland; Clinton. McCarthy
2013-01-01
This chapter reviews the process steps of wildlife habitat monitoring described in chapters 2 through 9 and provides three case examples that illustrate how the process steps apply to specific situations. It provides the reader an opportunity to synthesize the material while also revealing the potential knowledge gaps and pitfalls that may complicate completion of a...
NASA Astrophysics Data System (ADS)
Neelmeijer, Julia; Motagh, Mahdi; Bookhagen, Bodo
2017-08-01
This study demonstrates the potential of using single-pass TanDEM-X (TDX) radar imagery to analyse inter- and intra-annual glacier changes in mountainous terrain. Based on SAR images acquired in February 2012, March 2013 and November 2013 over the Inylchek Glacier, Kyrgyzstan, we discuss in detail the processing steps required to generate three reliable digital elevation models (DEMs) with a spatial resolution of 10 m that can be used for glacial mass balance studies. We describe the interferometric processing steps and the influence of a priori elevation information that is required to model long-wavelength topographic effects. We also focus on DEM alignment to allow optimal DEM comparisons and on the effects of radar signal penetration on ice and snow surface elevations. We finally compare glacier elevation changes between the three TDX DEMs and the C-band shuttle radar topography mission (SRTM) DEM from February 2000. We introduce a new approach for glacier elevation change calculations that depends on the elevation and slope of the terrain. We highlight the superior quality of the TDX DEMs compared to the SRTM DEM, describe remaining DEM uncertainties and discuss the limitations that arise due to the side-looking nature of the radar sensor.
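A minimal sketch of the elevation- and slope-dependent change calculation: difference two co-registered DEMs and aggregate dH within elevation bins while masking steep, error-prone cells. The rasters, grid spacing, bin edges and slope threshold below are synthetic stand-ins, not the TDX/SRTM data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "2000" DEM: a 600 m rise over a 2 km swath plus noise, and a
# "2013" DEM thinned by ~8 m with independent measurement noise.
y = np.linspace(0.0, 600.0, 200)[:, None]
dem_2000 = 3200.0 + y + rng.normal(0.0, 1.0, size=(200, 200))
dem_2013 = dem_2000 - 8.0 + rng.normal(0.0, 2.0, size=dem_2000.shape)

dh = dem_2013 - dem_2000
gy, gx = np.gradient(dem_2000, 10.0)               # 10 m grid spacing
slope = np.degrees(np.arctan(np.hypot(gx, gy)))

elev_bins = np.arange(3200, 3900, 100)
for lo, hi in zip(elev_bins[:-1], elev_bins[1:]):
    mask = (dem_2000 >= lo) & (dem_2000 < hi) & (slope < 30)  # drop steep cells
    if mask.sum() > 50:
        print(f"{lo}-{hi} m: mean dH = {dh[mask].mean():+.2f} m (n={mask.sum()})")
```

Binning by elevation (and screening by slope) keeps radar-penetration and layover artifacts on steep walls from biasing the glacier-wide elevation-change signal.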
Lim, Jun Yeul; Lim, Dae Gon; Kim, Ki Hyun; Park, Sang-Koo; Jeong, Seong Hoon
2018-02-01
Effects of annealing steps during the freeze drying process on etanercept, model protein, were evaluated using various analytical methods. The annealing was introduced in three different ways depending on time and temperature. Residual water contents of dried cakes varied from 2.91% to 6.39% and decreased when the annealing step was adopted, suggesting that they are directly affected by the freeze drying methods Moreover, the samples were more homogenous when annealing was adopted. Transition temperatures of the excipients (sucrose, mannitol, and glycine) were dependent on the freeze drying steps. Size exclusion chromatography showed that monomer contents were high when annealing was adopted and also they decreased less after thermal storage at 60°C. Dynamic light scattering results exhibited that annealing can be helpful in inhibiting aggregation and that thermal storage of freeze-dried samples preferably induced fragmentation over aggregation. Shift of circular dichroism spectrum and of the contents of etanercept secondary structure was observed with different freeze drying steps and thermal storage conditions. All analytical results suggest that the physicochemical properties of etanercept formulation can differ in response to different freeze drying steps and that annealing is beneficial for maintaining stability of protein and reducing the time of freeze drying process. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.
2008-01-01
Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…
Quality measurement and benchmarking of HPV vaccination services: a new approach.
Maurici, Massimo; Paulon, Luca; Campolongo, Alessandra; Meleleo, Cristina; Carlino, Cristiana; Giordani, Alessandro; Perrelli, Fabrizio; Sgricia, Stefano; Ferrante, Maurizio; Franco, Elisabetta
2014-01-01
A new measurement process based upon a well-defined mathematical model was applied to evaluate the quality of human papillomavirus (HPV) vaccination centers in 3 of 12 Local Health Units (ASLs) within the Lazio Region of Italy. The quality aspects considered for evaluation were communicational efficiency, organizational efficiency and comfort. The overall maximum achievable value was 86.10%, while the HPV vaccination quality scores for ASL1, ASL2 and ASL3 were 73.07%, 71.08%, and 67.21%, respectively. With this new approach it is possible to represent the probabilistic reasoning of a stakeholder who evaluates the quality of a healthcare provider. All ASLs had margins for improvements and optimal quality results can be assessed in terms of better performance conditions, confirming the relationship between the resulting quality scores and HPV vaccination coverage. The measurement process was structured into three steps and involved four stakeholder categories: doctors, nurses, parents and vaccinated women. In Step 1, questionnaires were administered to collect different stakeholders' points of view (i.e., subjective data) that were elaborated to obtain the best and worst performance conditions when delivering a healthcare service. Step 2 of the process involved the gathering of performance data during the service delivery (i.e., objective data collection). Step 3 of the process involved the elaboration of all data: subjective data from step 1 are used to define a "standard" to test objective data from step 2. This entire process led to the creation of a set of scorecards. Benchmarking is presented as a result of the probabilistic meaning of the evaluated scores.
Process service quality evaluation based on Dempster-Shafer theory and support vector machine.
Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei
2017-01-01
Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability, and weak predictability. This paper proposes a method, called SVMs-DS, that employs support vector machines (SVMs) and Dempster-Shafer evidence theory to evaluate the service quality of a production process while handling a large number of input features with a small sampling data set. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, basic probability assignments (BPAs) are constructed, which support both qualitative and quantitative evaluation. The process service quality evaluation results are validated by the Dempster rules; the decision threshold for resolving conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
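The core fusion step named above, combining BPAs from several SVMs with Dempster's rule, can be written compactly. The sketch below is a minimal illustration under assumptions of our own (a two-element frame {good, poor} and a fixed ignorance discount), not the paper's exact construction.

```python
# Minimal sketch of the SVMs-DS idea: turn calibrated SVM outputs into basic
# probability assignments (BPAs) and fuse them with Dempster's rule.
# The frame of discernment {good, poor} and the discounting scheme are assumptions.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
THETA = GOOD | POOR  # the whole frame, representing ignorance

def bpa_from_svm(p_good, discount=0.1):
    """Build a simple BPA from an SVM's probability output, reserving
    `discount` mass for ignorance."""
    return {GOOD: (1 - discount) * p_good,
            POOR: (1 - discount) * (1 - p_good),
            THETA: discount}

# Fuse the evidence from three SVM models trained on different feature groups.
m = bpa_from_svm(0.80)
for p in (0.65, 0.90):
    m = dempster_combine(m, bpa_from_svm(p))
print(m)
```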
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
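For readers unfamiliar with the tool, the model-building stage of the four-step workflow is typically driven by a short script built around MODELLER's automodel class. The sketch below follows the pattern of the basic tutorial script; the alignment file and template/target codes are placeholders, not files shipped with this unit.

```python
# Minimal sketch of the model-building step with the classic MODELLER
# automodel class. File and code names are placeholders following the
# MODELLER tutorial conventions.
from modeller import *
from modeller.automodel import *   # loads automodel and the assess module

log.verbose()
env = environ()

a = automodel(env,
              alnfile='TvLDH-template.ali',  # target-template alignment (placeholder)
              knowns='template_pdb',          # code of the template structure
              sequence='TvLDH')               # code of the target sequence
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.assess_methods = (assess.DOPE, assess.GA341)  # model evaluation step
a.make()
```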
Identification of cortex in magnetic resonance images
NASA Astrophysics Data System (ADS)
VanMeter, John W.; Sandon, Peter A.
1992-06-01
The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient's MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level. We have used a sum-of-Gaussians approximation of the histogram to perform this classification. Finally, we identify pixels corresponding to cortex by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid and white matter, as determined by the previous classification step.
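The second step, gray-level classification via a sum-of-Gaussians histogram fit, maps directly onto a Gaussian mixture model. The sketch below is a modern stand-in using scikit-learn, not the original implementation; the assumption that tissue classes order by mean intensity (e.g. CSF < gray matter < white matter on T1-weighted images) depends on the MR sequence.

```python
# Minimal sketch of the initial gray-level classification step: fit a
# sum-of-Gaussians (Gaussian mixture) to the intensity distribution and label
# each pixel as one of three tissue classes.
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_tissues(image, n_tissues=3):
    intensities = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(image.shape)
    # Reorder labels so 0,1,2 follow increasing mean intensity
    # (assumed CSF < GM < WM on a T1-weighted image).
    order = np.argsort(gmm.means_.ravel())
    lut = np.empty(n_tissues, dtype=int)
    lut[order] = np.arange(n_tissues)
    return lut[labels]
```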
Collaborative partnership in age-friendly cities: two case studies from Quebec, Canada.
Garon, Suzanne; Paris, Mario; Beaulieu, Marie; Veil, Anne; Laliberté, Andréanne
2014-01-01
This article aims to explain the collaborative partnership conditions and factors that foster implementation effectiveness within the age-friendly cities (AFC) in Quebec (AFC-QC), Canada. Based on a community-building approach that emphasizes collaborative partnership, the AFC-QC implementation process is divided into three steps: (1) social diagnostic of older adults' needs; (2) an action plan based on a logic model; and (3) implementation through collaborations. AFC-QC promotes direct involvement of older adults and seniors' associations at each of the three steps of the implementation process, as well as other stakeholders in the community. Based on two contrasting case studies, this article illustrates the importance of collaborative partnership for the success of AFC implementation. Results show that stakeholders, agencies, and organizations are exposed to a new form of governance where coordination and collaborative partnership among members of the steering committee are essential. Furthermore, despite the importance of the senior associations' participation in the process, they encountered significant limits in the capacity of implementing age-friendly environments solely by themselves. In conclusion, we identify the main collaborative partnership conditions and factors in AFC-QC.
ERIC Educational Resources Information Center
Haghani, Nader; Kiani, Samira
2018-01-01
The concept of text-oriented vocabulary exercises is based on Kühn's (2000) three-step model of vocabulary teaching--receptive, reflective and productive vocabulary exercises--which focuses on working with texts. Since the production is in principle more exhausting than the reception--as can be seen from the Levels of Processing Effect--one can…
ERIC Educational Resources Information Center
Braune, Rolf; Foshay, Wellesley R.
1983-01-01
The proposed three-step strategy for research on human information processing--concept hierarchy analysis, analysis of example sets to teach relations among concepts, and analysis of problem sets to build a progressively larger schema for the problem space--may lead to practical procedures for instructional design and task analysis. Sixty-four…
NASA Technical Reports Server (NTRS)
McGinness, Kathleen E.; Wright, Martin C.; Joyce, Gerald F.
2002-01-01
Variants of the class I ligase ribozyme, which catalyzes joining of the 3' end of a template-bound oligonucleotide to its own 5' end, have been made to evolve in a continuous manner by a simple serial transfer procedure that can be carried out indefinitely. This process was expanded to allow the evolution of ribozymes that catalyze three successive nucleotidyl addition reactions: two template-directed mononucleotide additions followed by RNA ligation. During the development of this behavior, a population of ribozymes was maintained against an overall dilution of more than 10^406. The resulting ribozymes were capable of catalyzing the three-step reaction pathway, with nucleotide addition occurring in either a 5'-to-3' or a 3'-to-5' direction. This purely chemical system provides a functional model of a multi-step reaction pathway that is undergoing Darwinian evolution.
Stochastic modelling of animal movement.
Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M
2010-07-27
Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
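The contrast between standard random walks and the over-dispersed Lévy movement described in arena (iii) is easy to demonstrate numerically. The sketch below is illustrative only; the Pareto step-length distribution and its parameters are assumptions, not drawn from the paper.

```python
# Minimal sketch contrasting a fixed-step random walk with a Levy-type walk:
# heavy-tailed step lengths produce the over-dispersed displacement pattern
# described in the text. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def walk(n_steps, levy=False, mu=2.0):
    # Heavy-tailed step lengths for the Levy walk; unit steps otherwise.
    lengths = rng.pareto(mu - 1.0, n_steps) + 1.0 if levy else np.ones(n_steps)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)  # uncorrelated headings
    steps = lengths[:, None] * np.column_stack((np.cos(angles), np.sin(angles)))
    return np.cumsum(steps, axis=0)

brownian = walk(10_000)
levy = walk(10_000, levy=True)
print("net displacement:", np.linalg.norm(brownian[-1]), np.linalg.norm(levy[-1]))
```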
Gait parameter and event estimation using smartphones.
Pepa, Lucia; Verdini, Federica; Spalazzi, Luca
2017-09-01
Smartphones can greatly help with gait parameter estimation during daily living, but their accuracy needs deeper evaluation against a gold standard. The objective of the paper is a step-by-step assessment of smartphone performance in heel strike, step count, step period, and step length estimation. The influence of smartphone placement and orientation on estimation performance is evaluated as well. This work relies on a smartphone app developed to acquire, process, and store inertial sensor data and rotation matrices about device position. Smartphone alignment was evaluated by expressing the acceleration vector in three reference frames. Two smartphone placements were tested. Three methods for heel strike detection were considered. On the basis of estimated heel strikes, step count is performed, step period is obtained, and the inverted pendulum model is applied for step length estimation. Pearson correlation coefficient, absolute and relative errors, ANOVA, and Bland-Altman limits of agreement were used to compare smartphone estimation with stereophotogrammetry on eleven healthy subjects. High correlations were found between smartphone and stereophotogrammetric measures: up to 0.93 for step count, 0.99 for heel strike, 0.96 for step period, and 0.92 for step length. Error ranges are comparable to those in the literature. Smartphone placement did not affect the performance. The major influence of acceleration reference frames and heel strike detection method was found in step count. This study provides detailed information about the expected accuracy when a smartphone is used as a gait monitoring tool. The obtained results encourage real life applications. Copyright © 2017 Elsevier B.V. All rights reserved.
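The inverted pendulum model mentioned in the pipeline gives step length from the vertical excursion of the body's centre of mass. A minimal sketch follows, using the well-known relation step length ≈ 2*sqrt(2*l*h - h^2) (Zijlstra-style), where l is the pendulum (leg) length and h is obtained by double-integrating vertical acceleration between heel strikes. The toy signal and parameter values are illustrative, not from this study.

```python
# Minimal sketch of inverted pendulum step-length estimation.
import numpy as np

def step_length_inverted_pendulum(acc_vertical, fs, leg_length):
    """acc_vertical: gravity-free vertical acceleration (m/s^2) for one step."""
    dt = 1.0 / fs
    velocity = np.cumsum(acc_vertical) * dt        # first integration
    displacement = np.cumsum(velocity) * dt        # second integration
    h = displacement.max() - displacement.min()    # vertical CoM excursion
    return 2.0 * np.sqrt(max(2.0 * leg_length * h - h**2, 0.0))

fs = 100.0  # Hz
t = np.arange(0, 0.5, 1.0 / fs)                    # one ~0.5 s step
acc = 0.8 * np.sin(2 * np.pi * 2.0 * t)            # toy vertical acceleration
print(step_length_inverted_pendulum(acc, fs, leg_length=0.9), "m")
```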
Vandenbosch, Laura; Eggermont, Steven
2015-04-01
This longitudinal study (N = 730) explored whether the three-step process of self-objectification (internalization of appearance ideals, valuing appearance over competence, and body surveillance) could explain the influence of sexual media messages on adolescents' sexual behaviors. A structural equation model showed that reading sexualizing magazines (Time 1) was related to the internalization of appearance ideals and valuing appearance over competence (Time 2). In turn, the internalization of appearance ideals was positively associated with body surveillance and valuing appearance over competence (all at Time 2). Valuing appearance over competence was also positively associated with body surveillance (all at Time 2). Lastly, body surveillance (Time 2) positively related to the initiation of French kissing (Time 3) whereas valuing appearance over competence (Time 2) positively related to the initiation of sexual intercourse (Time 3). No significant relationship was observed for intimate touching. The discussion focused on the explanatory role of self-objectification in media effects on adolescents' sexual behaviors.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
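The response system in the second step above is a leaky integrate-and-fire (LIF) neuron. As a minimal sketch of that forward model (the component into which the estimated Gamma-process spiking characteristics are mapped), the code below integrates a constant-input LIF neuron; parameter values are illustrative, not those of the study.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron driven by a
# constant input current, integrated with the Euler method.
import numpy as np

def lif_spike_times(i_input, t_max=1.0, dt=1e-4,
                    tau=0.02, r=1.0, v_thresh=1.0, v_reset=0.0):
    """Euler integration of tau*dV/dt = -V + R*I with threshold and reset."""
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-v + r * i_input)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return np.array(spikes)

spikes = lif_spike_times(i_input=1.5)
isi = np.diff(spikes)
print(f"{len(spikes)} spikes, mean ISI = {isi.mean():.4f} s")
```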
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Elsawah, S.; Pierce, S. A.; Ames, D. P.
2016-12-01
The National Socio-Environmental Synthesis Center (SESYNC) Core Modelling Practices Pursuit is developing resources to describe core practices for developing and using models to support integrated water resource management. These practices implement specific steps in the modelling process with an interdisciplinary perspective; however, the particular practice that is most appropriate depends on contextual aspects specific to the project. The first task of the pursuit is to identify the various steps for which implementation practices are to be described. This paper reports on those results. The paper draws on knowledge from the modelling process literature for environmental modelling (Jakeman et al., 2006), engaging stakeholders (Voinov and Bousquet, 2010) and general modelling (Banks, 1999), as well as the experience of the consortium members. We organise the steps around the four modelling phases. The planning phase identifies what is to be achieved, how and with what resources. The model is built and tested during the construction phase, and then used in the application phase. Finally, models that become part of the ongoing policy process require a maintenance phase. For each step, the paper focusses on what is to be considered or achieved, rather than how it is performed. This reflects the separation of the steps from the practices that implement them in different contexts. We support description of steps with a wide range of examples. Examples are designed to be generic and do not reflect any one project or context, but instead are drawn from common situations or from extremely different ones so as to highlight some of the issues that may arise at each step. References Banks, J. (1999). Introduction to simulation. In Proceedings of the 1999 Winter Simulation Conference. Jakeman, A. J., R. A. Letcher, and J. P. Norton (2006). Ten iterative steps in development and evaluation of environmental models. Environmental Modelling and Software 21, 602-614. Voinov, A. and F. Bousquet (2010). Modelling with stakeholders. Environmental Modelling & Software 25 (11), 1268-1281.
A Three-Step Atomic Layer Deposition Process for SiNx Using Si2Cl6, CH3NH2, and N2 Plasma.
Ovanesyan, Rafaiel A; Hausmann, Dennis M; Agarwal, Sumit
2018-06-06
We report a novel three-step SiNx atomic layer deposition (ALD) process using Si2Cl6, CH3NH2, and N2 plasma. In a two-step process, nonhydrogenated chlorosilanes such as Si2Cl6 with N2 plasmas lead to poor-quality SiNx films that oxidize rapidly. The intermediate CH3NH2 step was therefore introduced in the ALD cycle to replace the NH3 plasma step with a N2 plasma, while using Si2Cl6 as the Si precursor. This three-step process lowers the atomic H content and improves the film conformality on high-aspect-ratio nanostructures as Si-N-Si bonds are formed during a thermal CH3NH2 step in addition to the N2 plasma step. During ALD, the reactive surface sites were monitored using in situ surface infrared spectroscopy. Our infrared spectra show that, on the post-N2 plasma-treated SiNx surface, Si2Cl6 reacts primarily with the surface -NH2 species to form surface -SiClx (x = 1, 2, or 3) bonds, which are the reactive sites during the CH3NH2 cycle. In the N2 plasma step, reactive -NH2 surface species are created because of the surface H available from the -CH3 groups. At 400 °C, the SiNx films have a growth per cycle of ∼0.9 Å with ∼12 atomic percent H. The films grown on high-aspect-ratio nanostructures have a conformality of ∼90%.
A New Insight into the Mechanism of NADH Model Oxidation by Metal Ions in Non-Alkaline Media.
Yang, Jin-Dong; Chen, Bao-Long; Zhu, Xiao-Qing
2018-06-11
It has long been controversial whether the oxidations of NADH and its models by metal ions in non-alkaline media follow a three-step (e-H+-e) or a two-step (e-H•) mechanism; the latter has been accepted by the majority of researchers. In this work, 1-benzyl-1,4-dihydronicotinamide (BNAH) and 1-phenyl-1,4-dihydronicotinamide (PNAH) were used as NADH models, and the ferrocenium ion (Fc+) as an electron acceptor. The kinetics of the oxidations of the NADH models by Fc+ in pure acetonitrile were monitored by UV-Vis absorption, and a quadratic relationship between kobs and the concentrations of the NADH models was found for the first time. The rate expression developed for the reactions according to the three-step mechanism is quite consistent with the quadratic curves. The rate constants, thermodynamic driving forces and KIEs of each elementary step of the reactions were estimated; all the results supported the three-step mechanism. The intrinsic kinetic barriers of the proton transfer from BNAH+• to BNAH and of the hydrogen atom transfer from BNAH+• to BNAH+• were estimated: the former is 11.8 kcal/mol, and the latter is larger than 24.3 kcal/mol. It is the large intrinsic kinetic barrier of the hydrogen atom transfer that makes the reactions follow the three-step rather than the two-step mechanism. Further investigation of the factors affecting the intrinsic kinetic barriers of chemical reactions indicated that the large barrier of the hydrogen atom transfer originates from the repulsion of positive charges between BNAH+• and BNAH+•. The greatest contribution of this work is the discovery of the quadratic dependence of kobs on the concentrations of the NADH models, which is inconsistent with the conventional "two-step mechanism" viewpoint on the oxidations of NADH and its models by metal ions in non-alkaline media.
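The key experimental observation, the quadratic dependence of kobs on substrate concentration, is straightforward to test by polynomial fitting. The sketch below uses made-up data for illustration; it is not the paper's data or analysis code.

```python
# Minimal sketch: fit the observed pseudo-first-order rate constant k_obs
# against NADH-model concentration and inspect the quadratic dependence.
import numpy as np

conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0]) * 1e-3   # [BNAH] in mol/L (illustrative)
k_obs = np.array([0.8, 2.1, 4.0, 6.5, 9.6, 13.2])        # s^-1 (illustrative)

# Quadratic model: k_obs = a*[BNAH]^2 + b*[BNAH] + c
coeffs = np.polyfit(conc, k_obs, deg=2)
residuals = k_obs - np.polyval(coeffs, conc)
print("a, b, c =", coeffs, "; max residual =", np.abs(residuals).max())
```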
Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?
NASA Astrophysics Data System (ADS)
Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2018-01-01
Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
Cloud, Richard N; Kingree, J B
2008-01-01
Researchers have observed that a majority of addicted persons who are encouraged and facilitated by treatment providers to attend twelve-step (TS) programs either drop out or sporadically use twelve-step programs following treatment. This is troubling given considerable evidence of TS program benefits associated with regular weekly attendance and ubiquitous reliance by treatment professionals on these programs to provide important support services. This chapter reviews and advances theory of TS utilization and dose that is supported by prior research, multivariate models, and scales that predict risk of TS meeting underutilization. Advancing theory should organize and clarify the process of initial utilization, guide intervention development, and improve adherence of TS program referrals, all of which should lead to improved treatment planning and better outcomes. Three theories are integrated to explain processes that may influence TS program dose: the health belief model, self-determination theory (motivational theory), and a person-in-organization cultural fit theory. Four multidimensional scales developed specifically to predict participation are described. Implications for practice and future research are considered in a final discussion. Information contained in this chapter raises awareness of the need for TS-focused treatments to focus on achieving weekly attendance during and after treatment.
Self-narrative reconstruction in emotion-focused therapy: A preliminary task analysis.
Cunha, Carla; Mendes, Inês; Ribeiro, António P; Angus, Lynne; Greenberg, Leslie S; Gonçalves, Miguel M
2017-11-01
This research explored the consolidation phase of emotion-focused therapy (EFT) for depression and studies-through a task-analysis method-how client-therapist dyads evolved from the exploration of the problem to self-narrative reconstruction. Innovative moments (IMs) were used to situate the process of self-narrative reconstruction within sessions, particularly through reconceptualization and performing change IMs. We contrasted the observation of these occurrences with a rational model of self-narrative reconstruction, previously built. This study presents the rational model and the revised rational-empirical model of the self-narrative reconstruction task in three EFT dyads, suggesting nine steps necessary for task resolution: (1) Explicit recognition of differences in the present and steps in the path of change; (2) Development of a meta-perspective contrast between present self and past self; (3) Amplification of contrast in the self; (4) A positive appreciation of changes is conveyed; (5) Occurrence of feelings of empowerment, competence, and mastery; (6) Reference to difficulties still present; (7) Emphasis on the loss of centrality of the problem; (8) Perception of change as a gradual, developing process; and (9) Reference to projects, experiences of change, or elaboration of new plans. Central aspects of therapist activity in facilitating the client's progression along these nine steps are also elaborated.
NASA Astrophysics Data System (ADS)
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2015-04-01
The effects of climatic inter-annual fluctuations and human activities on the global carbon cycle are uncertain and currently a major issue in global vegetation models. Individual-based forest gap models, on the other hand, model vegetation structure and dynamics on a small spatial (<100 ha) and large temporal scale (>1000 years). They are well-established tools to reproduce successions of highly-diverse forest ecosystems and investigate disturbances such as logging or fire events. However, the parameterizations of the relationships between short-term climate variability and forest model processes are often uncertain in these models (e.g. daily variable temperature and gross primary production (GPP)) and cannot be constrained from forest inventories. We addressed this uncertainty and linked high-resolution Eddy-covariance (EC) data with an individual-based forest gap model. The forest model FORMIND was applied to three diverse tropical forest sites in the Amazonian rainforest. Species diversity was categorized into three plant functional types. The parameterizations for the steady-state of biomass and forest structure were calibrated and validated with different forest inventories. The parameterizations of relationships between short-term climate variability and forest model processes were evaluated with EC-data on a daily time step. The validations of the steady-state showed that the forest model could reproduce biomass and forest structures from forest inventories. The daily estimations of carbon fluxes showed that the forest model reproduces GPP as observed by the EC-method. Daily fluctuations of GPP were clearly reflected as a response to daily climate variability. Ecosystem respiration remains a challenge on a daily time step due to a simplified soil respiration approach. In the long-term, however, the dynamic forest model is expected to estimate carbon budgets for highly-diverse tropical forests where EC-measurements are rare.
Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba
2017-01-01
The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. The authors created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators in order to identify at-risk students with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examiners subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
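Model 1 above is an ordinary least-squares regression on two predictors. A minimal sketch follows; the file and column names and the at-risk threshold are hypothetical, and the reported R^2 of about 0.56 is from the abstract, not from running this code.

```python
# Minimal sketch of model 1: predict Step 2 CK from preclinical mean course
# exam score and Step 1 score with ordinary least squares.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("cohorts_2011_2013.csv")  # hypothetical file name
X = df[["preclinical_mean", "step1_score"]]
y = df["step2_ck_score"]

model = LinearRegression().fit(X, y)
print("R^2 =", model.score(X, y))  # the published model reported ~0.56

# Flag students whose predicted score falls below a chosen support threshold.
df["predicted_step2ck"] = model.predict(X)
at_risk = df[df["predicted_step2ck"] < 215]  # threshold is an assumption
```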
Analysis and optimization of dynamic model of eccentric shaft grinder
NASA Astrophysics Data System (ADS)
Gao, Yangjie; Han, Qiushi; Li, Qiguang; Peng, Baoying
2018-04-01
The eccentric shaft servo grinder is the core equipment in the process chain for machining eccentric shafts. The establishment of the movement model and the determination of the kinematic relation of the axes in the grinding process directly affect the quality of the grinding process; there are many error factors in grinding, and it is very important to analyze their influence on workpiece quality. A three-dimensional model of the eccentric shaft grinder was drawn with the Pro/E three-dimensional drawing software and imported into the ANSYS Workbench finite element analysis software, and a finite element analysis was carried out; the variation and parameters of each component of the bed were then obtained from the modal analysis results. The natural frequencies and mode shapes of the first six modes of the eccentric shaft grinder were obtained by modal analysis, the weak links among the grinder's parts were identified, and a reference improvement method is proposed for the future design of eccentric shaft grinders.
NASA Astrophysics Data System (ADS)
Wagemans, Johan
2017-07-01
Matthew Pelowski and his colleagues from the Helmut Leder lab [17] have made a remarkable contribution to the field of art perception by reviewing the extensive and varied literature (over 300 references) on all the factors involved from a coherent, synthetic perspective: the Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP). VIMAP builds on earlier attempts from the same group to provide a comprehensive theoretical framework, but it is much wider in scope and richer in the number of levels and topics covered under its umbrella. It is particularly strong in its discussion of the different psychological processes that lead to a wide range of possible responses to art, from mundane, superficial reactions to more profound responses characterized as moving, disturbing, and transformative. By including physiological, emotional, and evaluative factors, the model is able to address truly unique, even intimate responses to art such as awe, chills, thrills, and the experience of the sublime. The unique way in which this rich set of possible responses to art is achieved is through a series of five mandatory consecutive processing steps (each with its own typical duration), followed by two conditional additional steps (which take more time). Three processing checks along this cascade lead to three more or less spontaneous outcomes (<60 sec) and two more time-consuming ones (see their Fig. 1 for an excellent overview). I have no doubt that VIMAP will inspire a whole generation of scientists investigating the perception and appreciation of art, testing specific hypotheses derived from this framework for decades to come.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Unlike traditional optimization methods, two extra steps (one determining parameter sensitivity, the other choosing the optimum initial values of the sensitive parameters) are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method improves the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
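The three steps translate naturally into a screening pass, a coarse scan, and a Nelder-Mead refinement. The sketch below is a toy stand-in: objective() is a placeholder for the model-evaluation metric, which in practice requires a full GCM run per evaluation, and the thresholds and grids are assumptions.

```python
# Minimal sketch of the "three-step" tuning idea:
# (1) screen parameter sensitivity, (2) pick good starting values,
# (3) refine with the downhill simplex (Nelder-Mead) method.
import numpy as np
from scipy.optimize import minimize

def objective(params):                # placeholder for the GCM skill metric
    return np.sum((params - np.array([0.3, 1.7])) ** 2)

bounds = [(0.0, 1.0), (0.5, 3.0)]
default = np.array([0.5, 1.0])

# Step 1: sensitivity screening by one-at-a-time perturbation.
sens = []
for i, (lo, hi) in enumerate(bounds):
    trial = default.copy()
    trial[i] = hi
    sens.append(abs(objective(trial) - objective(default)))
sensitive = [i for i, s in enumerate(sens) if s > 1e-3]

# Step 2: coarse scan for a good initial value of each sensitive parameter.
start = default.copy()
for i in sensitive:
    grid = np.linspace(*bounds[i], 11)
    scores = [objective(np.where(np.arange(len(start)) == i, g, start)) for g in grid]
    start[i] = grid[int(np.argmin(scores))]

# Step 3: downhill simplex refinement from the informed starting point.
result = minimize(objective, start, method="Nelder-Mead")
print(result.x)
```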
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compared and contrasted the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
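The proposed two-step estimation (occurrence, then amount on wet days) can be sketched in a few lines. The predictors and column names below are hypothetical, and the log transform of amounts is our assumption, not necessarily the authors' exact specification.

```python
# Minimal sketch of two-step daily precipitation estimation: a logistic
# regression decides occurrence, and a separate regression fit on wet days
# only estimates the amount.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

df = pd.read_csv("daily_precip_predictors.csv")  # hypothetical file name
X = df[["nearby_station_mean", "elevation", "slope"]]
wet = (df["precip_mm"] > 0).astype(int)

# Step 1: occurrence model.
occ = LogisticRegression().fit(X, wet)

# Step 2: amount model, fit on wet days only (log transform stabilises variance).
wet_rows = df["precip_mm"] > 0
amt = LinearRegression().fit(X[wet_rows], np.log(df.loc[wet_rows, "precip_mm"]))

# Combined estimate: expected amount only where occurrence is predicted.
estimate = occ.predict(X) * np.exp(amt.predict(X))
```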
NASA Astrophysics Data System (ADS)
Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui
2013-04-01
A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme moves a smaller number of meshes adaptively in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over the old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme preserves still water equilibrium and positivity of water depth within both the mesh movement and PDE discretization steps; (ii) it improves the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; and (iii) it solves the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.
Downscaling scheme to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, Annika; Venema, Victor; Lindau, Ralf; Ament, Felix; Simmer, Clemens
2010-05-01
The earth's surface is characterized by heterogeneity at a broad range of scales. Weather forecast models and climate models are not able to resolve this heterogeneity at the smaller scales. Many processes in the soil or at the surface, however, are highly nonlinear. This holds, for example, for evaporation processes, where stomatal or aerodynamic resistances are nonlinear functions of the local micro-climate. Other examples are threshold-dependent processes, e.g., the generation of runoff or the melting of snow. It has been shown that using averaged parameters in the computation of these processes leads to errors and especially biases, due to the involved nonlinearities. Thus it is necessary to account for sub-grid scale surface heterogeneities in atmospheric modeling. One approach to take the variability of the earth's surface into account is the mosaic approach. Here the soil-vegetation-atmosphere transfer (SVAT) model is run at an explicitly higher resolution than the atmospheric part of a coupled model, which is feasible because a SVAT model generally has lower computational costs than the atmospheric part. The question arises how to deal with the scale differences at the interface between the two resolutions. Usually the assumption of a homogeneous forcing for all sub-pixels is made. However, over a heterogeneous surface the boundary layer is usually also heterogeneous, so assuming constant atmospheric forcing can again bias the turbulent heat fluxes because the variability of the atmospheric forcing is neglected. We have therefore developed and tested a downscaling scheme to disaggregate the atmospheric variables of the lower atmosphere that are used as input to force a SVAT model. Our downscaling scheme consists of three steps: 1) a bi-quadratic spline interpolation of the coarse-resolution field; 2) a "deterministic" part, in which relationships between surface and near-surface variables are exploited; and 3) a noise-generation step, in which the still-missing, unexplained variance is added as noise. The scheme has been developed and tested based on high-resolution (400 m) model output of the weather forecast (and regional climate) COSMO model. Downscaling steps 1 and 2 considerably reduce the error made by the homogeneity assumption, whereas the third step brings the sub-grid scale variance into close agreement with the reference. This is, however, achieved at the cost of higher root mean square errors. Thus, before applying the downscaling system to atmospheric data, a decision should be made whether the lowest possible errors (apply only downscaling steps 1 and 2) or the most realistic sub-grid scale variability (apply step 3 as well) is desired. This downscaling scheme is currently being implemented into the COSMO model, where it will be used in combination with the mosaic approach. It can, however, also be applied to drive stand-alone SVAT models or hydrological models, which usually also need high-resolution atmospheric forcing data.
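Steps 1 and 3 of the scheme are easy to illustrate numerically; the site-specific deterministic step 2 is omitted here. In the sketch below, the coarse field, the target resolution, and the missing-variance value are all illustrative assumptions, not values from the COSMO experiments.

```python
# Minimal sketch of downscaling steps 1 and 3: a bi-quadratic spline
# interpolation of a coarse atmospheric field, followed by adding noise whose
# variance restores the sub-grid variability.
import numpy as np
from scipy.interpolate import RectBivariateSpline

coarse = np.random.default_rng(1).normal(285.0, 1.5, (10, 10))  # e.g. 2 m temperature
xc = yc = np.arange(10)

# Step 1: bi-quadratic (kx=ky=2) spline onto a 7x finer grid.
spline = RectBivariateSpline(xc, yc, coarse, kx=2, ky=2)
xf = yf = np.linspace(0, 9, 70)
fine = spline(xf, yf)

# Step 3: add the unexplained variance as noise (the target variance is an
# assumption; in the scheme it is the variance not explained by steps 1 and 2).
missing_var = 0.2
fine_noisy = fine + np.random.default_rng(2).normal(0.0, np.sqrt(missing_var), fine.shape)
```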
Fox Valley Technical College Quality First Process Model.
ERIC Educational Resources Information Center
Fox Valley Technical Coll., Appleton, WI.
An overview is provided of the Quality First Process Model developed by Fox Valley Technical College (FVTC), Wisconsin, to provide guidelines for quality instruction and service consistent with the highest educational standards. The 16-step model involves activities that should be adaptable to any organization. The steps of the quality model are…
NASA Astrophysics Data System (ADS)
Jiménez Jaramillo, M. A.; Camacho Botero, L. A.; Vélez Upegui, J. I.
2010-12-01
Variation in stream morphology along a basin drainage network leads to different hydraulic patterns and sediment transport processes. Moreover, solute transport processes along streams, and stream habitats for fish and microorganisms, rely on stream corridor structure, including elements such as bed forms, channel patterns, riparian vegetation, and the floodplain. In this work, solute transport simulation and stream habitat identification are carried out at the basin scale. A reach-scale morphological classification system based on channel slope and specific stream power was implemented using digital elevation models and hydraulic geometry relationships. Although this morphological framework allows identification of cascade, step-pool, plane-bed, and pool-riffle morphologies along the drainage network, it does not yet account for floodplain configuration or bed-form identification within those channel types. Hence, as a first application case, and in order to obtain parsimonious three-dimensional characterizations of drainage channels, the framework has been extended with a topographical floodplain delimitation based on the assessment of a multi-resolution valley bottom flatness index and a stochastic bed-form representation of the step-pool morphology. Model outcomes were tested with respect to in-stream water storage for different flow conditions and representative travel times according to the Aggregated Dead Zone (ADZ) model conceptualization of solute transport processes.
The DAB model of drawing processes
NASA Technical Reports Server (NTRS)
Hochhaus, Larry W.
1989-01-01
The problem of automatic drawing was investigated in two ways. First, a DAB model of drawing processes was introduced. DAB stands for three types of knowledge hypothesized to support drawing abilities, namely, Drawing Knowledge, Assimilated Knowledge, and Base Knowledge. Speculation concerning the content and character of each of these subsystems of the drawing process is introduced and the overall adequacy of the model is evaluated. Second, eight experts were each asked to understand six engineering drawings and to think aloud while doing so. It is anticipated that a concurrent protocol analysis of these interviews can be carried out in the future. Meanwhile, a general description of the videotape database is provided. In conclusion, the DAB model was praised as a worthwhile first step toward solution of a difficult problem, but was considered by and large inadequate to the challenge of automatic drawing. Suggestions for improvements on the model were made.
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
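The joint search over pre-processing and SVM hyper-parameters can be sketched with a small genetic algorithm. The pre-processing options (none, SNV, first derivative), the genome encoding, and the GA settings below are illustrative assumptions, not the authors' GENOPT-SVM configuration.

```python
# Minimal sketch of a GA searching jointly over a pre-processing choice and
# SVM hyper-parameters (C, gamma), scored by cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def snv(X):  # standard normal variate, per spectrum
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

PREPROC = [lambda X: X, snv, lambda X: np.gradient(X, axis=1)]

def fitness(genome, X, y):
    pre, log_c, log_g = genome
    Xp = PREPROC[int(pre)](X)
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, Xp, y, cv=5).mean()

def evolve(X, y, pop_size=20, generations=15):
    # Genome: [preprocessing index, log10(C), log10(gamma)].
    pop = [np.array([rng.integers(3), rng.uniform(-2, 3), rng.uniform(-5, 0)])
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(g, X, y) for g in pop]
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = np.where(rng.random(3) < 0.5, parents[a], parents[b])  # uniform crossover
            child[1:] += rng.normal(0, 0.2, 2)              # mutate continuous genes
            if rng.random() < 0.1:
                child[0] = rng.integers(3)                  # mutate pre-processing gene
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, X, y))

# Demo on synthetic "spectra" (replace with real NIR/FTIR data and labels).
X_demo = rng.normal(size=(60, 50))
y_demo = rng.integers(0, 2, 60)
best = evolve(X_demo, y_demo, pop_size=10, generations=5)
print("best genome (preproc index, log10 C, log10 gamma):", best)
```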
van Limburg, Maarten; Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-08-13
It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders, so that the technology reflects their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step in developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. This paper demonstrates the potential of several stakeholder-oriented analysis methods, whose practical application is illustrated using Infectionmanager as an example case. In this paper, we aim to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation. We divided business modeling into 4 main research steps. As part of stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analysis, we performed "basic stakeholder analysis," stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. The step-by-step guide for business modeling with stakeholder involvement enables eHealth researchers to apply a systematic and multidisciplinary, co-creative approach to implementing eHealth. Business modeling becomes an active part of the entire development process of eHealth and establishes an early focus on implementation, in which stakeholders help to co-create the basis necessary for satisfactory success and uptake of the eHealth technology.
NASA Astrophysics Data System (ADS)
Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel
2016-10-01
Advanced technology nodes, 10 nm and beyond, employing multipatterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. A self-aligned quadruple patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects of successive reactive ion etch and spacer deposition steps. These processes induce a shift in the pitch value of one fin relative to a neighboring fin, which is known as pitch walking. Pitch walking affects device performance as well as later processes, which assume consistent spacing between fins. In SAQP, there are three pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by a single evaluation technique, or even with reference metrology such as transmission electron microscopy. We utilize a virtual reference to generate a scatterometry model to measure pitch walk for the SAQP process flow.
CR-100 synthetic zeolite adsorption characteristics toward Northern Banat groundwater ammonia.
Tomić, Željko; Kukučka, Miroslav; Stojanović, Nikoleta Kukučka; Kukučka, Andrej; Jokić, Aleksandar
2016-10-14
The adsorption characteristics of the synthetic zeolite CR-100 in a fixed-bed system with a continuous flow of groundwater containing elevated ammonia concentrations were examined. A novel mathematical approach made it possible to calculate the adsorbent mass throughout the mass transfer zone, as well as the zeolite adsorption capacity at every sampling point in time or effluent volume. The investigated adsorption process consisted of three clearly separated steps, indicating distinct sorption kinetics. The first step was characterized by a decrease and small changes in effluent ammonia concentration versus experiment time and in the quantity of adsorbed ammonia per unit mass of zeolite. The consequences of this phenomenon were shown in the plots of the Freundlich and Langmuir isotherm models through better linear correlation when the data points belonging to the first step were excluded. The Temkin and Dubinin-Radushkevich isotherm models showed the opposite tendency, fitting the overall measurements better. According to the obtained isotherm parameters, the investigated process was found to be multilayer physicochemical adsorption, and the synthetic zeolite CR-100 is a promising material for removing ammonia from Northern Banat groundwater, with an ammonia removal efficiency of 90%.
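The isotherm analysis described above boils down to fitting standard model equations to equilibrium data. A minimal sketch follows, fitting the Langmuir and Freundlich forms by non-linear least squares; the data points are illustrative, not the Northern Banat measurements.

```python
# Minimal sketch: fit Langmuir and Freundlich isotherms to equilibrium data
# (q_e vs C_e) by non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, n):
    return k_f * c ** (1.0 / n)

c_e = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])   # mg NH4+/L at equilibrium (illustrative)
q_e = np.array([1.1, 2.0, 3.1, 4.4, 5.6, 6.5])   # mg adsorbed per g zeolite (illustrative)

for name, f, p0 in [("Langmuir", langmuir, (8.0, 1.0)),
                    ("Freundlich", freundlich, (3.0, 2.0))]:
    popt, _ = curve_fit(f, c_e, q_e, p0=p0)
    ss_res = np.sum((q_e - f(c_e, *popt)) ** 2)
    print(name, popt, "SSE =", round(ss_res, 3))
```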
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling and estimation of solar radiation at ground level raise problems ranging from the equation of time and the Earth-Sun distance equation to solar declination and the calculation of surface irradiance. Many studies have reported the inability of the purely theoretical equations to deliver accurate radiation estimates, so many authors have applied corrections through calibration against field pyranometers (solarimeters) or satellite data, the latter being a weaker technique because it must differentiate between radiation and radiant kinetic effects. Because a properly calibrated ground weather station is available in the Susques Salar in Jujuy Province, Republic of Argentina, the variable in question was modeled through the following process: (1) theoretical modeling; (2) graphical study of the theoretical and measured data; (3) primary calibration of the data through hourly segmentation, horizontal shifting, and addition of an asymptotic constant; and (4) analysis of scatter plots and contrast of the series. Based on these steps, the following modeling results were obtained. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and the Excel Solver algorithm was applied to least-squares minimization between actual and modeled values, obtaining new asymptote values with the corresponding reformulation of the theoretical data; a monthly constant was added over the set time range (4:00 pm to 6:00 pm). Step four: the coefficients of the modeling equation showed monthly correlations between actual and theoretical data ranging from 0.7 to 0.9.
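The theoretical step rests on standard solar-geometry formulas. A minimal sketch follows, using Cooper's declination formula and a solar constant of 1367 W/m^2 to compute extraterrestrial irradiance on a horizontal surface; the Susques latitude used is approximate, and this is not the authors' full surface-irradiance model.

```python
# Minimal sketch: solar declination, hour angle, and extraterrestrial
# irradiance on a horizontal surface from standard formulas.
import numpy as np

def extraterrestrial_irradiance(day_of_year, latitude_deg, solar_hour):
    g_sc = 1367.0                                           # solar constant, W/m^2
    decl = np.radians(23.45 * np.sin(np.radians(360.0 * (284 + day_of_year) / 365.0)))
    lat = np.radians(latitude_deg)
    omega = np.radians(15.0 * (solar_hour - 12.0))          # hour angle
    cos_zenith = (np.sin(lat) * np.sin(decl)
                  + np.cos(lat) * np.cos(decl) * np.cos(omega))
    eccentricity = 1.0 + 0.033 * np.cos(np.radians(360.0 * day_of_year / 365.0))
    return max(g_sc * eccentricity * cos_zenith, 0.0)

# Solar noon at Susques (latitude ~ -23.4 deg) near the June solstice (day 172).
print(extraterrestrial_irradiance(172, -23.4, 12.0), "W/m^2")
```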
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control the maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
Income Smoothing: Methodology and Models.
1986-05-01
studies have all followed a similar research process (Figure 1). All were ex post studies and included the following steps: 1. A smoothing technique(s) or...researcher methodological decisions used in past empirical studies of income smoothing (design type, smoothing device norm, and income target) are discussed...behavior. The identification of smoothing, and consequently the conclusions to be drawn from smoothing studies, is found to be sensitive to the three
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
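The first claim amounts to compiling per-step data and summing it into process-level metrics. A minimal sketch follows; the step names, fields, and the two metrics are hypothetical, as the patent abstract does not specify them.

```python
# Minimal sketch: compile per-step parameters and sum them into aggregate
# process metrics (field names and metrics are illustrative assumptions).
steps = [
    {"name": "stamping", "cycle_time_s": 40, "queue_time_s": 600, "defect_rate": 0.02},
    {"name": "welding",  "cycle_time_s": 90, "queue_time_s": 300, "defect_rate": 0.05},
    {"name": "painting", "cycle_time_s": 60, "queue_time_s": 900, "defect_rate": 0.01},
]

touch_time = sum(s["cycle_time_s"] for s in steps)              # value-adding time
lead_time = touch_time + sum(s["queue_time_s"] for s in steps)  # total elapsed time
yield_rolled = 1.0
for s in steps:
    yield_rolled *= 1.0 - s["defect_rate"]                      # rolled throughput yield

print(f"process-cycle efficiency: {touch_time / lead_time:.1%}")
print(f"rolled throughput yield:  {yield_rolled:.1%}")
```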
Han, Yaohui; Mou, Lan; Xu, Gengchi; Yang, Yiqiang; Ge, Zhenlin
2015-03-01
To construct a three-dimensional finite element model comparing one-step and two-step methods in torque control of anterior teeth during space closure. DICOM image data including the maxilla and upper teeth were obtained through cone-beam CT. A three-dimensional model was set up and the maxilla, upper teeth and periodontium were separated using Mimics software. The models were instantiated using Pro/Engineer software, and Abaqus finite element analysis software was used to simulate the sliding mechanics by loading a 1.47 N force on traction hooks with different heights (2, 4, 6, 8, 10, 12 and 14 mm, respectively) in order to compare the initial displacement between six maxillary anterior teeth (one-step method) and four maxillary anterior teeth (two-step method). When moving anterior teeth bodily, initial displacements of central incisors in the two-step method and in the one-step method were 29.26 × 10⁻⁶ mm and 15.75 × 10⁻⁶ mm, respectively. The initial displacements of lateral incisors in the two-step method and in the one-step method were 46.76 × 10⁻⁶ mm and 23.18 × 10⁻⁶ mm, respectively. Under the same amount of light force, the initial displacement of anterior teeth in the two-step method was doubled compared with that in the one-step method. The root and crown of the canine couldn't obtain the same amount of displacement in the one-step method. The two-step method could produce more initial displacement than the one-step method. Therefore, torque control of the anterior teeth during space closure was easier to achieve with the two-step method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teixeira, F; Universidade do Estado do Rio de Janeiro, Rio De Janeiro, RJ; Almeida, C de
2015-06-15
Purpose: The goal of the present work was to evaluate the process maps for stereotactic radiosurgery (SRS) treatment at three radiotherapy centers in Brazil and apply the FMEA technique to evaluate similarities and differences, if any, of the hazards and risks associated with these processes. Methods: A team, consisting of professionals from different disciplines and involved in the SRS treatment, was formed at each center. Each team was responsible for the development of the process map, and performance of FMEA and FTA. A facilitator knowledgeable in these techniques led the work at each center. The TG100 recommended scales were used for the evaluation of hazard and severity for each step of the major process "treatment planning". Results: The hazard index given by the Risk Priority Number (RPN) is found to range from 4-270 for various processes, and the severity (S) index is found to range from 1-10. RPN values > 100 and severity values ≥ 7 were chosen to flag safety improvement interventions. The numbers of steps with RPN ≥ 100 were found to be 6, 59 and 45 for the three centers. The corresponding values for S ≥ 7 were 24, 21 and 25, respectively. The ranges of RPN and S values for each center belong to different process steps and failure modes. Conclusion: These results show that the interventions needed to improve safety are different for each center and are associated with the skill level of the professional team as well as the technology used to provide radiosurgery treatment. The present study will very likely be a model for implementation of a risk-based prospective quality management program for SRS treatment in Brazil, where currently there are 28 radiotherapy centers performing SRS. A complete FMEA for SRS at these three radiotherapy centers is currently under development.
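A minimal sketch of the flagging rule quoted in the results, where the RPN is the product of occurrence, severity, and detectability scores; the failure modes and scores below are invented for illustration, not taken from the study.

```python
# Minimal sketch of TG100-style flagging: RPN = O * S * D, and a failure
# mode is flagged when RPN > 100 or severity >= 7 (illustrative entries).
failure_modes = [
    ("wrong CT dataset imported",    4, 9, 5),   # (name, occurrence, severity, detectability)
    ("MLC calibration drift",        3, 7, 3),
    ("isocenter shift not verified", 2, 10, 6),
]

for name, occ, sev, det in failure_modes:
    rpn = occ * sev * det
    flagged = rpn > 100 or sev >= 7
    print(f"{name:32s} RPN={rpn:4d} S={sev:2d} flag={flagged}")
```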
NASA Astrophysics Data System (ADS)
Li, Yuan; Chen, Xuejiang; Su, Juan
2017-06-01
A three-dimensional kinetic Monte Carlo (KMC) model has been developed to study the step instability caused by nucleation during the step-flow growth of 3C-SiC. In the model, a lattice mesh was established to fix the position of atoms and bond partners based on the crystal lattice of 3C-SiC. The events considered in the model were adsorption and diffusion of adatoms on the terraces; attachment, detachment and interlayer transport of adatoms at the step edges; and nucleation of adatoms. The effects of nucleation on the instability of step meandering and on the coalescence of both islands and steps were then simulated by the model. The results showed that the instability of step meandering caused by nucleation was affected by the growth temperature, and the effects of nucleation on the instability were analyzed. Moreover, the surface roughness as a function of time for different temperatures was discussed. Finally, a phase diagram was presented to predict the conditions in which the effects of nucleation on step meandering become significant, and the three different regimes, step-flow (SF), 2D nucleation (2DN), and 3D layer-by-layer (3DLBL) growth, were determined.
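A minimal sketch of the rejection-free KMC event loop that such growth models are built on: the next event is chosen with probability proportional to its rate, and time advances by an exponential waiting time. The event rates below are placeholders, not the 3C-SiC values.

```python
# Minimal Gillespie-style KMC loop with placeholder rates (arbitrary units).
import numpy as np

rng = np.random.default_rng(0)
rates = {"adsorption": 5.0, "terrace_diffusion": 50.0,
         "step_attachment": 10.0, "detachment": 1.0, "nucleation": 0.2}

names = list(rates)
r = np.array([rates[k] for k in names])
total = r.sum()

t = 0.0
for _ in range(10):
    event = rng.choice(names, p=r / total)   # pick event proportional to rate
    t += rng.exponential(1.0 / total)        # exponential waiting time
    print(f"t = {t:8.4f}  event = {event}")
```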
A moving hum filter to suppress rotor noise in high-resolution airborne magnetic data
Xia, J.; Doll, W.E.; Miller, R.D.; Gamey, T.J.; Emond, A.M.
2005-01-01
A unique filtering approach is developed to eliminate helicopter rotor noise. It is designed to suppress harmonic noise from a rotor that varies slightly in amplitude, phase, and frequency and that contaminates aeromagnetic data. The filter provides a powerful harmonic noise-suppression tool for data acquired with modern large-dynamic-range recording systems. This three-step approach - polynomial fitting, bandpass filtering, and rotor-noise synthesis - significantly reduces rotor noise without altering the spectra of signals of interest. Two steps before hum filtering - polynomial fitting and bandpass filtering - are critical to accurately model the weak rotor noise. During rotor-noise synthesis, amplitude, phase, and frequency are determined. Data are processed segment by segment so that there is no limit on the length of data. The segment length changes dynamically along a line based on modeling results. Modeling the rotor noise is stable and efficient. Real-world data examples demonstrate that this method can suppress rotor noise by more than 95% when implemented in an aeromagnetic data-processing flow. © 2005 Society of Exploration Geophysicists. All rights reserved.
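A minimal sketch of the rotor-noise synthesis idea: within one segment, a harmonic at an assumed rotor frequency is fit by linear least squares and subtracted. The real filter also tracks slow drifts in amplitude, phase, and frequency and chooses segment lengths adaptively; the signal, frequency, and amplitudes here are invented.

```python
# Minimal sketch: fit a*sin + b*cos at the rotor frequency (linear in a, b)
# over one segment and subtract the synthesized hum.
import numpy as np

fs, f_rotor = 1200.0, 25.5                             # sample rate, rotor frequency (assumed)
t = np.arange(0, 2.0, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 0.2 * t)             # slow signal of interest
hum = 1.0 * np.sin(2 * np.pi * f_rotor * t + 0.7)      # rotor harmonic
data = signal + hum

basis = np.column_stack([np.sin(2 * np.pi * f_rotor * t),
                         np.cos(2 * np.pi * f_rotor * t)])
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
cleaned = data - basis @ coef                          # subtract synthesized hum
print("residual hum power: %.2f%%" % (100 * np.var(cleaned - signal) / np.var(hum)))
```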
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
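A minimal sketch of the cdf idea: describe within-day rainfall intensity by a distribution, push it through the nonlinear infiltration-excess function, and integrate to a daily total. The exponential intensity distribution and the fixed infiltration capacity are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch: daily runoff from an assumed within-day intensity
# distribution versus a naive daily-average-intensity model.
import numpy as np

daily_rain = 40.0        # mm/day
mean_intensity = 10.0    # mm/h while raining (assumed)
f_cap = 6.0              # infiltration capacity, mm/h (assumed)

i = np.linspace(0.01, 100, 4000)                    # intensity grid (mm/h)
pdf = np.exp(-i / mean_intensity) / mean_intensity  # assumed intensity pdf
wet_hours = daily_rain / mean_intensity             # implied hours of rain

runoff_rate = np.maximum(i - f_cap, 0.0)            # infiltration-excess (nonlinear)
daily_runoff = wet_hours * np.trapz(runoff_rate * pdf, i)

naive = max(daily_rain / 24.0 - f_cap, 0.0) * 24.0  # average-intensity model
print(f"cdf-based runoff: {daily_runoff:.1f} mm, daily-average model: {naive:.1f} mm")
```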
RFID in the blood supply chain--increasing productivity, quality and patient safety.
Briggs, Lynne; Davis, Rodeina; Gutierrez, Alfonso; Kopetsky, Matthew; Young, Kassandra; Veeramani, Raj
2009-01-01
As part of an overall design of a new, standardized RFID-enabled blood transfusion medicine supply chain, an assessment was conducted for two hospitals: the University of Iowa Hospital and Clinics (UIHC) and Mississippi Baptist Health System (MBHS). The main objectives of the study were to assess RFID technological and economic feasibility, along with possible impacts to productivity, quality and patient safety. A step-by-step process analysis focused on the factors contributing to process "pain points" (errors, inefficiency, product losses). A process re-engineering exercise produced blueprints of RFID-enabled processes to alleviate or eliminate those pain-points. In addition, an innovative model quantifying the potential reduction in adverse patient effects as a result of RFID implementation was created, allowing improvement initiatives to focus on process areas with the greatest potential impact to patient safety. The study concluded that it is feasible to implement RFID-enabled processes, with tangible improvements to productivity and safety expected. Based on a comprehensive cost/benefit model, it is estimated for a large hospital (UIHC) to recover investment from implementation within two to three years, while smaller hospitals may need longer to realize ROI. More importantly, the study estimated that RFID technology could reduce morbidity and mortality effects substantially among patients receiving transfusions.
NASA Astrophysics Data System (ADS)
Gillet, Jean-Numa; Degorce, Jean-Yves; Belisle, Jonathan; Meunier, Michel
2004-03-01
We present for the first time three-dimensional (3-D) modeling of n⁺-ν-n⁺ and p⁺-π-p⁺ semiconducting resistors, which are fabricated by laser-induced doping in a gateless MOSFET and present significant applications for analog ULSI microelectronics. Our modeling software is made up of three steps. The first two concern modeling of a new laser-trimming fabrication process: with the molten-silicon temperature distribution obtained from the first, we compute in the second the 3-D dopant distribution, which creates the electrical link through the device gap. In this paper the emphasis is on the third step, which concerns 3-D modeling of the resistor electronic behavior with a new tube multiplexing algorithm (TMA). The device current-voltage (I-V) curve is usually obtained by solving three coupled partial differential equations with a finite-element method. A 3-D device such as our resistor cannot be modeled with this classical method owing to its prohibitive computational cost in three dimensions. This problem is however avoided by our TMA, which divides the 3-D device into one-dimensional (1-D) multiplexed tubes. In our TMA, 1-D systems of three ordinary differential equations are solved to determine the 3-D device I-V curve, which substantially increases computation speed compared with the classical method. Numerical results show a good agreement with experiments.
On the Development of a Hospital-Patient Web-Based Communication Tool: A Case Study From Norway.
Granja, Conceição; Dyb, Kari; Bolle, Stein Roald; Hartvigsen, Gunnar
2015-01-01
Surgery cancellations are undesirable in hospital settings as they increase costs, reduce productivity and efficiency, and directly affect the patient. The problem of elective surgery cancellations in a North Norwegian University Hospital is addressed. Based on a three-step methodology conducted at the hospital, the preoperative planning process was modeled taking into consideration the narratives from different health professions. From the analysis of the generated process models, it is concluded that in order to develop a useful patient-centered web-based communication tool, it is necessary to fully understand how hospitals plan and organize surgeries today. Moreover, process reengineering is required to generate a standard process that can serve as a tool for health ICT designers to define the requirements for a robust and useful system.
NASA Astrophysics Data System (ADS)
Ruiz Pérez, Guiomar; Latron, Jérôme; Llorens, Pilar; Gallart, Francesc; Francés, Félix
2017-04-01
Selecting an adequate hydrological model is the first step in carrying out a rainfall-runoff modelling exercise. A hydrological model is a hypothesis of catchment functioning, encompassing a description of dominant hydrological processes and predicting how these processes interact to produce the catchment's response to external forcing. Current research lines emphasize the importance of multiple working hypotheses for hydrological modelling instead of only using a single model. In line with this philosophy, here different hypotheses were considered and analysed to simulate the nonlinear response of a small Mediterranean catchment and to progress in the analysis of its hydrological behaviour. In particular, three hydrological models were considered, representing different potential hypotheses: two lumped models called LU3 and LU4, and one distributed model called TETIS. To determine how well each specific model performed and to assess whether a model was more adequate than another, we applied three complementary tests: one based on the analysis of the residual error series, another based on a sensitivity analysis, and the last based on multiple evaluation criteria associated with the concept of a Pareto frontier. This modelling approach, based on multiple working hypotheses, helped to improve our perceptual model of the catchment behaviour and, furthermore, could be used as guidance to improve the performance of other environmental models.
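A minimal sketch of the Pareto-frontier test among competing model hypotheses: with two error criteria to minimize, retain the models not dominated by any other. The model names echo the abstract, but the scores are invented, not the study's results.

```python
# Minimal sketch: keep non-dominated models under two error criteria
# (both to be minimized; scores are illustrative).
models = {"LU3": (0.42, 0.25), "LU4": (0.35, 0.36), "TETIS": (0.30, 0.28)}

def dominated(a, b):
    """True if b is at least as good as a everywhere and strictly better once."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

front = [m for m, s in models.items()
         if not any(dominated(s, s2) for m2, s2 in models.items() if m2 != m)]
print("Pareto-optimal hypotheses:", front)
```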
Modeling of the HiPco process for carbon nanotube production. I. Chemical kinetics
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Gokcen, Tahir; Meyyappan, M.
2002-01-01
A chemical kinetic model is developed to help understand and optimize the production of single-walled carbon nanotubes via the high-pressure carbon monoxide (HiPco) process, which employs iron pentacarbonyl as the catalyst precursor and carbon monoxide as the carbon feedstock. The model separates the HiPco process into three steps, precursor decomposition, catalyst growth and evaporation, and carbon nanotube production resulting from the catalyst-enhanced disproportionation of carbon monoxide, known as the Boudouard reaction: 2 CO(g)-->C(s) + CO2(g). The resulting detailed model contains 971 species and 1948 chemical reactions. A second model with a reduced reaction set containing 14 species and 22 chemical reactions is developed on the basis of the detailed model and reproduces the chemistry of the major species. Results showing the parametric dependence of temperature, total pressure, and initial precursor partial pressures are presented, with comparison between the two models. The reduced model is more amenable to coupled reacting flow-field simulations, presented in the following article.
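A minimal sketch of how such a reduced reaction set is integrated as ODEs with scipy; the two lumped reactions and rate constants below are placeholders standing in for the 14-species, 22-reaction reduced model, not its actual chemistry.

```python
# Minimal sketch: integrate a toy reduced set (precursor decomposition plus
# catalyzed Boudouard reaction) with illustrative rate constants and units.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.05   # Fe(CO)5 decomposition; Fe-catalyzed 2 CO -> C(s) + CO2

def rhs(t, y):
    feco5, fe, co, c, co2 = y
    r1 = k1 * feco5            # Fe(CO)5 -> Fe + 5 CO
    r2 = k2 * fe * co**2       # 2 CO --Fe--> C(s) + CO2
    return [-r1, r1, 5 * r1 - 2 * r2, r2, r2]

sol = solve_ivp(rhs, (0.0, 50.0), [1e-3, 0.0, 10.0, 0.0, 0.0])
print("final carbon yield (arbitrary units):", sol.y[3, -1])
```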
The Impact of ARM on Climate Modeling. Chapter 26
NASA Technical Reports Server (NTRS)
Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.
2016-01-01
Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades. Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.
HIA, the next step: Defining models and roles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putters, Kim
If HIA is to be an effective instrument for optimising health interests in the policy making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource-reduction and speed-up advantages of our architectures.
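A minimal sketch of one SIRF cycle with a piecewise-linear weight standing in for the exponential, as in the first architecture; the scalar measurement model, slope, and constants are illustrative assumptions, not the paper's hardware design.

```python
# Minimal sketch of sample -> weight (piecewise linear) -> estimate -> resample.
import numpy as np

rng = np.random.default_rng(1)
N, z = 500, 2.1                                 # particle count, measurement

particles = rng.normal(0.0, 3.0, N)             # sampling step
err = np.abs(z - particles)
weights = np.maximum(1.0 - 0.5 * err, 1e-6)     # piecewise-linear weight function
weights /= weights.sum()
estimate = np.sum(weights * particles)          # output calculation

idx = rng.choice(N, size=N, p=weights)          # resampling step
particles = particles[idx]
print(f"estimate = {estimate:.3f}")
```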
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2000-01-01
A treatment process for a hydrogen-containing off-gas stream from a refinery, petrochemical plant or the like. The process includes three separation steps: condensation, membrane separation and hydrocarbon fraction separation. The membrane separation step is characterized in that it is carried out under conditions at which the membrane exhibits a selectivity in favor of methane over hydrogen of at least about 2.5.
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP with the scenarios of 5 and 50 QTL. SS-BayesB model obtained the lowest accuracy with the 500 QTL in the simulation. SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.
Cheng, Yougan; Othmer, Hans
2016-01-01
Chemotaxis is a dynamic cellular process, comprised of direction sensing, polarization and locomotion, that leads to the directed movement of eukaryotic cells along extracellular gradients. As a primary step in the response of an individual cell to a spatial stimulus, direction sensing has attracted numerous theoretical treatments aimed at explaining experimental observations in a variety of cell types. Here we propose a new model of direction sensing based on experiments using Dictyostelium discoideum (Dicty). The model is built around a reaction-diffusion-translocation system that involves three main component processes: a signal detection step based on G-protein-coupled receptors (GPCR) for cyclic AMP (cAMP), a transduction step based on a heterotrimeric G protein Gα2βγ, and an activation step of a monomeric G-protein Ras. The model can predict the experimentally-observed response of cells treated with latrunculin A, which removes feedback from downstream processes, under a variety of stimulus protocols. We show that Gα2βγ cycling modulated by Ric8, a nonreceptor guanine exchange factor for Gα2 in Dicty, drives multiple phases of Ras activation and leads to direction sensing and signal amplification in cAMP gradients. The model predicts that both Gα2 and Gβγ are essential for direction sensing, in that membrane-localized Gα2*, the activated GTP-bearing form of Gα2, leads to asymmetrical recruitment of RasGEF and Ric8, while globally-diffusing Gβγ mediates their activation. We show that the predicted response at the level of Ras activation encodes sufficient ‘memory’ to eliminate the ‘back-of-the-wave’ problem, and the effects of diffusion and cell shape on direction sensing are also investigated. In contrast with existing LEGI models of chemotaxis, the results do not require a disparity between the diffusion coefficients of the Ras activator GEF and the Ras inhibitor GAP. Since the signal pathways we study are highly conserved between Dicty and mammalian leukocytes, the model can serve as a generic one for direction sensing. PMID:27152956
Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between pipelines but not within them, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect for either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing.
Force transients and minimum cross-bridge models in muscular contraction
Halvorson, Herbert R.
2010-01-01
Two- and three-state cross-bridge models are considered and examined with respect to their ability to predict three distinct phases of the force transients that occur in response to step change in muscle fiber length. Particular attention is paid to satisfying the Le Châtelier–Brown Principle. This analysis shows that the two-state model can account for phases 1 and 2 of a force transient, but is barely adequate to account for phase 3 (delayed force) unless a stretch results in a sudden increase in the number of cross-bridges in the detached state. The three-state model (A → B → C → A) makes it possible to account for all three phases if we assume that the A → B transition is fast (corresponding to phase 2), the B → C transition is of intermediate speed (corresponding to phase 3), and the C → A transition is slow; in such a scenario, states A and C can support or generate force (high force states) but state B cannot (detached, or low-force state). This model involves at least one ratchet mechanism. In this model, force can be generated by either of two transitions: B → A or B → C. To determine which of these is the major force-generating step that consumes ATP and transduces energy, we examine the effects of ATP, ADP, and phosphate (Pi) on force transients. In doing so, we demonstrate that the fast transition (phase 2) is associated with the nucleotide-binding step, and that the intermediate-speed transition (phase 3) is associated with the Pi-release step. To account for all the effects of ligands, it is necessary to expand the three-state model into a six-state model that includes three ligand-bound states. The slowest phase of a force transient (phase 4) cannot be explained by any of the models described unless an additional mechanism is introduced. Here we suggest a role of series compliance to account for this phase, and propose a model that correlates the slowest step of the cross-bridge cycle (transition C → A) to: phase 4 of step analysis, the rate constant ktr of the quick-release and restretch experiment, and the rate constant kact for force development time course following Ca2+ activation. PMID:18425593
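A minimal sketch of the three-state cycle A → B → C → A relaxing after a perturbation, with fast, intermediate, and slow rate constants assigned as the abstract does for phases 2 and 3; the values are illustrative orders of magnitude only, not fitted constants.

```python
# Minimal sketch: three-state kinetics with force carried by states A and C.
import numpy as np
from scipy.integrate import solve_ivp

k_ab, k_bc, k_ca = 100.0, 10.0, 1.0   # fast (phase 2), intermediate (phase 3), slow

def rhs(t, y):
    a, b, c = y
    return [k_ca * c - k_ab * a,       # A gains from C, loses to B
            k_ab * a - k_bc * b,       # B gains from A, loses to C
            k_bc * b - k_ca * c]       # C gains from B, loses to A

sol = solve_ivp(rhs, (0, 5), [0.5, 0.4, 0.1], t_eval=np.linspace(0, 5, 6))
force = sol.y[0] + sol.y[2]            # A and C are the force-bearing states
print(np.round(force, 3))
```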
Fields of Tension in a Boundary-Crossing World: Towards a Democratic Organization of the Self.
Hermans, Hubert J M; Konopka, Agnieszka; Oosterwegel, Annerieke; Zomer, Peter
2017-12-01
In their study of the relationship between self and society, scientists have proposed taking society as a metaphor for understanding the dynamics of the self, such as the analogy between the self and the functioning of a totalitarian state or the analogy between the self and the functioning of a bureaucratic organization. In addition to these models, the present article proposes a democratic society as a metaphor for understanding the workings of a dialogical self in a globalizing, boundary-crossing world. The article follows four steps. In the first step the self is depicted as extended to the social and societal environment and made up of fields of tension in which a multiplicity of self-positions are involved in processes of positioning and counter-positioning and in relationships of social power. In the second step, the fertility of the democratic metaphor is demonstrated by referring to theory and research from three identity perspectives: multicultural, multiracial, and transgender. In the fields of tension emerging between the multiplicity of self-positions, new, hybrid, and mixed identities have a chance to emerge as adaptive responses to the limitations of existing societal structures. In the third step, we place the democratic self in a broader societal context by linking three levels of inclusiveness, proposed by Self-Categorization Theory (personal, social, and human) to recent conceptions of a cosmopolitan democracy. In the fourth and final step, a model is presented which allows the formulation of a series of specific research questions for future studies of a democratically organized self.
The tale of hearts and reason: the influence of mood on decision making.
Laborde, Sylvain; Raab, Markus
2013-08-01
In decision-making research, one important aspect of real-life decisions has so far been neglected: the mood of the decision maker when generating options. The authors tested the use of the take-the-first (TTF) heuristic and extended the TTF model to understand how mood influences the option-generation process of individuals in two studies, the first using a between-subjects design (30 nonexperts, 30 near-experts, and 30 experts) and the second conceptually replicating the first using a within-subject design (30 nonexperts). Participants took part in an experimental option-generation task, with 31 three-dimensional videos of choices in team handball. Three moods were elicited: positive, neutral, and negative. The findings (a) replicate previous results concerning TTF and (b) show that the option-generation process was associated with the physiological component of mood, supporting the neurovisceral integration model. The extension of TTF to processing emotional factors is an important step forward in explaining fast choices in real-life situations.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
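A minimal sketch of the residual-evaluation step: a fault is declared when the standardized mean of a residual window exceeds a z-threshold. The residual stream, injected bias, window length, and threshold are invented for illustration; the paper's statistical tests may differ.

```python
# Minimal sketch: windowed z-test on a residual stream with an injected bias.
import numpy as np

rng = np.random.default_rng(2)
residual = rng.normal(0.0, 1.0, 400)
residual[250:] += 2.0                  # injected sensor bias (fault)

win, thr = 50, 3.0
for start in range(0, len(residual) - win + 1, win):
    w = residual[start:start + win]
    z = np.sqrt(win) * w.mean() / w.std(ddof=1)   # standardized window mean
    if abs(z) > thr:
        print(f"fault flagged in window starting at sample {start} (z = {z:.1f})")
```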
NASA Astrophysics Data System (ADS)
Grzenia, B. J.; Jones, C. E.; Tycner, C.; Sigut, T. A. A.
2016-11-01
The B-emission stars 48 Per (HD 25940, HR 1273) and ψ Per (HD 22192, HR 1087) share similar stellar parameters, with their disks viewed near pole-on in the case of 48 Per and near edge-on for ψ Per. An extensive set of high-quality interferometric observations was obtained for both stars between 2006 and 2011 with the Navy Precision Optical Interferometer (NPOI) in the Hα emitting region. Using a three-step modelling process, model visibilities are compared to observations with a view toward achieving better constraints on the disk models than were possible with previous studies.
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
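A minimal sketch of the second step, inverting parameters by simulated annealing against a measured impedance curve; a toy two-parameter forward model stands in for the Zwikker and Kosten equivalent fluid model, and the frequency range and bounds are assumptions.

```python
# Minimal sketch: simulated annealing inversion against a synthetic target.
import numpy as np
from scipy.optimize import dual_annealing

freq = np.linspace(200.0, 2000.0, 50)   # Hz (assumed range)

def impedance(p, f):
    porosity, resistivity = p
    return resistivity / np.sqrt(f) + 1.0 / porosity   # placeholder forward model

target = impedance((0.25, 40.0), freq)                 # "measured" curve

def misfit(p):
    return np.sum((impedance(p, freq) - target) ** 2)

res = dual_annealing(misfit, bounds=[(0.05, 0.6), (1.0, 100.0)], seed=3)
print("recovered parameters:", res.x)
```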
Indicator Systems and Evaluation
NASA Technical Reports Server (NTRS)
Canright, Shelley; Grabowski, Barbara
1995-01-01
Participants in the workshop session were actively engaged in a hands-on, minds-on approach to learning about indicators and evaluation processes. The six-hour session was broken down into three two-hour sessions. Each session was built upon an instructional model which moved from general understanding to specific IITA application. Examples and practice exercises served to demonstrate and reinforce the workshop concepts. Each successive session built upon the previous session and addressed the major steps in the evaluation process. The major steps covered in the workshop included: project descriptions, writing goals and objectives for categories, determining indicators and indicator systems for specific projects, and methods and issues of data collection. The workshop served as a baseline upon which the field centers will build during the summer in undertaking a comprehensive examination and evaluation of their existing K-12 education projects.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
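A minimal sketch of the screening-then-simplex idea: one-at-a-time perturbations identify the sensitive parameters, and downhill simplex (Nelder-Mead) then tunes only those. The toy objective stands in for the GCM evaluation metrics, and the intermediate step of choosing optimum initial values is omitted for brevity.

```python
# Minimal sketch: sensitivity screening followed by Nelder-Mead on the
# sensitive subset (toy three-parameter objective).
import numpy as np
from scipy.optimize import minimize

def metric(p):
    return (p[0] - 1.2) ** 2 + 0.001 * (p[1] - 3.0) ** 2 + (p[2] - 0.5) ** 2

p0 = np.array([1.0, 1.0, 1.0])
sens = []
for k in range(len(p0)):                  # step 1: one-at-a-time screening
    q = p0.copy(); q[k] *= 1.1
    sens.append(abs(metric(q) - metric(p0)))
sensitive = [k for k, s in enumerate(sens) if s > 1e-3]
print("tuning only parameters", sensitive)

def reduced(x):                           # objective over the sensitive subset
    q = p0.copy(); q[sensitive] = x
    return metric(q)

best = minimize(reduced, p0[sensitive], method="Nelder-Mead")  # step 3: simplex
print("optimum for sensitive parameters:", best.x)
```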
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
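A minimal sketch of the first step, reduced to a scalar Kalman filter tracking one deterioration parameter from noisy measurements; the dynamics and noise levels are invented, and the real algorithm estimates five such parameters jointly within the engine model.

```python
# Minimal sketch: scalar Kalman filter for a near-constant parameter.
import numpy as np

rng = np.random.default_rng(4)
true_delta = 0.8                       # true deterioration parameter (assumed)
x, P, Q, R = 0.0, 1.0, 1e-5, 0.04      # state, covariance, process/measurement noise

for k in range(100):
    z = true_delta + rng.normal(0, np.sqrt(R))   # noisy measurement
    P = P + Q                                    # predict (parameter ~ constant)
    K = P / (P + R)                              # Kalman gain
    x = x + K * (z - x)                          # measurement update
    P = (1 - K) * P
print(f"estimated deterioration parameter: {x:.3f}")
```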
Hu, Xiangang; Mu, Li; Zhou, Qixing; Wen, Jianping; Pawliszyn, Janusz
2011-06-01
Aptamers are a new class of single-stranded DNA/RNA molecules selected from synthetic nucleic acid libraries for molecular recognition. Our group reports a novel aptamer column for the removal of trace (ng/L) pharmaceuticals in drinking water. In this study, cocaine and diclofenac were chosen as model molecules to test the aptamer column, which presented high removal capacity, selectivity, and stability. The removal of pharmaceuticals was as high as 88-95%. The adsorption data were fitted with a Langmuir isotherm and a pseudo-second-order kinetic model. A thermodynamic experiment showed that the adsorption processes were spontaneous and exothermic. The kinetics of the aptamer comprised three steps: activation, binding, and hybridization. The first step was the rate-controlling step. The adsorption system was divided into three parts: kinetic, mixed, and thermodynamic zones from 0% to 100% binding fraction of aptamer. Furthermore, the aptamer column was reusable and achieved strong removal efficiency from 4 to 30 °C at normal cation concentrations (5-100 mg/L) for multipollutants without cross effects and secondary pollution. This work indicates that the aptamer, as a new sorbent, can be used in the removal of persistent organic pollutants, biological toxins, and pathogenic bacteria from surface, drinking, and ground water.
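A minimal sketch of fitting the Langmuir isotherm q = q_max·K·C/(1 + K·C) to equilibrium data, as described above; the data points are synthetic placeholders, not the published measurements.

```python
# Minimal sketch: nonlinear fit of the Langmuir isotherm to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

C = np.array([5, 10, 25, 50, 100, 200.0])   # equilibrium concentration (ng/L)
q = np.array([8, 14, 25, 33, 39, 42.0])     # bound amount (arbitrary units)

def langmuir(c, q_max, K):
    return q_max * K * c / (1.0 + K * c)

(q_max, K), _ = curve_fit(langmuir, C, q, p0=[50.0, 0.02])
print(f"q_max = {q_max:.1f}, K = {K:.4f}")
```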
Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-01-01
Background It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders to make technology reflect their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step while developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. Objective This paper demonstrates the potential of several stakeholder-oriented analysis methods, illustrating their practical application using Infectionmanager as an example case. In this paper, we aim to demonstrate how business modeling, with the focus on stakeholder involvement, is used to co-create an eHealth implementation. Methods We divided business modeling into 4 main research steps. As part of stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analyses, we performed “basic stakeholder analysis,” stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Results Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. Conclusions The step-by-step guide for business modeling with stakeholder involvement enables eHealth researchers to apply a systematic and multidisciplinary, co-creative approach for implementing eHealth. Business modeling becomes an active part in the entire development process of eHealth and starts an early focus on implementation, in which stakeholders help to co-create the basis necessary for a satisfying success and uptake of the eHealth technology. PMID:26272510
Zhang, Zhechun; Goldtzvik, Yonathan; Thirumalai, D
2017-11-14
Kinesin walks processively on microtubules (MTs) in an asymmetric hand-over-hand manner consuming one ATP molecule per 16-nm step. The individual contributions due to docking of the approximately 13-residue neck linker to the leading head (deemed to be the power stroke) and diffusion of the trailing head (TH) that contributes in propelling the motor by 16 nm have not been quantified. We use molecular simulations by creating a coarse-grained model of the MT-kinesin complex, which reproduces the measured stall force as well as the force required to dislodge the motor head from the MT, to show that nearly three-quarters of the step occurs by bidirectional stochastic motion of the TH. However, docking of the neck linker to the leading head constrains the extent of diffusion and minimizes the probability that kinesin takes side steps, implying that both events are necessary in the motility of kinesin and for the maintenance of processivity. Surprisingly, we find that during a single step, the TH stochastically hops multiple times between the geometrically accessible neighboring sites on the MT before forming a stable interaction with the target binding site with correct orientation between the motor head and the αβ-tubulin dimer.
Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.
Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik
2015-02-06
High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional four-step process.
A Three-Step Synthesis of Benzoyl Peroxide
ERIC Educational Resources Information Center
Her, Brenda; Jones, Alexandra; Wollack, James W.
2014-01-01
Benzoyl peroxide is used as a bleaching agent for flour and whey processing, a polymerization initiator in the synthesis of plastics, and the active component of acne medication. Because of its simplicity and wide application, benzoyl peroxide is a target molecule of interest. It can be affordably synthesized in three steps from bromobenzene using…
Development of a career coaching model for medical students.
Hur, Yera
2016-03-01
Deciding on a future career path or choosing a career specialty is an important academic decision for medical students. The purpose of this study is to develop a career coaching model for medical students. This research was carried out in three steps. The first step was a systematic review of previous studies. The second step was a needs assessment of medical students. The third step was constructing a career coaching model using the results acquired from the literature review and the survey. The career coaching stages were defined as three phases: the "crystallization" period (pre-medical years 1 and 2), the "specification" period (medical years 1 and 2), and the "implementation" period (medical years 3 and 4). The career coaching model for medical students can be used in programming career coaching contents and also in identifying the outcomes of career coaching programs at an institutional level.
Hattersley, J G; Pérez-Velázquez, J; Chappell, M J; Bearup, D; Roper, D; Dowson, C; Bugg, T; Evans, N D
2011-11-01
An important question in Systems Biology is the design of experiments that enable discrimination between two (or more) competing chemical pathway models or biological mechanisms. In this paper, an analysis is performed of two different models describing the kinetic mechanism of a three-substrate three-product reaction, namely the MurC reaction in the cytoplasmic phase of peptidoglycan biosynthesis. One model involves ordered substrate binding and ordered release of the three products; the competing model also assumes ordered substrate binding, but with fast release of the three products. The two versions are shown to be distinguishable; however, if standard quasi-steady-state assumptions are made, distinguishability cannot be determined. Once model structure uniqueness is ensured, the experimenter must determine if it is possible to successfully recover rate constant values given the experimental observations, a process known as structural identifiability. Structural identifiability analysis is carried out for both models to determine which of the unknown reaction parameters can be determined uniquely, or otherwise, from the ideal system outputs. This structural analysis forms an integrated step towards the modelling of the full pathway of the cytoplasmic phase of peptidoglycan biosynthesis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Strategies for developing competency models.
Marrelli, Anne F; Tondora, Janis; Hoge, Michael A
2005-01-01
There is an emerging trend within healthcare to introduce competency-based approaches in the training, assessment, and development of the workforce. The trend is evident in various disciplines and specialty areas within the field of behavioral health. This article is designed to inform those efforts by presenting a step-by-step process for developing a competency model. An introductory overview of competencies, competency models, and the legal implications of competency development is followed by a description of the seven steps involved in creating a competency model for a specific function, role, or position. This modeling process is drawn from advanced work on competencies in business and industry.
van de Pol, M H J; Fluit, C R M G; Lagro, J; Lagro-Janssen, A L M; Olde Rikkert, M G M
2017-01-01
To develop a model for shared decision-making with frail older patients. Online Delphi forum. We used a three-round Delphi technique to reach consensus on the structure of a model for shared decision-making with older patients. The expert panel consisted of 16 patients (round 1), and 59 professionals (rounds 1-3). In round 1, the panel of experts was asked about important steps in the process of shared decision-making and the draft model was introduced. Rounds 2 and 3 were used to adapt the model and test it for 'importance' and 'feasibility'. Consensus for the dynamic shared decision-making model as a whole was achieved for both importance (91% panel agreement) and feasibility (76% panel agreement). Shared decision-making with older patients is a dynamic process. It requires a continuous supportive dialogue between health care professional and patient.
Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus
2018-06-01
Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgical process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. The accuracy of the drilling holes was precisely analyzed, and the influence of different levels of operator expertise and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge and three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step rather than the multi-step technique. Single-step drilling protocols were shown to produce more accurate results than multi-step procedures. The outcome of either protocol can be further improved by the use of guiding templates, and operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.
Supercritical fluid extraction. Principles and practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, M.A.; Krukonis, V.J.
This book is a presentation of the fundamentals and application of supercritical fluid solvents (SCF). The authors cover virtually every facet of SCF technology: the history of SCF extraction, its underlying thermodynamic principles, process principles, industrial applications, and analysis of SCF research and development efforts. The thermodynamic principles governing SCF extraction are covered in depth. The often complex three-dimensional pressure-temperature-composition (PTx) phase diagrams for SCF-solute mixtures are constructed in a coherent step-by-step manner using the more familiar two-dimensional Px diagrams. The experimental techniques used to obtain high pressure phase behavior information are described in detail and the advantages and disadvantages of each technique are explained. Finally, the equations used to model SCF-solute mixtures are developed, and modeling results are presented to highlight the correlational strengths of a cubic equation of state.
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, with each participant contributing three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline five times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected and, interestingly, an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing. PMID:29023597
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
...wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and...
Initial Crisis Reaction and Poliheuristic Theory
ERIC Educational Resources Information Center
DeRouen, Karl, Jr.; Sprecher, Christopher
2004-01-01
Poliheuristic (PH) theory models foreign policy decisions using a two-stage process. The first step eliminates alternatives on the basis of a simplifying heuristic. The second step involves a selection from among the remaining alternatives and can employ a more rational and compensatory means of processing information. The PH model posits that…
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
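As an illustration of the optimization step in such a workflow, the sketch below builds a toy kinetic model of a three-enzyme cascade (simple Michaelis-Menten steps, not the paper's actual five-step pathway; all constants are invented) and grid-searches the enzyme fractions that maximize product formation at a fixed total biocatalyst load.

    # Toy three-enzyme cascade S -> A -> B -> P; optimize enzyme fractions at
    # fixed total load (illustrative; the published pathway has five steps).
    import numpy as np
    from scipy.integrate import odeint

    KCAT = np.array([5.0, 1.0, 3.0])     # turnover numbers, 1/s (assumed)
    KM   = np.array([0.5, 0.2, 1.0])     # Michaelis constants, mM (assumed)
    E_TOTAL = 1.0                        # total enzyme load (assumed units)

    def cascade(y, t, e):
        s, a, b, p = y
        conc = np.array([s, a, b])
        v = KCAT * e * conc / (KM + conc)          # Michaelis-Menten rates
        return [-v[0], v[0] - v[1], v[1] - v[2], v[2]]

    t = np.linspace(0.0, 200.0, 400)
    best = None
    for f1 in np.arange(0.05, 0.95, 0.05):         # grid over stoichiometry
        for f2 in np.arange(0.05, 1.0 - f1, 0.05):
            e = E_TOTAL * np.array([f1, f2, 1.0 - f1 - f2])
            p_end = odeint(cascade, [1.0, 0.0, 0.0, 0.0], t, args=(e,))[-1, 3]
            if best is None or p_end > best[0]:
                best = (p_end, e)
    print("max product %.3f at enzyme loads %s" % best)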
NASA Astrophysics Data System (ADS)
Hauffe, T.; Albrecht, C.; Wilke, T.
2015-09-01
The Balkan Lake Ohrid is the oldest and most speciose freshwater lacustrine system in Europe. However, it remains unclear whether the diversification of its endemic taxa is mainly driven by neutral processes, environmental factors, or species interactions. This calls for a holistic perspective involving both evolutionary processes and ecological dynamics. Such a unifying framework - the metacommunity speciation model - considers how community assembly affects diversification and vice versa by assessing the relative contribution of the three main community assembly processes: dispersal limitation, environmental filtering, and species interaction. The current study therefore used the species-rich model taxon Gastropoda to assess how extant communities in Lake Ohrid are structured by performing process-based metacommunity analyses. Specifically, the study aimed at (i) identifying the relative importance of the three community assembly processes and (ii) testing whether the importance of these individual processes changes gradually with lake depth or whether they are distinctively related to eco-zones. Based on specific simulation steps for each of the three processes, it could be demonstrated that dispersal limitation had the strongest influence on gastropod community structures in Lake Ohrid. However, it was not the exclusive assembly process but acted together with the other two processes - environmental filtering and species interaction. In fact, the relative importance of the three community assembly processes varied both with lake depth and eco-zones, though the processes were better predicted by the latter. The study thus corroborated the high importance of dispersal limitation for both maintaining species richness in Lake Ohrid (through its impact on community structure) and generating endemic biodiversity (via its influence on diversification processes). However, according to the metacommunity speciation model, the inferred importance of environmental filtering and biotic interaction also suggests a small but significant influence of ecological speciation. These findings contribute to the main goal of the SCOPSCO initiative - inferring the drivers of biotic evolution - and might provide an integrative perspective on biological and limnological dynamics in ancient Lake Ohrid.
Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak
2015-07-01
This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, the authors reviewed papers published between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good practice methodology to be used by any clinical information modeler. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Billoir, Elise; Denis, Jean-Baptiste; Cammeau, Natalie; Cornu, Marie; Zuliani, Veronique
2011-02-01
To assess the impact of the manufacturing process on the fate of Listeria monocytogenes, we built a generic probabilistic model intended to simulate the successive steps in the process. Contamination evolution was modeled in the appropriate units (breasts, dice, and then packaging units through the successive steps in the process). To calibrate the model, parameter values were estimated from industrial data, from the literature, and based on expert opinion. By means of simulations, the model was explored using a baseline calibration and alternative scenarios, in order to assess the impact of changes in the process and of accidental events. The results are reported as contamination distributions and as the probability that the product will be acceptable with regards to the European regulatory safety criterion. Our results are consistent with data provided by industrial partners and highlight that tumbling is a key step for the distribution of the contamination at the end of the process. Process chain models could provide an important added value for risk assessment models that basically consider only the outputs of the process in their risk mitigation strategies. Moreover, a model calibrated to correspond to a specific plant could be used to optimize surveillance. © 2010 Society for Risk Analysis.
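A modular process-chain model of this kind can be prototyped in a few lines: contamination is propagated unit by unit through successive steps, each with its own growth, reduction, or redistribution behaviour. The sketch below is a generic illustration with invented parameters, not the authors' calibrated model; note how the pooling step (analogous to tumbling) redistributes contamination across packaging units.

    # Generic process-chain Monte Carlo: cell counts per packaging unit after
    # growth, a redistributing "tumbling" step, and partitioning into units.
    # All parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    N_BREASTS, UNITS = 10000, 2000

    counts = rng.poisson(0.05, N_BREASTS)            # initial cells per breast
    counts = rng.binomial(counts * 10, 0.5)          # growth + trimming step
    pooled = counts.sum()                            # tumbling mixes the batch
    units = rng.multinomial(pooled, np.ones(UNITS) / UNITS)   # dicing/packaging

    limit = 100                                      # cells/unit criterion
    print("P(unit acceptable) = %.4f" % np.mean(units <= limit))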
Fassbender, Alex G.
1995-01-01
The invention greatly reduces the amount of ammonia in sewage plant effluent. The process of the invention has three main steps. The first step is dewatering without first digesting, thereby producing a first ammonia-containing stream having a low concentration of ammonia, and a second solids-containing stream. The second step is sending the second solids-containing stream through a means for separating the solids from the liquid and producing an aqueous stream containing a high concentration of ammonia. The third step is removal of ammonia from the aqueous stream using a hydrothermal process.
A Unified Model of Cloud-to-Ground Lightning Stroke
NASA Astrophysics Data System (ADS)
Nag, A.; Rakov, V. A.
2014-12-01
The first stroke in a cloud-to-ground lightning discharge is thought to follow (or be initiated by) the preliminary breakdown process which often produces a train of relatively large microsecond-scale electric field pulses. This process is poorly understood and rarely modeled. Each lightning stroke is composed of a downward leader process and an upward return-stroke process, which are usually modeled separately. We present a unified engineering model for computing the electric field produced by a sequence of preliminary breakdown, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively-charged channel extends downward in a stepped fashion through the relatively-high-field region between the main negative and lower positive charge centers and then through the relatively-low-field region below the lower positive charge center. A relatively-high-field region is also assumed to exist near ground. The preliminary breakdown pulse train is assumed to be generated when the negatively-charged channel interacts with the lower positive charge region. At each step, an equivalent current source is activated at the lower extremity of the channel, resulting in a step current wave that propagates upward along the channel. The leader deposits net negative charge onto the channel. Once the stepped leader attaches to ground (upward connecting leader is presently neglected), an upward-propagating return stroke is initiated, which neutralizes the charge deposited by the leader along the channel. We examine the effect of various model parameters, such as step length and current propagation speed, on model-predicted electric fields. We also compare the computed fields with pertinent measurements available in the literature.
Probabilistic exposure assessment model to estimate aseptic-UHT product failure rate.
Pujol, Laure; Albert, Isabelle; Magras, Catherine; Johnson, Nicholas Brian; Membré, Jeanne-Marie
2015-01-02
Aseptic-Ultra-High-Temperature (UHT) products are manufactured to be free of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during manufacture, distribution and storage. Two important phases within the process are widely recognised as critical in controlling microbial contamination: the sterilisation steps and the following aseptic steps. Of the microbial hazards, the pathogen spore formers Clostridium botulinum and Bacillus cereus are deemed the most pertinent to be controlled. In addition, due to a relatively high thermal resistance, Geobacillus stearothermophilus spores are considered a concern for spoilage of low acid aseptic-UHT products. A probabilistic exposure assessment model has been developed in order to assess the aseptic-UHT product failure rate associated with these three bacteria. It was a Modular Process Risk Model, based on nine modules. They described: i) the microbial contamination introduced by the raw materials, either from the product (i.e. milk, cocoa and dextrose powders and water) or the packaging (i.e. bottle and sealing component), ii) the sterilisation processes, of either the product or the packaging material, iii) the possible recontamination during subsequent processing of both product and packaging. The Sterility Failure Rate (SFR) was defined as the sum of bottles contaminated for each batch, divided by the total number of bottles produced per process line run (10(6) batches simulated per process line). The SFR associated with the three bacteria was estimated at the last step of the process (i.e. after Module 9) but also after each module, allowing for the identification of modules, and responsible contamination pathways, with higher or lower intermediate SFR. The model contained 42 controlled settings associated with factory environment, process line or product formulation, and more than 55 probabilistic inputs corresponding to inputs with variability conditional to a mean uncertainty. It was developed in @Risk and run through Monte Carlo simulations. Overall, the highest SFR was associated with G. stearothermophilus (380000 bottles contaminated in 10(11) bottles produced) and the lowest to C. botulinum (3 bottles contaminated in 10(11) bottles produced). Unsurprisingly, SFR due to G. stearothermophilus was due to its ability to survive the UHT treatment. More interestingly, it was identified that SFR due to B. cereus (17000 bottles contaminated in 10(11) bottles produced) was due to an airborne recontamination of the aseptic tank (49%) and a post-sterilisation packaging contamination (33%). A deeper analysis (sensitivity and scenario analyses) was done to investigate how the SFR due to B. cereus could be reduced by changing the process settings related to potential air recontamination source. Copyright © 2014 Elsevier B.V. All rights reserved.
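The order of magnitude of a sterility failure rate can be reproduced with a compact Monte Carlo sketch: spores arriving with the raw material either survive the sterilisation step (with probability set by its log-reduction) or bottles are recontaminated downstream. The parameters below are invented and the structure is far simpler than the 42 settings and 55 probabilistic inputs of the published model.

    # Toy sterility-failure-rate (SFR) model: raw-material spores survive a
    # log-reduction treatment, plus an airborne recontamination route.
    # Invented parameters for illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    N_BOTTLES = 10**7

    spores_in = rng.poisson(2.0, N_BOTTLES)          # spores/bottle pre-UHT
    p_survive = 1e-6                                 # 6-log UHT reduction
    survivors = rng.binomial(spores_in, p_survive)

    recontam = rng.random(N_BOTTLES) < 5e-7          # aseptic-tank route
    contaminated = (survivors > 0) | recontam

    print("SFR = %d bottles per 10^7 produced" % contaminated.sum())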
Global phenomena from local rules: Peer-to-peer networks and crystal steps
NASA Astrophysics Data System (ADS)
Finkbiner, Amy
Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.
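The discrete crystal-step model described, in which each step's motion depends on its two neighbours, can be written down directly. A minimal sketch assuming a simple linear velocity law (not the dissertation's specific model); with K < 0, small perturbations grow into step bunches:

    # Evolve terrace widths w_i between crystal steps; step i moves with
    # velocity v_i = K*(w_i - w_{i-1}). K > 0 equalizes terraces (stable
    # step flow); K < 0 amplifies differences (step bunching). Toy model.
    import numpy as np

    N, NSTEPS, DT, K = 64, 4000, 1e-2, -0.5
    rng = np.random.default_rng(3)
    w = 1.0 + 0.01 * rng.standard_normal(N)     # terrace widths, periodic

    for _ in range(NSTEPS):
        v = K * (w - np.roll(w, 1))             # velocity of step i
        w = w + DT * (np.roll(v, -1) - v)       # dw_i/dt = v_{i+1} - v_i
        w = np.clip(w, 1e-6, None)              # steps cannot cross

    print("terrace width spread:", w.std())     # grows from 0.01 for K < 0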
Training for Template Creation: A Performance Improvement Method
ERIC Educational Resources Information Center
Lyons, Paul
2008-01-01
Purpose: There are three purposes to this article: first, to offer a training approach to employee learning and performance improvement that makes use of a step-by-step process of skill/knowledge creation. The process offers follow-up opportunities for skill maintenance and improvement; second, to explain the conceptual bases of the approach; and…
How To Build a Strategic Plan: A Step-by-Step Guide for School Managers.
ERIC Educational Resources Information Center
Clay, Katherine; And Others
Strategic planning techniques for administrators, with a focus on process managers, are presented in this guidebook. The three major tasks of the strategic planning process include the assessment of the current organizational situation, goal setting, and the development of strategies to accomplish this. Strategic planning differs from long-range…
A Virtual Environment for Process Management. A Step by Step Implementation
ERIC Educational Resources Information Center
Mayer, Sergio Valenzuela
2003-01-01
In this paper it is presented a virtual organizational environment, conceived with the integration of three computer programs: a manufacturing simulation package, an automation of businesses processes (workflows), and business intelligence (Balanced Scorecard) software. It was created as a supporting tool for teaching IE, its purpose is to give…
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.
2018-03-01
The strategy of electronics quality assurance is considered the most important. To ensure quality, the process sequence is considered and modeled by a Markov chain. The improvement is distinguished by simple database support for design-for-manufacturing, enabling future step-by-step development. Phased automation of electronics design and digital manufacturing is assumed. MATLAB modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from individual processes up to the whole life cycle.
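A process sequence of this kind is straightforward to prototype as an absorbing Markov chain in which each manufacturing stage passes the product on, sends it to rework, or scraps it. The transition probabilities below are invented, purely to show the yield computation:

    # Absorbing Markov chain for a 3-stage electronics process with rework.
    # States: S1, S2, S3 (stages), OK, SCRAP. Probabilities are invented.
    import numpy as np

    #              S1    S2    S3    OK    SCRAP
    P = np.array([[0.05, 0.90, 0.00, 0.00, 0.05],   # S1: rework/pass/scrap
                  [0.00, 0.05, 0.90, 0.00, 0.05],
                  [0.00, 0.00, 0.05, 0.92, 0.03],
                  [0.00, 0.00, 0.00, 1.00, 0.00],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])

    Q, R = P[:3, :3], P[:3, 3:]          # transient and absorbing blocks
    Nmat = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix: expected visits
    B = Nmat @ R                         # absorption probabilities
    print("P(good product | start at S1) = %.4f" % B[0, 0])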
Fining of Red Wine Monitored by Multiple Light Scattering.
Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo
2017-07-12
This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) was fitted by a multivariate kinetic model built on a three-step mechanism, implying (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. According to the first reaction, the results showed that gelatin was the most efficient fining agent with respect to the main objective, which was the clarification of the wine and, consequently, the increase in its limpidity. This trend was also discussed in relation to the results achieved by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Also, higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) sped up the process. Finally, the advantage of using the whole spectral signal over classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, the multiple light scattering technique showed great potential for studying fining processes compared to classical univariate approaches.
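The three-step mechanism can be expressed as a linear chain of first-order steps, colloid to adsorbed complex to aggregate to sediment, each governed by its own rate constant. A hedged numerical sketch with invented rate constants (not the fitted values of the study):

    # Three-step fining kinetics: C (colloid) -k1-> A (adsorbed) -k2-> G
    # (aggregate) -k3-> S (sediment). First-order steps, invented constants.
    import numpy as np
    from scipy.integrate import odeint

    k1, k2, k3 = 0.30, 0.10, 0.05        # 1/h, assumed

    def rhs(y, t):
        c, a, g, s = y
        return [-k1 * c, k1 * c - k2 * a, k2 * a - k3 * g, k3 * g]

    t = np.linspace(0.0, 72.0, 200)      # hours
    y = odeint(rhs, [1.0, 0.0, 0.0, 0.0], t)
    print("sedimented fraction after 72 h: %.2f" % y[-1, 3])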
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least-squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least-squares estimation. The sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters and, outperforming the two-step estimation method, compensates the geomagnetic sensor error well.
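A common way to pose a nine-parameter correction of this general kind is h = A(m - b), with a symmetric 3x3 matrix A (six parameters) and an offset b (three parameters), fitted so that corrected readings match the local field magnitude. The sketch below, on synthetic data, illustrates that formulation with scipy's least_squares; it is an assumption-laden stand-in, not the paper's Newton/total-least-squares scheme.

    # Nine-parameter magnetometer correction h = A (m - b): symmetric A
    # (6 params) plus offset b (3 params), fitted so that |h| matches the
    # local field magnitude F. Synthetic data, illustrative only.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    F = 50.0                                        # field magnitude (uT)
    u = rng.standard_normal((500, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random orientations
    A_true = np.array([[1.1, .02, 0], [.02, .95, .01], [0, .01, 1.05]])
    b_true = np.array([3.0, -2.0, 1.0])
    m = np.linalg.solve(A_true, (F * u).T).T + b_true   # distorted readings

    def residuals(p):
        A = np.array([[p[0], p[1], p[2]],
                      [p[1], p[3], p[4]],
                      [p[2], p[4], p[5]]])          # symmetric soft-iron part
        h = (m - p[6:9]) @ A.T
        return np.linalg.norm(h, axis=1) - F

    p0 = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0], float)   # identity start
    fit = least_squares(residuals, p0)
    print("offset estimate:", np.round(fit.x[6:9], 2))  # ~ [3, -2, 1]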
High-Quality 3d Models and Their Use in a Cultural Heritage Conservation Project
NASA Astrophysics Data System (ADS)
Tucci, G.; Bonora, V.; Conti, A.; Fiorini, L.
2017-08-01
Cultural heritage digitization and 3D modelling processes are mainly based on laser scanning and digital photogrammetry techniques to produce complete, detailed and photorealistic three-dimensional surveys: geometric as well as chromatic aspects, in turn testimony of materials, work techniques, state of preservation, etc., are documented using digitization processes. The paper explores the topic of 3D documentation for conservation purposes; it analyses how geomatics contributes in different steps of a restoration process and it presents an overview of different uses of 3D models for the conservation and enhancement of the cultural heritage. The paper reports on the project to digitize the earthenware frieze of the Ospedale del Ceppo in Pistoia (Italy) for 3D documentation, restoration work support, and digital and physical reconstruction and integration purposes. The intent to design an exhibition area suggests new ways to take advantage of 3D data originally acquired for documentation and scientific purposes.
Review of current GPS methodologies for producing accurate time series and their error sources
NASA Astrophysics Data System (ADS)
He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping
2017-05-01
The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step, mainly for three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of GPS time series analysis. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications, ranging from surveying small deformations of civil engineering structures (e.g., subsidence of a highway bridge) to the detection of particular geophysical signals.
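The functional model mentioned here (rate plus seasonal terms, possibly offsets) is linear and easy to prototype; the review's point is that the stochastic model then controls how trustworthy the estimated rate uncertainty is. A minimal sketch of the functional part on synthetic daily data, assuming white noise only:

    # Fit a GPS coordinate time series with trend + annual + semi-annual terms.
    # Synthetic daily data; white-noise assumption (real series need a colored-
    # noise stochastic model, which mainly inflates the rate uncertainty).
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(3650) / 365.25                    # 10 years, in years
    y = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)

    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated rate: %.2f mm/yr" % beta[1])   # ~ 2.0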
Calibration process of highly parameterized semi-distributed hydrological model
NASA Astrophysics Data System (ADS)
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller little opportunity to manage the process, and the results are often not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than leaving the procedure entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas; this step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted such that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013); in adding regularization, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study with the results of the calibration and validation of the model will be presented.
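Outside PEST's own utilities, the effect of the sixth step (Tikhonov regularization toward preferred values) is easy to demonstrate: the regularized inversion pulls poorly constrained parameters toward their expert-chosen initial values instead of letting them wander. A generic sketch, not tied to HBV or PEST:

    # Tikhonov-regularized least squares: minimize |G p - d|^2 + mu^2 |p - p0|^2
    # where p0 holds expert-preferred parameter values (cf. PEST's ADDREG1).
    import numpy as np

    rng = np.random.default_rng(6)
    G = rng.standard_normal((30, 5))
    G[:, 4] = G[:, 3] * 0.999 + 1e-4 * rng.standard_normal(30)  # ill-posed pair
    p_true = np.array([1.0, -0.5, 2.0, 0.8, 0.8])
    d = G @ p_true + 0.05 * rng.standard_normal(30)

    p0 = np.array([1.0, 0.0, 1.5, 0.7, 0.7])      # preferred (initial) values
    mu = 1.0                                      # regularization weight
    A = np.vstack([G, mu * np.eye(5)])            # stack data + prior equations
    b = np.concatenate([d, mu * p0])
    p_reg, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.round(p_reg, 2))   # near p_true; ill-posed pair held near p0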
40 CFR 93.104 - Frequency of conformity determinations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in the project's design concept and scope; three years elapse since the most recent major step to.... Major steps include NEPA process completion; start of final design; acquisition of a significant portion...
Xu, Jeff S; Huang, Jiwei; Qin, Ruogu; Hinkle, George H; Povoski, Stephen P; Martin, Edward W; Xu, Ronald X
2010-03-01
Accurate assessment of tumor boundaries and recognition of occult disease are important oncologic principles in cancer surgeries. However, existing imaging modalities are not optimized for intraoperative cancer imaging applications. We developed a nanobubble (NB) contrast agent for cancer targeting and dual-mode imaging using optical and ultrasound (US) modalities. The contrast agent was fabricated by encapsulating the Texas Red dye in poly (lactic-co-glycolic acid) (PLGA) NBs and conjugating NBs with cancer-targeting ligands. Both one-step and three-step cancer-targeting strategies were tested on the LS174T human colon cancer cell line. For the one-step process, NBs were conjugated with the humanized HuCC49 Delta C(H)2 antibody to target the over-expressed TAG-72 antigen. For the three-step process, cancer cells were targeted by successive application of the biotinylated HuCC49 Delta C(H)2 antibody, streptavidin, and the biotinylated NBs. Both one-step and three-step processes successfully targeted the cancer cells with high binding affinity. NB-assisted dual-mode imaging was demonstrated on a gelatin phantom that embedded multiple tumor simulators at different NB concentrations. Simultaneous fluorescence and US images were acquired for these tumor simulators and linear correlations were observed between the fluorescence/US intensities and the NB concentrations. Our research demonstrated the technical feasibility of using the dual-mode NB contrast agent for cancer targeting and simultaneous fluorescence/US imaging. (c) 2009 Elsevier Ltd. All rights reserved.
Trainer, Asa; Hedberg, Thomas; Feeney, Allison Barnard; Fischer, Kevin; Rosche, Phil
2016-01-01
Advances in information technology triggered a digital revolution that holds promise of reduced costs, improved productivity, and higher quality. To ride this wave of innovation, manufacturing enterprises are changing how product definitions are communicated - from paper to models. To achieve industry's vision of the Model-Based Enterprise (MBE), the MBE strategy must include model-based data interoperability from design to manufacturing and quality in the supply chain. The Model-Based Definition (MBD) is created by the original equipment manufacturer (OEM) using Computer-Aided Design (CAD) tools. This information is then shared with the supplier so that they can manufacture and inspect the physical parts. Today, suppliers predominantly use Computer-Aided Manufacturing (CAM) and Coordinate Measuring Machine (CMM) models for these tasks. Traditionally, the OEM has provided design data to the supplier in the form of two-dimensional (2D) drawings, but may also include a three-dimensional (3D) shape-geometry model, often in a standards-based format such as ISO 10303-203:2011 (STEP AP203). The supplier then creates the respective CAM and CMM models and machine programs to produce and inspect the parts. In the MBE vision for model-based data exchange, the CAD model must include product-and-manufacturing information (PMI) in addition to the shape geometry. Today's CAD tools can generate models with embedded PMI. And, with the emergence of STEP AP242, a standards-based model with embedded PMI can now be shared downstream. The on-going research detailed in this paper seeks to investigate three concepts. First, that utilizing a STEP AP242 model with embedded PMI for CAD-to-CAM and CAD-to-CMM data exchange is possible and valuable to the overall goal of a more efficient process. Second, the research identifies gaps in tools, standards, and processes that inhibit industry's ability to cost-effectively achieve model-based data interoperability in the pursuit of the MBE vision. Finally, it seeks to explore the interaction between CAD and CMM processes and determine whether the concept of feedback from CAM and CMM back to CAD is feasible. The main goal of our study is to test the hypothesis that model-based data interoperability from CAD-to-CAM and CAD-to-CMM is feasible through standards-based integration. This paper presents several barriers to model-based data interoperability. Overall, the project team demonstrated the exchange of product definition data between CAD, CAM, and CMM systems using standards-based methods. While gaps in standards coverage were identified, the gaps should not stop industry's progress toward MBE. The results of our study provide evidence in support of an open-standards method to model-based data interoperability, which would provide maximum value and impact to industry.
Groene, Oliver; Brandt, Elimer; Schmidt, Werner; Moeller, Johannes
2009-08-01
Strategy development and implementation in acute care settings is often restricted by competing challenges, the pace of policy reform and the existence of parallel hierarchies. To describe a generic approach to strategy development, illustrate the use of the Balanced Scorecard as a tool to facilitate strategy implementation and demonstrate how to break down strategic goals into measurable elements. Multi-method approach using three different conceptual models: Health Promoting Hospitals Standards and Strategies, the European Foundation for Quality Management (EFQM) Model and the Balanced Scorecard. A bundle of qualitative and quantitative methods were used including in-depth interviews, standardized organization-wide surveys on organizational values, staff satisfaction and patient experience. Three acute care hospitals in four different locations belonging to a German holding group. Chief executive officer, senior medical officers, working group leaders and hospital staff. Development and implementation of the Balanced Scorecard. Twenty strategic objectives with corresponding Balanced Scorecard measures. A stepped approach from strategy development to implementation is presented to identify key themes for strategy development, drafting a strategy map and developing strategic objectives and measures. The Balanced Scorecard, in combination with the EFQM model, is a useful tool to guide strategy development and implementation in health care organizations. As for other quality improvement and management tools not specifically developed for health care organizations, some adaptations are required to improve acceptability among professionals. The step-wise approach of strategy development and implementation presented here may support similar processes in comparable organizations.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. This phenomenon is captured well only when using an appropriate turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for simulating the von Karman vortex street phenomenon, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, the phenomenon was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the smoother the resulting drag coefficient curves; a smaller time step size also gave a faster computation time.
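A quick plausibility check for any such simulation is the expected shedding frequency from the Strouhal relation f = St*U/D, with St roughly 0.2 for a circular cylinder in this Reynolds-number range (a textbook value; the flow speed and diameter below are assumed, chosen to give Re near 250,000):

    # Expected von Karman shedding frequency from the Strouhal relation
    # f = St * U / D (St ~ 0.2 for a circular cylinder in this Re range).
    St, U, D = 0.2, 36.5, 0.1        # assumed: air speed (m/s), diameter (m)
    Re = U * D / 1.5e-5              # kinematic viscosity of air ~ 1.5e-5 m^2/s
    print("Re = %.0f, shedding frequency ~ %.0f Hz" % (Re, St * U / D))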
NASA Astrophysics Data System (ADS)
Swaczyna, Paweł; Bzowski, Maciej; Kubiak, Marzena A.; Sokół, Justyna M.; Fuselier, Stephen A.; Galli, André; Heirtzler, David; Kucharek, Harald; McComas, David J.; Möbius, Eberhard; Schwadron, Nathan A.; Wurz, P.
2018-02-01
Direct-sampling observations of interstellar neutral (ISN) He by the Interstellar Boundary Explorer (IBEX) provide valuable insight into the physical state of and processes operating in the interstellar medium ahead of the heliosphere. The ISN He atom signals are observed at the four lowest ESA steps of the IBEX-Lo sensor. The observed signal is a mixture of the primary and secondary components of ISN He and H. Previously, only data from one of the ESA steps have been used. Here, we extend the analysis to data collected in the three lowest ESA steps with the strongest ISN He signal, for the observation seasons 2009–2015. The instrument sensitivity is modeled as a linear function of the atom impact speed onto the sensor’s conversion surface separately for each ESA step of the instrument. We find that the sensitivity increases from lower to higher ESA steps, but within each of the ESA steps it is a decreasing function of the atom impact speed. This result may be influenced by the hydrogen contribution, which was not included in the adopted model, but seems to exist in the signal. We conclude that the currently accepted temperature of ISN He and velocity of the Sun through the interstellar medium do not need a revision, and we sketch a plan of further data analysis aiming at investigating ISN H and a better understanding of the population of ISN He originating in the outer heliosheath.
Drupsteen, Linda; Groeneweg, Jop; Zwetsloot, Gerard I J M
2013-01-01
Many incidents have occurred because organisations have failed to learn from lessons of the past. This means that there is room for improvement in the way organisations analyse incidents, generate measures to remedy identified weaknesses and prevent reoccurrence: the learning from incidents process. To improve that process, it is necessary to gain insight into the steps of this process and to identify factors that hinder learning (bottlenecks). This paper presents a model that enables organisations to analyse the steps in a learning from incidents process and to identify the bottlenecks. The study describes how this model is used in a survey and in 3 exploratory case studies in The Netherlands. The results show that there is limited use of learning potential, especially in the evaluation stage. To improve learning, an approach that considers all steps is necessary.
Geometry-based across wafer process control in a dual damascene scenario
NASA Astrophysics Data System (ADS)
Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Thrun, Xaver
2018-03-01
Dual damascene is an established back-end-of-line patterning process used to generate copper interconnects and lines. One of the critical output parameters is the electrical resistance of the metal lines. In our 200 mm line, this is currently controlled by a feed-forward control from the etch process to the final step in the CMP process. In this paper, we investigate the impact of an alternative feed-forward control using a calibrated physical model that estimates the impact on the electrical resistance of the metal lines. This is done by simulation on a large set of wafers. Three different approaches are evaluated, one of which uses different feed-forward settings for different radial zones in the CMP process.
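The logic of geometry-based feed-forward is compact: an incoming geometry measurement from etch is pushed through a resistance model to select the CMP setting that hits the resistance target. A hedged sketch using a simple R = rho*L/(w*h) line-resistance model, with invented numbers and hypothetical per-zone incoming copper heights (none taken from the paper):

    # Geometry-based feed-forward: choose remaining copper height h so that
    # line resistance R = rho * L / (w * h) hits target, per radial zone.
    # Invented numbers; a sketch of the control idea, not the fab's model.
    RHO = 1.9e-8          # ohm*m, effective Cu resistivity (assumed)
    L, W = 1e-4, 2e-7     # line length and width in metres (assumed)
    R_TARGET = 50.0       # ohms

    def target_height(r_target=R_TARGET):
        return RHO * L / (W * r_target)           # h that meets the target

    for zone, h_in in [("center", 2.4e-7), ("mid", 2.2e-7), ("edge", 2.0e-7)]:
        overpolish = h_in - target_height()       # material CMP must remove
        print("%s zone: remove %.0f nm" % (zone, overpolish * 1e9))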
NASA Astrophysics Data System (ADS)
Chen, Y.; Ho, C.; Chang, L.
2011-12-01
In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools; they show possible weather conditions under the CO2 emission scenarios defined by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To bridge this gap, statistical downscaling techniques can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between weather factors and precipitation based, respectively, on deterministic algorithms, such as linear or nonlinear regression and ANNs, and on stochastic approaches, such as Markov chain theory and statistical distributions. In weather typing, the method clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps: a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density functions (PDFs) of precipitation, approximated by kernel density estimation, were calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the results of the K-means clustering; the associated transition matrix and PDF of that weather type are then used in the following sub-steps. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix. If the synthesized condition is dry, the precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of the synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis performance is evaluated by comparing the monthly mean and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
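The calibration chain described, clustering weather factors into types and estimating a per-type transition matrix and precipitation density, maps directly onto standard tools. A compressed sketch on synthetic data (scikit-learn KMeans for weather typing, empirical wet/dry transition counts, and a Gaussian KDE per type for wet-day amounts):

    # Weather-typing downscaling skeleton: K-means weather types, per-type
    # wet/dry Markov chain, per-type KDE for wet-day amounts. Synthetic data.
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    days = 3000
    factors = rng.standard_normal((days, 5))             # NCEP-like predictors
    wet = rng.random(days) < 0.4                         # wet/dry occurrence
    rain = np.where(wet, rng.gamma(2.0, 5.0, days), 0.0)

    types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)

    trans, kdes = {}, {}                                 # calibration step
    for k in range(4):
        idx = np.where(types[1:] == k)[0] + 1            # days of type k
        for prev in (0, 1):                              # P(wet | prev state)
            sel = idx[wet[idx - 1] == prev]
            trans[k, prev] = wet[sel].mean() if sel.size else 0.0
        amounts = rain[(types == k) & wet]
        kdes[k] = gaussian_kde(amounts) if amounts.size > 5 else None

    k, prev = 2, 0                                       # synthesize one day:
    today_wet = rng.random() < trans[k, prev]            # type 2, yesterday dry
    amount = max(0.0, kdes[k].resample(1).item()) if today_wet and kdes[k] else 0.0
    print("synthesized precipitation: %.2f mm" % amount)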
Two-Step Amyloid Aggregation: Sequential Lag Phase Intermediates
NASA Astrophysics Data System (ADS)
Castello, Fabio; Paredes, Jose M.; Ruedas-Rama, Maria J.; Martin, Miguel; Roldan, Mar; Casares, Salvador; Orte, Angel
2017-01-01
The self-assembly of proteins into fibrillar structures called amyloid fibrils underlies the onset and symptoms of neurodegenerative diseases, such as Alzheimer’s and Parkinson’s. However, the molecular basis and mechanism of amyloid aggregation are not completely understood. For many amyloidogenic proteins, certain oligomeric intermediates that form in the early aggregation phase appear to be the principal cause of cellular toxicity. Recent computational studies have suggested the importance of nonspecific interactions for the initiation of the oligomerization process prior to the structural conversion steps and template seeding, particularly at low protein concentrations. Here, using advanced single-molecule fluorescence spectroscopy and imaging of a model SH3 domain, we obtained direct evidence that nonspecific aggregates are required in a two-step nucleation mechanism of amyloid aggregation. We identified three different oligomeric types according to their sizes and compactness and performed a full mechanistic study that revealed a mandatory rate-limiting conformational conversion step. We also identified the most cytotoxic species, which may be possible targets for inhibiting and preventing amyloid aggregation.
Persistent Step-Flow Growth of Strained Films on Vicinal Substrates
NASA Astrophysics Data System (ADS)
Hong, Wei; Lee, Ho Nyung; Yoon, Mina; Christen, Hans M.; Lowndes, Douglas H.; Suo, Zhigang; Zhang, Zhenyu
2005-08-01
We propose a model of persistent step flow, emphasizing dominant kinetic processes and strain effects. Within this model, we construct a morphological phase diagram, delineating a regime of step flow from regimes of step bunching and island formation. In particular, we predict the existence of concurrent step bunching and island formation, a new growth mode that competes with step flow for phase space, and show that the deposition flux and temperature must be chosen within a window in order to achieve persistent step flow. The model rationalizes the diverse growth modes observed in pulsed laser deposition of SrRuO3 on SrTiO3.
Stabilization of a three-dimensional limit cycle walking model through step-to-step ankle control.
Kim, Myunghee; Collins, Steven H
2013-06-01
Unilateral, below-knee amputation is associated with an increased risk of falls, which may be partially related to a loss of active ankle control. If ankle control can contribute significantly to maintaining balance, even in the presence of active foot placement, this might provide an opportunity to improve balance using robotic ankle-foot prostheses. We investigated ankle- and hip-based walking stabilization methods in a three-dimensional model of human gait that included ankle plantarflexion, ankle inversion-eversion, hip flexion-extension, and hip ad/abduction. We generated discrete feedback control laws (linear quadratic regulators) that altered nominal actuation parameters once per step. We used ankle push-off, lateral ankle stiffness and damping, fore-aft foot placement, lateral foot placement, or all of these as control inputs. We modeled environmental disturbances as random, bounded, unexpected changes in floor height, and defined balance performance as the maximum allowable disturbance value for which the model walked 500 steps without falling. Nominal walking motions were unstable, but were stabilized by all of the step-to-step control laws we tested. Surprisingly, step-by-step modulation of ankle push-off alone led to better balance performance (3.2% leg length) than lateral foot placement (1.2% leg length) for these control laws. These results suggest that appropriate control of robotic ankle-foot prosthesis push-off could make balancing during walking easier for individuals with amputation.
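The once-per-step control laws described are standard discrete-time LQR designs on a step-to-step (Poincare-map) linearization. A hedged sketch on an invented two-state map; the paper's model has more states and inputs:

    # Discrete LQR for a step-to-step (Poincare-map) linearization:
    # x_{n+1} = A x_n + B u_n, with u_n applied once per step (e.g., push-off).
    # A and B below are invented placeholders, not the paper's model.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.2, 0.3],
                  [0.1, 0.8]])        # unstable step-to-step dynamics
    B = np.array([[0.5], [1.0]])
    Q, R = np.eye(2), np.array([[0.1]])

    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # u = -K x
    eig = np.linalg.eigvals(A - B @ K)
    print("closed-loop |eigenvalues|:", np.abs(eig))     # all < 1 => stable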
Calculation of muscle loading and joint contact forces during the rock step in Irish dance.
Shippen, James M; May, Barbara
2010-01-01
A biomechanical model for the analysis of dancers and their movements is described. The model consisted of 31 segments, 35 joints, and 539 muscles, and was animated using movement data obtained from a three-dimensional optical tracking system that recorded the motion of dancers. The model was used to calculate forces within the muscles and contact forces at the joints of the dancers in this study. Ground reaction forces were measured using force plates mounted in a sprung floor. The analysis procedure is generic and can be applied to any dance form. As an exemplar of the application process an Irish dance step, the rock, was analyzed. The maximum ground reaction force found was 4.5 times the dancer's body weight. The muscles connected to the Achilles tendon experienced a maximum force comparable to their maximal isometric strength. The contact force at the ankle joint was 14 times body weight, of which the majority of the force was due to muscle contraction. It is suggested that as the rock step produces high forces, and therefore the potential to cause injury, its use should be carefully monitored.
Making DATA Work: A Process for Conducting Action Research
ERIC Educational Resources Information Center
Young, Anita; Kaffenberger, Carol
2013-01-01
This conceptual model introduces a process to help school counselors use data to drive decision making and offers examples to implement the process. A step-by-step process is offered to help school counselors and school counselor supervisors address educational issues, close achievement gaps, and demonstrate program effectiveness. To illustrate…
How to Develop Children as Researchers: A Step-by-Step Guide to Teaching the Research Process
ERIC Educational Resources Information Center
Kellett, Mary
2005-01-01
The importance of research in professional and personal development is increasingly being acknowledged. So why should children not benefit in a similar way? Traditionally, children have been excluded from this learning process because research methodology is considered too difficult for them. Principal obstacles focus around three key barriers:…
NASA Astrophysics Data System (ADS)
Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra
2018-01-01
Recently Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend the prior work with a focus on the examination of vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both time and time-frequency domains. Minimal order non-linear models of the experiment are successfully constructed, using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double sided impacts and time varying periods suggest softening trends under the step down torque. Non-linear models are experimentally validated by comparing results with new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak to peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
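The building block of such minimal-order models is a piecewise-linear elastic torque with a clearance (dead zone) combined with Coulomb friction. A sketch of a single-clearance torsional oscillator under a step-down torque, with invented parameters and a smoothed (tanh) friction law:

    # Single-clearance torsional oscillator under a step-down torque:
    # J*th'' = T(t) - c*th' - k*g(th) - Tf*sign(th'), where g() has a dead
    # zone of half-width b (piecewise-linear stiffness). Invented parameters;
    # sign() is smoothed with tanh for numerical integration.
    import numpy as np
    from scipy.integrate import solve_ivp

    J, c, k, b, Tf = 0.01, 0.05, 200.0, 0.02, 0.1

    def gap(th):                     # elastic deflection beyond the clearance
        return np.sign(th) * np.maximum(np.abs(th) - b, 0.0)

    def torque(t):                   # step-down excitation
        return 2.0 if t < 0.5 else 0.2

    def rhs(t, y):
        th, w = y
        return [w, (torque(t) - c*w - k*gap(th) - Tf*np.tanh(50*w)) / J]

    sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
    print("peak angular velocity: %.2f rad/s" % np.max(np.abs(sol.y[1])))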
Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun
2015-09-01
In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machining center whose three-station device provides vectored feed motion on up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes, while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed. At the same time, a simple and efficient zero-calibration method for processing is proposed. To validate the processing model, we used our processing center to process an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) errors of the mirror converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, with the RMS reduced by 43%. The validity and high accuracy of the model are thereby demonstrated.
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
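To make the "two or three cells" examples concrete, here is a minimal, hypothetical firing-rate sketch of two mutually inhibitory cells: depending on the initial condition, the dynamics settle into one of two point attractors (a winner-take-all memory). It illustrates the attractor paradigm generically, not any specific model from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, w_inh, I = 10.0, 6.0, 3.0     # time constant (ms), inhibition, drive

def f(x):
    return 1.0 / (1.0 + np.exp(-(x - 1.0)))   # sigmoidal rate function

def rhs(t, r):
    r1, r2 = r
    return [(-r1 + f(I - w_inh * r2)) / tau,
            (-r2 + f(I - w_inh * r1)) / tau]

# Slightly different initial conditions fall into different attractors
for r0 in ([0.6, 0.4], [0.4, 0.6]):
    sol = solve_ivp(rhs, (0.0, 500.0), r0)
    print(r0, "->", np.round(sol.y[:, -1], 3))
```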
NASA Astrophysics Data System (ADS)
Safarzade, Zohre; Fathi, Reza; Shojaei Akbarabadi, Farideh; Bolorizadeh, Mohammad A.
2018-04-01
The scattering of a completely bare ion by atoms larger than hydrogen is at least a four-body interaction, and the charge transfer channel involves a two-step process. Amongst the two-step interactions of the high-velocity single charge transfer in an ion-atom collision, there is one whose amplitude demonstrates a peak in the angular distribution of the cross sections. This peak, the so-called Thomas peak, was predicted classically by Thomas for a two-step interaction and can also be described through three-body quantum mechanical models. This work discusses a four-body quantum treatment of the charge transfer in ion-atom collisions, where two-step interactions illustrating a Thomas peak are emphasized. In addition, the Pauli exclusion principle is taken into account for the initial and final states as well as the operators. It will be demonstrated that there is a momentum condition for each two-step interaction to occur in a single charge transfer channel, where new classical interactions lead to the Thomas mechanism.
A DACE study on a three stage metal forming process made of Sandvik Nanoflex™
NASA Astrophysics Data System (ADS)
Post, J.; Klaseboer, G.; Stinstra, E.; Huétink, J.
2004-06-01
Sandvik Nanoflex™ combines good corrosion resistance with high strength. The steel has good deformability in austenitic conditions. This material belongs to the group of metastable austenites, so during deformation a strain-induced transformation into martensite takes place. After deformation, the transformation continues as a result of internal residual stresses. Depending on the heat treatment, this stress-assisted transformation is more or less autocatalytic. Both transformations are stress-state, temperature and crystal-orientation dependent. This article presents a constitutive model for this steel, based on the macroscopic material behaviour measured by inductive measurements. Both the stress-assisted and the strain-induced transformation to martensite are incorporated in this model. Path-dependent work hardening is also taken into account, together with the inheritance of dislocations from one phase to the other. The model is implemented in an internal Philips code called CRYSTAL for running simulations. A multi-stage metal forming process is simulated. The process consists of different forming steps with intervals between them to simulate the waiting time between the different metal forming steps. During the engineering of a high-precision metal formed product, questions often arise about the relation between the scatter on the initial parameters, such as the standard deviation of the strip thickness, yield stress etc, and the product accuracy. This becomes even more complex if: • the material is unstable, • the transformation rate depends on the stress state, which is related to friction, • the transformation rate depends on the temperature, which is related to deformation heat and the heat distribution during the entire process. A way to gain more understanding of these phenomena in relation to the process is to perform a process window study, using DACE (Design and Analysis of Computer Experiments). This article gives an example of how to carry out a DACE study on a three-stage metal forming process using a distributed computing technique; a sampling sketch follows below. The method is shown, together with some results. The problem is focused on the influence of the transformation rate, transformation plasticity and dilatation strain on the product accuracy.
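A hedged sketch of the sampling side of such a DACE study: a Latin hypercube design over three scattered input parameters (the ranges below for strip thickness, yield stress and friction coefficient are purely illustrative), with each design row standing for one distributed simulation run.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube design over three scattered inputs (illustrative ranges):
# strip thickness (mm), yield stress (MPa), friction coefficient (-)
sampler = qmc.LatinHypercube(d=3, seed=0)
design = qmc.scale(sampler.random(n=50),
                   l_bounds=[0.28, 250.0, 0.05],
                   u_bounds=[0.32, 350.0, 0.15])
# design[i] gives the inputs of distributed simulation run i; a surrogate
# (e.g. Kriging) is then fitted to the simulated product-accuracy responses.
print(design[:3])
```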
A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams
NASA Astrophysics Data System (ADS)
Molnar, P.
2012-04-01
Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse lead to complex nonlinear threshold-driven dynamics in sediment transport, which have been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics of sediment input. The hypothesis is that this state, in which sediment-transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically-driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focussing on the magnification of intermittency and variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and the number of grains stored in the system at any given time are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus of the discussion will be to demonstrate how even in such a simple model the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are no longer visible, along the lines of the "shredding the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
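A minimal sketch, under assumed threshold and travel-distance choices rather than the paper's calibrated ones, of such a 1-d limited non-local sandpile with grain injection at the top, blocking into steps, and avalanche-like release:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, thresh = 100, 5000, 4      # cells, time steps, grains a step can hold
bed = np.zeros(N, dtype=int)
out = np.zeros(T, dtype=int)     # grains leaving the channel per time step

for t in range(T):
    bed[0] += 1                                # steady input: one grain/step
    for i in np.flatnonzero(bed >= thresh):    # steps that collapse
        n, bed[i] = bed[i], 0
        jumps = i + rng.geometric(0.3, size=n) # random travel distances
        out[t] += int(np.count_nonzero(jumps >= N))
        np.add.at(bed, jumps[jumps < N], 1)    # grains blocked downstream
# 'out' is pulsed and intermittent even though the input is constant
```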
Simplified 4-Step Transportation Planning Process For Any Sized Area
DOT National Transportation Integrated Search
1999-01-01
This paper presents a streamlined version of the Washington, D.C. region's 4-step travel demand forecasting model. The purpose for streamlining the model was to have a model that could replicate the regional model and be run in a new s...
Changing Instructional Practices through Technology Training, Part 2 of 2.
ERIC Educational Resources Information Center
Seamon, Mary
2001-01-01
This second of a two-part article introducing the steps in a school district's teacher professional development model discusses steps three through six: Web page or project; Internet Discovery (with its five phases: question, search, interpretation, composition, sharing); Cyberinquiry; and WebQuests. Three examples are included: Web Page…
A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.
Richter, Mathis; Lins, Jonas; Schöner, Gregor
2017-01-01
Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases, all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, William L; Gunderson, Jake A; Dickson, Peter M
There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict time and location of ignition for simple geometries. However, there has been relatively little attention given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 60s and 70s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 70s, Catalano et al. noted that single-step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 80s, Tarver and McGuire published their well-known three-step kinetic expression that included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson et al. deduced the kinetics and thermodynamics of the phase transition, providing Dickson et al. with the information necessary to develop a four-step model that included a two-step nucleation and growth mechanism for the β-δ phase transition. Initially, an irreversible scheme was proposed. That model accurately predicted the spatial and temporal cook off behavior of the small-scale radial experiment under slow heating conditions, but did not accurately capture the endothermic phase transition at a faster heating rate. The current version of the four-step model includes reversibility and accurately describes the small-scale radial experiment over a wide range of heating rates. We have observed impact-induced friction ignition of PBX 9501 with grit embedded between the explosive and the lower anvil surface. Observation was done using an infrared camera looking through the sapphire bottom anvil. Time to ignition and temperature-time behavior were recorded. The time to ignition was approximately 500 microseconds and the temperature was approximately 1000 K. The four-step reversible kinetic scheme was previously validated for slow cook off scenarios. Our intention was to test the validity for significantly faster hot-spot processes, such as the impact-induced grit friction process studied here. We found the model predicted the ignition time within experimental error. There are caveats to consider when evaluating the agreement. The primary input to the model was friction work over an area computed by a stress analysis.
The work rate itself, and the relative velocity of the grit and substrate, both have a strong dependence on the initial position of the grit. Any errors in the analysis or the initial grit position would affect the model results. At this time, we do not know the sensitivity to these issues. However, the good agreement does suggest that the four-step kinetic scheme may have universal applicability for HMX systems.
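To show the general form of such multi-step decomposition schemes, the sketch below integrates a sequential first-order Arrhenius chain with signed per-step heats at constant temperature; the rate parameters and heats are hypothetical placeholders, not the published Tarver/McGuire or four-step HMX values.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

# Hypothetical parameters for a sequential A -> B -> C -> D chain
A_fac = np.array([1e12, 1e13, 1e14])     # pre-exponentials, 1/s
E_act = np.array([1.5e5, 1.8e5, 2.0e5])  # activation energies, J/mol
q     = np.array([-1e4, 5e4, 2e5])       # step heats, J/kg (< 0 endothermic)
T     = 550.0                            # isothermal hold, K
k     = A_fac * np.exp(-E_act / (R * T)) # step rate constants

def rhs(t, y):
    a, b, c, d = y
    r = k * np.array([a, b, c])          # first-order step rates
    return [-r[0], r[0] - r[1], r[1] - r[2], r[2]]

sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0, 0.0], max_step=1.0)
rates = k[:, None] * sol.y[:3]
heat_rate = (q[:, None] * rates).sum(axis=0)  # net exo/endothermic power
```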
Vitorazi, L; Ould-Moussa, N; Sekar, S; Fresnais, J; Loh, W; Chapel, J-P; Berret, J-F
2014-12-21
Recent studies have pointed out the importance of polyelectrolyte assembly in the elaboration of innovative nanomaterials. Beyond their structures, many important questions on the thermodynamics of association remain unanswered. Here, we investigate the complexation between poly(diallyldimethylammonium chloride) (PDADMAC) and poly(sodium acrylate) (PANa) chains using a combination of three techniques: isothermal titration calorimetry (ITC), static and dynamic light scattering, and electrophoresis. Upon addition of PDADMAC to PANa or vice versa, the results obtained by the different techniques agree well with each other and reveal a two-step process. The primary process is the formation of highly charged polyelectrolyte complexes of size 100 nm. The secondary process is the transition towards a coacervate phase made of polymer-rich and polymer-poor droplets. The binding isotherms measured are accounted for using a phenomenological model that provides the thermodynamic parameters for each reaction. Small positive enthalpies and large positive entropies consistent with a counterion release scenario are found throughout this study. Furthermore, this work stresses the importance of the often underestimated formulation pathway, or mixing order, in polyelectrolyte complexation.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two-step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC through P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using computational methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step, considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
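A toy version of the deconvolution idea, assuming the overall rate signal can be written as a weighted sum of two first-order steps (one exothermic, one endothermic, with hypothetical rate constants and onset time):

```python
import numpy as np
from scipy.optimize import curve_fit

def step_rate(t, k, t0):
    """First-order rate of a step switched on at t0."""
    return np.where(t > t0, k * np.exp(-k * (t - t0)), 0.0)

def overall(t, c1, k1, c2, k2, t0):
    # c1 > 0: exothermic contribution; c2 < 0: endothermic contribution
    return c1 * step_rate(t, k1, 0.0) + c2 * step_rate(t, k2, t0)

t = np.linspace(0.0, 100.0, 500)
rng = np.random.default_rng(1)
y = overall(t, 0.7, 0.12, -0.3, 0.05, 15.0) + rng.normal(0, 1e-3, t.size)

popt, _ = curve_fit(overall, t, y, p0=[1.0, 0.1, -0.5, 0.1, 10.0])
print(np.round(popt, 3))   # recovered contributions and rate constants
```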
Ultramap: the all in One Photogrammetric Solution
NASA Astrophysics Data System (ADS)
Wiechert, A.; Gruber, M.; Karner, K.
2012-07-01
This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was exclusively developed for and used by Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, which represents a revolutionary step in digital photogrammetry. The dense matcher automatically generates digital surface models (DSM) and digital terrain models (DTM) out of a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter and sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process. This rectification step ensures very efficient processing and detects occluded areas by applying a back-matching step. In this dense image matching process a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step which is presented in this paper: a completely automated true-ortho and ortho workflow. For this, the UltraCam images are combined with the DSM or DTM in an automated rectification step, yielding high-quality true-ortho or ortho images from a highly automated workflow. The paper presents the new workflow and first results.
Ko, Jordon; Su, Wen-Jun; Chien, I-Lung; Chang, Der-Ming; Chou, Sheng-Hsin; Zhan, Rui-Yu
2010-02-01
Rice straw, an agricultural waste from Asia's staple crop, was collected as feedstock to convert cellulose into ethanol through enzymatic hydrolysis followed by fermentation. When the two process steps are performed sequentially, the scheme is referred to as separate hydrolysis and fermentation (SHF). The steps can also be performed simultaneously, i.e., simultaneous saccharification and fermentation (SSF). In this research, the kinetic model parameters of the cellulose saccharification step using rice straw as feedstock are obtained from real experimental data of cellulase hydrolysis. Furthermore, this model can be combined with a fermentation model at high glucose and ethanol concentrations to form an SSF model. The fermentation model is based on the cybernetic approach from a paper in the literature, extended to include both glucose and ethanol inhibition terms so as to better represent actual plants. Dynamic effects of the operating variables in the enzymatic hydrolysis and fermentation models are analyzed. The operation of the SSF process is compared to the SHF process. It is shown that the SSF process is better at reducing the processing time when the product (ethanol) concentration is high. The means to improve the productivity of the overall SSF process by properly using aeration during the batch operation are also discussed.
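A minimal SSF sketch in the spirit described here, with hypothetical kinetic constants: enzymatic glucose release inhibited by glucose, coupled to growth with both substrate and product (ethanol) inhibition terms.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical kinetic constants for an SSF sketch
k_h, K_ig = 0.05, 20.0           # hydrolysis rate 1/h, glucose inhibition g/L
mu_max, K_s = 0.4, 1.0           # max growth rate 1/h, Monod constant g/L
K_i, P_max = 100.0, 90.0         # substrate inhibition, ethanol limit g/L
Y_xs, Y_ps = 0.1, 0.45           # biomass and ethanol yields, g/g

def rhs(t, y):
    C, G, X, P = y               # cellulose, glucose, biomass, ethanol (g/L)
    r_h = k_h * C / (1.0 + G / K_ig)                 # enzymatic release
    mu = (mu_max * G / (K_s + G + G * G / K_i)
          * max(0.0, 1.0 - P / P_max))               # dual inhibition
    return [-r_h,
            1.1 * r_h - mu * X / Y_xs,  # 1.1: hydration gain on hydrolysis
            mu * X,
            Y_ps * mu * X / Y_xs]

sol = solve_ivp(rhs, (0.0, 72.0), [80.0, 0.0, 1.0, 0.0], max_step=0.1)
print(f"ethanol after 72 h: {sol.y[3, -1]:.1f} g/L")
```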
A Microstructure-Based Constitutive Model for Superplastic Forming
NASA Astrophysics Data System (ADS)
Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel
2012-11-01
A constitutive model is proposed for simulations of hot metal forming processes. This model is constructed based on the dominant mechanisms that take part in hot forming and includes intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macro stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed that it is very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas-pressure tray forming are simulated, and the results are compared with experimental data.
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary ones. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
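A compact sketch of the second step, assuming a synthetic non-stationary plant variable and hypothetical ARIMA orders; first differencing (the "I" process) renders the series stationary before forecasting.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.1, 1.0, 300))  # synthetic non-stationary signal

# d = 1: the integrated (I) step differences the series to stationarity;
# the (p, q) orders here are hypothetical, not tuned to any plant variable
result = ARIMA(series, order=(2, 1, 1)).fit()
forecast = result.forecast(steps=10)           # fed to the EBP identifier
print(np.round(forecast, 2))
```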
Sommer, Johanna; Lanier, Cédric; Perron, Noelle Junod; Nendaz, Mathieu; Clavet, Diane; Audétat, Marie-Claude
2016-04-01
The aim of this study was to develop a descriptive tool for peer review of clinical teaching skills. Two analogies framed our research: (1) between the patient-centered and the learner-centered approach; (2) between the structures of clinical encounters (Calgary-Cambridge communication model) and teaching sessions. During the course of one year, each step of the action research was carried out in collaboration with twelve clinical teachers from an outpatient general internal medicine clinic and with three experts in medical education. The content validation consisted of a literature review, expert opinion and the participatory research process. Interrater reliability was evaluated by three clinical teachers coding thirty audiotaped standardized learner-teacher interactions. This tool contains sixteen items covering the process and content of clinical supervisions. Descriptors define the expected teaching behaviors for three levels of competence. Interrater reliability was significant for eleven items (Kendall's coefficient p<0.05). This peer assessment tool has high reliability and can be used to facilitate the acquisition of teaching skills. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to gain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established from the mean displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
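Forward ray intersection can be sketched as linear least squares triangulation from calibrated projection matrices; this standard DLT form is an illustration, not necessarily the exact adaptive least squares formulation used in the paper.

```python
import numpy as np

def triangulate(P_list, uv_list):
    """Forward ray intersection by linear least squares (DLT).

    P_list:  3x4 camera projection matrices from the calibration step.
    uv_list: matched (u, v) image coordinates of the same point per view.
    """
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])   # each view contributes two equations
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null-space solution, homogeneous
    return X[:3] / X[3]             # Euclidean 3-D point

# For the triplet case, pass the three camera matrices and the three
# matched image points of one tracked feature.
```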
Modelling Feedback in Virtual Patients: An Iterative Approach.
Stathakarou, Natalia; Kononowicz, Andrzej A; Henningsohn, Lars; McGrath, Cormac
2018-01-01
Virtual Patients (VPs) offer learners the opportunity to practice clinical reasoning skills and have recently been integrated in Massive Open Online Courses (MOOCs). Feedback is a central part of a branched VP, allowing the learner to reflect on the consequences of their decisions and actions. However, there is insufficient guidance on how to design feedback models within VPs, especially in the context of their application in MOOCs. In this paper, we share our experiences from building a feedback model for a bladder cancer VP in a Urology MOOC, following an iterative process in three steps. Our results demonstrate how we can systematize the process of improving the quality of VP components by the application of known literature frameworks and extend them with a feedback module. We illustrate the design and re-design process and exemplify it with content from our VP. Our results can act as a starting point for discussions on modelling feedback in VPs and invite future research on the topic.
Khan, Md Abdul Shafeeuulla; Ganguly, Bishwajit
2012-05-01
Oximate anions are used as potential reactivating agents for OP-inhibited AChE because they possess enhanced nucleophilic reactivity due to the α-effect. We have demonstrated the process of reactivating the VX-AChE adduct with formoximate and hydroxylamine anions by applying the DFT approach at the B3LYP/6-311G(d,p) level of theory. The calculated results suggest that the hydroxylamine anion is more efficient than the formoximate anion at reactivating VX-inhibited AChE. The reaction of the formoximate anion with the VX-AChE adduct is a three-step process, while the reaction of the hydroxylamine anion with the VX-AChE adduct appears to be a two-step process. The rate-determining step in the process is the initial attack on the VX of the VX-AChE adduct by the nucleophile. The subsequent steps are exergonic in nature. The potential energy surface (PES) for the reaction of the VX-AChE adduct with the hydroxylamine anion reveals that the reactivation process is facilitated by a lower free energy of activation (by 1.7 kcal mol(-1)) than that of the formoximate anion at the B3LYP/6-311G(d,p) level of theory. The higher free energy of activation for the reverse reactivation reaction between the hydroxylamine anion and the VX-serine adduct further suggests that the hydroxylamine anion is a very good antidote agent for the reactivation process. The activation barriers calculated in solvent using the polarizable continuum model (PCM) for the reactivation of the VX-AChE adduct with the hydroxylamine anion were also found to be low. The calculated results suggest that V-series compounds can be more toxic than G-series compounds, which is in accord with earlier experimental observations.
Development of a career coaching model for medical students
Hur, Yera
2016-01-01
Purpose: Deciding on a future career path or choosing a career specialty is an important academic decision for medical students. The purpose of this study is to develop a career coaching model for medical students. Methods: This research was carried out in three steps. The first step was a systematic review of previous studies. The second step was a needs assessment of medical students. The third step was the construction of a career coaching model using the results acquired from the reviewed literature and the survey. Results: The career coaching stages were defined as three broad phases: the “crystallization” period (pre-medical years 1 and 2), the “specification” period (medical years 1 and 2), and the “implementation” period (medical years 3 and 4). Conclusion: The career coaching model for medical students can be used in programming career coaching contents and also in identifying the outcomes of career coaching programs at an institutional level. PMID:26867586
Adsorption-desorption behavior of atrazine on agricultural soils in China.
Yue, Lin; Ge, ChengJun; Feng, Dan; Yu, Huamei; Deng, Hui; Fu, Bomin
2017-07-01
Adsorption and desorption are important processes that affect atrazine transport, transformation, and bioavailability in soils. In this study, the adsorption-desorption characteristics of atrazine in three soils (laterite, paddy soil and alluvial soil) were evaluated using the batch equilibrium method. The results showed that the adsorption kinetics of atrazine in soils proceeded in two steps, a "fast" step followed by a "slow" step, and could be well described by a pseudo-second-order model. In addition, the adsorption equilibrium isotherms were nonlinear and were well fitted by the Freundlich and Langmuir models. The adsorption data for laterite and paddy soil were better fitted by the Freundlich model, while for alluvial soil the Langmuir model described the data better. The maximum atrazine sorption capacities ranked as follows: paddy soil > alluvial soil > laterite. Results of thermodynamic calculations indicated that atrazine adsorption on the three tested soils was spontaneous and endothermic. The desorption data showed that negative hysteresis occurred. Furthermore, a lower solution pH value was conducive to the adsorption of atrazine in soils. The atrazine adsorption in the three tested soils was controlled by physical adsorption, including partition and surface adsorption. At lower equilibrium concentrations, the adsorption process was dominated by surface adsorption, while with increasing equilibrium concentration, partition became predominant. Copyright © 2016. Published by Elsevier B.V.
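A minimal sketch of fitting the two isotherm models to equilibrium data and comparing them by R²; the data points below are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, Kf, n):
    return Kf * Ce ** (1.0 / n)

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce (mg/L), qe (mg/kg) for one soil
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = np.array([35., 60., 100., 160., 240., 330.])

for name, f, p0 in [("Freundlich", freundlich, [50.0, 2.0]),
                    ("Langmuir", langmuir, [500.0, 0.1])]:
    popt, _ = curve_fit(f, Ce, qe, p0=p0)
    ss_res = np.sum((qe - f(Ce, *popt)) ** 2)
    r2 = 1.0 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(name, np.round(popt, 3), "R^2 =", round(r2, 4))
```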
Li, Jianyou; Tanaka, Hiroya
2018-01-01
Traditional splinting processes are skill dependent and irreversible, and patient satisfaction levels during rehabilitation are invariably lowered by the heavy structure and poor ventilation of splints. To overcome these drawbacks, use of 3D-printing technology has been proposed in recent years, and public awareness of it has increased. However, application of 3D-printing technologies is limited by the low CAD proficiency of clinicians as well as unforeseen scan flaws within anatomic models. A programmable modeling tool has been employed to develop a semi-automatic design system for generating a printable splint model. The modeling process was divided into five stages, and the detailed steps involved in the construction of the proposed system, as well as automatic thickness calculation, the lattice structure, and the assembly method, are thoroughly described. The proposed approach allows clinicians to verify the state of the splint model at every stage, thereby facilitating adjustment of input content and/or other parameters to help solve possible modeling issues. A finite element analysis simulation was performed to evaluate the structural strength of generated models. A fit investigation was carried out with fabricated splints and volunteers to assess the wearing experience. Manual modeling steps involved in complex splint designs have been programmed into the proposed automatic system. Clinicians define the splinting region by drawing two curves, thereby obtaining the final model within minutes. The proposed system is capable of automatically patching up minor flaws within the limb model as well as calculating the thickness and lattice density of various splints. Large splints can be divided into three parts for simultaneous multiple printing. This study highlights the advantages, limitations, and possible strategies concerning the application of programmable modeling tools in clinical processes, thereby helping clinicians with lower CAD proficiency to become adept with the splint design process and improving the overall design efficiency of 3D-printed splints.
Regenerative life support system research
NASA Technical Reports Server (NTRS)
1988-01-01
This report contains sections on modeling, experimental activities during the grant period, and topics under consideration for the future. The sections include discussions of: four concurrent modeling approaches that were being integrated near the end of the period (knowledge-based modeling support infrastructure and data base management, object-oriented steady state simulations for three concepts, steady state mass-balance engineering tradeoff studies, and object-oriented time-step, quasidynamic simulations of generic concepts); interdisciplinary research activities, beginning with a discussion of RECON lab development and use, and followed with discussions of waste processing research, algae studies and subsystem modeling, low pressure growth testing of plants, subsystem modeling of plants, control of plant growth using lighting and CO2 supply as variables, search for and development of lunar soil simulants, preliminary design parameters for a lunar base life support system, and research considerations for food processing in space; and appendix materials, including a discussion of the CELSS Conference, detailed analytical equations for mass-balance modeling, plant modeling equations, and parametric data on existing life support systems for use in modeling.
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be efficiently used. The results overall also confirm that the ANN modelling approach may have great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
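The three reported error measures are standard; a minimal sketch of computing them for one effluent variable (the measured/predicted values below are hypothetical, not the plant data):

```python
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def mae(obs, pred):
    return np.mean(np.abs(obs - pred))

def mape(obs, pred):
    return 100.0 * np.mean(np.abs((obs - pred) / obs))

# Hypothetical effluent COD values (mg/L): measured vs. ANN prediction
obs = np.array([48.0, 52.0, 45.0, 60.0, 55.0])
pred = np.array([46.1, 54.3, 44.0, 57.8, 56.2])
print(rmse(obs, pred), mae(obs, pred), mape(obs, pred))
```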
A three step supercritical process to improve the dissolution rate of eflucimibe.
Rodier, Elisabeth; Lochard, Hubert; Sauceau, Martial; Letourneau, Jean-Jacques; Freiss, Bernard; Fages, Jacques
2005-10-01
The aim of this study is to improve the dissolution properties of a poorly soluble active substance, Eflucimibe, by associating it with gamma-cyclodextrin. To achieve this objective, a new three-step process based on supercritical fluid technology has been proposed. First, Eflucimibe and cyclodextrin are co-crystallized using an anti-solvent process, dimethylsulfoxide being the solvent and supercritical carbon dioxide being the anti-solvent. Second, the co-crystallized powder is held in a static mode under supercritical conditions for several hours; this is the maturing step. Third, in a final stripping step, supercritical CO2 is flowed through the matured powder to extract the residual solvent. The coupling of the first two steps brings about a significant synergistic effect that improves the dissolution rate of the drug. The nature of the entity obtained at the end of each step is discussed and some suggestions are made as to what happens in these operations. It is shown that the co-crystallization step ensures a good dispersion of both compounds and is rather insensitive to the operating parameters tested. The maturing step allows some dissolution-recrystallization to occur, thus intensifying the intimate contact between the two compounds. Addition of water is necessary to make maturing effective, as this step is governed by the transfer properties of the medium. The stripping step allows extraction of the residual solvent but also removes some of the Eflucimibe, which is the main drawback of this final stage.
Method and apparatus for automated assembly
Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.
1999-01-01
A process and apparatus generate a sequence of steps for the assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step is also constraint feasible, i.e., it satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
Computational experience with a three-dimensional rotary engine combustion model
NASA Astrophysics Data System (ADS)
Raju, M. S.; Willis, E. A.
1990-04-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results of limited, initial computational trials are presented as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy of, or need for improvement in, the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
A Model for Dissolution of Lime in Steelmaking Slags
NASA Astrophysics Data System (ADS)
Sarkar, Rahul; Roy, Ushasi; Ghosh, Dinabandhu
2016-08-01
In a previous study by Sarkar et al. (Metall. Mater. Trans. B 46B:961 2015), a dynamic model of LD steelmaking was developed. The prediction of the previous model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) for the bath (metal) composition matched well with the plant data (Cicutti et al. in Proceedings of 6th International Conference on Molten Slags, Fluxes and Salts, Stockholm City, 2000). However, with respect to the slag composition, the prediction was not satisfactory. The current study aims to improve upon the previous model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) by incorporating a lime dissolution submodel into the earlier one. From the industrial point of view, an understanding of the lime dissolution kinetics is important to meet the ever-increasing demand for producing low-P steel at a low basicity. In the current study, a three-step kinetic scheme for lime dissolution is hypothesized on the assumption that a solid layer of 2CaO·SiO2 should form around the unreacted core of the lime. From the available experimental data, it seems improbable that the observed kinetics should be controlled singly by any one kinetic step. Accordingly, a general, mixed-control model has been proposed to calculate the dissolution rate of the lime under varying slag compositions and temperatures. First, the rate equation for each of the three rate-controlling steps has been derived, for three different lime geometries. Next, the rate equation for the mixed-control kinetics has been derived and solved to find the dissolution rate. The model predictions have been validated by means of the experimental data available in the literature. In addition, the effects of the process conditions on the dissolution rate have been studied, and compared with the experimental results wherever possible. Incorporation of this submodel into the earlier global model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) enables the prediction of the lime dissolution rate in the dynamic system of LD steelmaking. In addition, with the inclusion of this submodel, a significant improvement in the prediction of the slag composition during the main blow period has been observed.
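A hedged sketch of a mixed-control (series-resistance) shrinking-core rate for a spherical lime particle, with film transfer, diffusion through the 2CaO·SiO2 layer, and interfacial reaction in series; every parameter value is an illustrative assumption, not a value fitted to slag data.

```python
import numpy as np

# Illustrative parameter values (assumptions only)
R0  = 1e-3    # initial particle radius, m
k_m = 1e-5    # slag-film mass transfer coefficient, m/s
D_e = 1e-9    # effective diffusivity through the 2CaO.SiO2 layer, m^2/s
k_r = 1e-5    # interfacial reaction rate constant, m/s
C_b = 500.0   # driving concentration difference, mol/m^3
rho = 5.4e4   # molar density of lime, mol/m^3

def core_shrink_rate(rc):
    """dr_c/dt for core radius rc from three series resistances."""
    res = (1.0 / (4 * np.pi * R0**2 * k_m)               # film transfer
           + (1.0 / rc - 1.0 / R0) / (4 * np.pi * D_e)   # layer diffusion
           + 1.0 / (4 * np.pi * rc**2 * k_r))            # reaction
    W = C_b / res                                        # mol/s dissolved
    return -W / (4 * np.pi * rc**2 * rho)

t, dt, rc = 0.0, 5.0, R0
while rc > 0.01 * R0:            # explicit Euler until ~99% core consumed
    rc += core_shrink_rate(rc) * dt
    t += dt
print(f"time to near-complete dissolution: {t / 3600:.1f} h")
```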
NASA Astrophysics Data System (ADS)
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of the rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large scale climate signals have proved effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps: rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data driven models are developed and used to forecast rainfall using large scale climate signals as rainfall predictors. Due to the strong effect of different sources of uncertainty on the output of hydrologic models, in the second step uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined, through developing a weighting method based on K-means clustering. Model parameter and input uncertainty are investigated using an adaptive Markov Chain Monte Carlo method. Finally, the calibrated rainfall-runoff models are driven using the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of the uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is improved by up to 50% in comparison with simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate lead time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazard. Results of this study can be used in the identification of the main factors affecting flood hazard analysis.
Computer Processing Of Tunable-Diode-Laser Spectra
NASA Technical Reports Server (NTRS)
May, Randy D.
1991-01-01
Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
Soliman, Moomen; Eldyasti, Ahmed
2017-06-01
Recently, partial nitrification has been widely adopted either for the nitrite shunt process or as an intermediate nitrite generation step for the Anammox process. However, partial nitrification has been hindered by the complexity of maintaining stable nitrite accumulation at high nitrogen loading rates (NLRs), which affects the feasibility of the process for high nitrogen content wastewater. Thus, the operational data of a lab-scale SBR performing complete partial nitrification as the first step of a nitrite shunt process at NLRs of 0.3-1.2 kg/(m³ d) have been used to calibrate and validate a process model developed using BioWin® in order to describe the long-term dynamic behavior of the SBR. Moreover, an identifiability analysis step has been introduced into the calibration protocol to eliminate the need for respirometric analysis in SBR models. The calibrated model was able to accurately predict the daily effluent ammonia, nitrate, nitrite and alkalinity concentrations and pH during all different operational conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Cihak, David F.; Bowlin, Tammy
2009-01-01
The researchers examined the use of video modeling by means of a handheld computer as an alternative instructional delivery system for learning basic geometry skills. Three high school students with learning disabilities participated in this study. Through video modeling, teacher-developed video clips showing step-by-step problem solving processes…
The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...
Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover
NASA Technical Reports Server (NTRS)
Dangelo, K. R.
1974-01-01
A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
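A minimal sketch of the idea of combining height and gradient observations in one weighted least squares fit of a quadric terrain patch; the geometry, noise levels and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.array([1.0, 0.2, -0.1, 0.05, 0.02, -0.03])  # quadric coefficients

def design_rows(x, y):
    h  = [1, x, y, x * x, x * y, y * y]   # height observation z(x, y)
    gx = [0, 1, 0, 2 * x, y, 0]           # gradient observation dz/dx
    gy = [0, 0, 1, 0, x, 2 * y]           # gradient observation dz/dy
    return h, gx, gy

A, b = [], []
for _ in range(20):                       # 20 measurement stations
    x, y = rng.uniform(-5, 5, 2)
    for row, sigma in zip(design_rows(x, y), (0.05, 0.01, 0.01)):
        r = np.asarray(row, float)
        A.append(r / sigma)                        # weight by accuracy
        b.append(r @ true / sigma + rng.normal())  # noisy weighted datum
coef, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
print(np.round(coef, 3))                  # close to 'true'
```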
Modeling behavior dynamics using computational psychometrics within virtual worlds.
Cipresso, Pietro
2015-01-01
In case of fire in a building, how will people behave in the crowd? The behavior of each individual affects the behavior of others and, conversely, each one behaves considering the crowd as a whole and the individual others. In this article, I propose a three-step method to explore a brand new way to study behavior dynamics. The first step relies on the creation of specific situations with standard techniques (such as mental imagery, text, video, and audio) and an advanced technique [Virtual Reality (VR)] to manipulate experimental settings. The second step concerns the measurement of behavior in one, two, or many individuals, focusing on parameter extraction to provide information about the behavior dynamics. Finally, the third step uses the parameters collected and measured in the previous two steps to simulate possible scenarios and, through computational models, to forecast, understand, and explain behavior dynamics at the social level. An experimental study is also included to demonstrate the three-step method and a possible scenario.
Three basic principles of success.
Levin, Roger
2003-06-01
Basic business principles all but ensure success when they are followed consistently. Putting strategies, objectives and tactics in place is the first step toward being able to document systems, initiate scripting and improve staff training. Without these basic steps, systems, scripting and training for practice performance would be hit or miss, at best. More importantly, applying business principles ensures that limited practice resources are dedicated to the achievement of the strategy. By following this simple three-step process, a dental practice can significantly enhance both financial success and dentist and staff satisfaction.
NASA Astrophysics Data System (ADS)
Linke, Bernd M.; Gerber, Thomas; Hatscher, Ansgar; Salvatori, Ilaria; Aranguren, Iñigo; Arribas, Maribel
2018-01-01
Based on 22MnB5 hot stamping steel, three model alloys containing 0.5, 0.8, and 1.5 wt pct Si were produced, heat treated by quenching and partitioning (Q&P), and characterized. Aided by DICTRA calculations, the thermal Q&P cycles were designed to fit into industrial hot stamping by keeping partitioning times ≤ 30 seconds. As expected, Si increased the amount of retained austenite (RA) stabilized after final cooling. However, for the intermediate Si alloy the heat treatment exerted a particularly pronounced influence, with an RA content three times as high for the one-step process compared to the two-step process. It appeared that 0.8 wt pct Si sufficed to suppress direct cementite formation from within martensite laths but did not sufficiently stabilize carbon-soaked RA at higher temperatures. Tensile and bending tests showed strongly diverging effects of austenite on ductility. Total elongation improved consistently with increasing RA content, independently of its carbon content. In contrast, the bending angle was not affected by high-carbon RA but deteriorated almost linearly with the amount of low-carbon RA.
The HERSCHEL/PACS early Data Products
NASA Astrophysics Data System (ADS)
Wieprecht, E.; Wetzstein, M.; Huygen, R.; Vandenbussche, B.; De Meester, W.
2006-07-01
ESA's Herschel Space Observatory, to be launched in 2007, is the first space observatory covering the full far-infrared and submillimeter wavelength range (60-670 microns). The Photodetector Array Camera & Spectrometer (PACS) is one of the three science instruments. It contains two Ge:Ga photoconductor arrays and two bolometer arrays to perform imaging line spectroscopy and imaging photometry in the 60-210 micron wavelength band. The HERSCHEL ground segment (Herschel Common Science System - HCSS) is implemented using JAVA technology and written in a joint effort by the HERSCHEL Science Center and the three instrument teams. The PACS Common Software System (PCSS) is based on the HCSS and used for the online and offline analysis of PACS data. For telemetry bandwidth reasons, PACS science data are partially processed on board, compressed, cut into telemetry packets and transmitted to the ground. These steps are instrument-mode dependent. We will present the software model which allows the discrete on-board processing steps to be reversed and the data to be evaluated. After decompression and reconstruction, the detector data and instrument status information are organized in two main PACS Products. The design of these JAVA classes considers the individual sampling rates, data formats, memory and performance optimization aspects and comfortable user interfaces.
Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)
ERIC Educational Resources Information Center
Bogartz, Richard S.; Staub, Adrian
2012-01-01
In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…
Variability in the Length and Frequency of Steps of Sighted and Visually Impaired Walkers
ERIC Educational Resources Information Center
Mason, Sarah J.; Legge, Gordon E.; Kallie, Christopher S.
2005-01-01
The variability of the length and frequency of steps was measured in sighted and visually impaired walkers at three different paces. The variability was low, especially at the preferred pace, and similar for both groups. A model incorporating step counts and step frequency provides good estimates of the distance traveled. Applications to…
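The distance model described above lends itself to a compact illustration. A minimal sketch (the variable names and the constant mean-step-length simplification are ours, not the authors'):

```python
# Illustrative only: estimate distance traveled and walking time from
# step counts, assuming a constant mean step length per walker.

def estimate_distance(step_count, mean_step_length_m):
    """Distance (m) = number of steps x mean step length (m)."""
    return step_count * mean_step_length_m

def estimate_duration(step_count, step_frequency_hz):
    """Walking time (s) = number of steps / step frequency (steps/s)."""
    return step_count / step_frequency_hz

# Example: 120 steps at 0.7 m per step, taken at 1.8 steps per second
print(estimate_distance(120, 0.7))   # ~84 m
print(estimate_duration(120, 1.8))   # ~66.7 s
```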
Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing
Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun
2016-01-01
Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to step formation in the manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages of the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of the experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. The structure on the polished surface was found to be in accordance with certain surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267
Baik, Seong-Yi; Crabtree, Benjamin F; Gonzales, Junius J
2013-11-01
Depression is prevalent in primary care (PC) practices and poses a considerable public health burden in the United States. Despite nearly four decades of efforts to improve depression care quality in PC practices, a gap remains between desired treatment outcomes and the reality of how depression care is delivered. This article presents a real-world PC practice model of depression care, elucidating the processes and their influencing conditions. Grounded theory methodology was used for the data collection and analysis to develop a depression care model. Data were collected from 70 individual interviews (60 to 70 min each), three focus group interviews (n = 24, 2 h each), two surveys per clinician, and investigators' field notes on practice environments. Interviews were audiotaped and transcribed for analysis. Surveys and field notes complemented interview data. Seventy primary care clinicians from 52 PC offices in the Midwest: 28 general internists, 28 family physicians, and 14 nurse practitioners. A depression care model was developed that illustrates how real-world conditions infuse complexity into each step of the depression care process. Depression care in PC settings is mediated through clinicians' interactions with patients, practice, and the local community. A clinician's interactional familiarity ("familiarity capital") was a powerful facilitator for depression care. For the recognition of depression, three previously reported processes and three conditions were confirmed. For the management of depression, 13 processes and 11 conditions were identified. Empowering the patient was a parallel process to the management of depression. The clinician's ability to develop and utilize interactional relationships and resources needed to recognize and treat a person with depression is key to depression care in primary care settings. The interactional context of depression care makes empowering the patient central to depression care delivery.
A problem-solving routine for improving hospital operations.
Ghosh, Manimay; Sobek Ii, Durward K
2015-01-01
The purpose of this paper is to examine empirically why a systematic problem-solving routine can play an important role in the process improvement efforts of hospitals. Data on 18 process improvement cases were collected through semi-structured interviews, reports and other documents, and artifacts associated with the cases. The data were analyzed using a grounded theory approach. Adherence to all the steps of the problem-solving routine correlated with greater degrees of improvement across the sample. Analysis resulted in two models. The first partially explains why hospital workers tended to enact short-term solutions when faced with process-related problems, and tended not to seek longer-term solutions that prevent problems from recurring. The second model highlights a set of self-reinforcing behaviors that are more likely to address problem recurrence and result in sustained process improvement. The study was conducted in one hospital setting. Hospital managers can improve patient care and increase operational efficiency by adopting and diffusing problem-solving routines that embody three key characteristics. This paper offers new insights on why caregivers adopt short-term approaches to problem solving. Three characteristics of an effective problem-solving routine in a healthcare setting are proposed.
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and an automated process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are described here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility of measuring dynamic events such as the speech of a person.
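The forward-intersection step admits a compact linear formulation. Below is a minimal sketch (not the authors' implementation) of DLT-style triangulation from calibrated cameras; the synthetic projection matrices and test point are our own:

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """Linear forward intersection: recover a 3-D point from its
    projections in several calibrated cameras.
    points_2d : list of (u, v) image coordinates, one per camera
    proj_mats : list of 3x4 projection matrices from the calibration step
    """
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous least-squares solution: right singular vector
    # belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check with two cameras and a known point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 baseline
X_true = np.array([0.1, -0.05, 1.0, 1.0])
pts = []
for P in (P1, P2):
    x = P @ X_true
    pts.append((x[0] / x[2], x[1] / x[2]))
print(triangulate(pts, [P1, P2]))  # ~ [0.1, -0.05, 1.0]
```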
A Computer-Aided Type-II Fuzzy Image Processing for Diagnosis of Meniscus Tear.
Zarandi, M H Fazel; Khadangi, A; Karimi, F; Turksen, I B
2016-12-01
Meniscal tear is one of the prevalent knee disorders among young athletes and the aging population, and requires correct diagnosis and surgical intervention, if necessary. Both the errors introduced by human intervention and the obstacles of manual meniscal tear detection highlight the need for automatic detection techniques. This paper presents a type-2 fuzzy expert system for meniscal tear diagnosis using proton density (PD) magnetic resonance images (MRI). The proposed type-2 fuzzy image processing model is composed of three distinct modules: pre-processing, segmentation, and classification. A λ-enhancement algorithm is used to perform the pre-processing step. For the segmentation step, Interval Type-2 Fuzzy C-Means (IT2FCM) is first applied to the images, the outputs of which are then employed by Interval Type-2 Possibilistic C-Means (IT2PCM) for post-processing. The segmentation stage concludes with re-estimation of the "η" value to enhance IT2PCM. Finally, a perceptron neural network with two hidden layers is used for the classification stage. The results of the proposed type-2 expert system have been compared with a well-known segmentation algorithm, confirming the superiority of the proposed system in meniscal tear recognition.
Conceptual analysis of Physiology of vision in Ayurveda.
Balakrishnan, Praveen; Ashwini, M J
2014-07-01
The process by which the world outside is seen is termed the visual process, or the physiology of vision. There are three phases in this visual process: the refraction of light, the conversion of light energy into electrical impulses, and finally peripheral and central neurophysiology. With the advent of modern instruments, the step-by-step biochemical changes occurring at each level of the visual process have been deciphered. Many investigations have emerged to track these changes, helping to diagnose the exact nature of disease. Ayurveda has described this physiology of vision based on the functions of vata and pitta. The philosophical textbook of ayurveda, Tarka Sangraha, gives certain basic facts about the visual process. This article discusses the second and third phases of the visual process. A step-by-step analysis of the visual process through the spectacles of ayurveda, amalgamated with the basics of philosophy from Tarka Sangraha, is carried out critically to generate a concrete idea of the physiology and thereby interpret the pathology on the grounds of ayurveda based on investigative reports.
Modeling of protein binary complexes using structural mass spectrometry data
Kamal, J.K. Amisha; Chance, Mark R.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684
NASA Technical Reports Server (NTRS)
Dawson, John R
1936-01-01
The results of tank tests of three models of flying-boat hulls of the pointed-step type with different angles of dead rise are given in charts and are compared with results from tests of more conventional hulls. Increasing the angle of dead rise from 15 to 25 degrees: had little effect on the hump resistance; increased the resistance throughout the planing range; increased the best trim angle; reduced the maximum positive trimming moment required to obtain best trim angle; and had but a slight effect on the spray characteristics. For approximately the same angles of dead rise, the resistance of the pointed-step hulls was considerably lower at high speeds than that of the more conventional hulls.
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses (also called the gap state), the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only provide three theoretically proven control laws for an EDM adaptive control system, but also show in practice that the TP control law is the best with respect to machining stability and efficiency, even though the MVPPC control law already provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
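To make the adaptive loop concrete, here is a minimal Python sketch of real-time model-parameter estimation by recursive least squares driving a one-step-ahead tuning of the discharging cycle. The first-order plant model, parameter values, and deadbeat-style update are our assumptions; the MV, MVPPC, and TP laws in the paper are more elaborate:

```python
import numpy as np

class RecursiveLeastSquares:
    """Online estimation of linear model parameters theta from
    regressor/observation pairs, with exponential forgetting."""
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)     # model parameters
        self.P = np.eye(n_params) * 1e3     # parameter covariance
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(2)
y, u, y_ref = 0.3, 1.0, 0.2             # arcing ratio, cycle, gap setpoint
for _ in range(50):
    # stand-in plant: y[k+1] = 0.7*y[k] + 0.1*u[k] + disturbance
    y_next = 0.7 * y + 0.1 * u + rng.normal(0, 0.005)
    a, b = rls.update([y, u], y_next)
    if abs(b) > 1e-6:                   # one-step-ahead deadbeat tuning:
        # choose u so the predicted a*y + b*u lands on the setpoint
        u = float(np.clip((y_ref - a * y_next) / b, 0.1, 10.0))
    y = y_next
print(a, b, y)   # estimates approach (0.7, 0.1); y approaches y_ref
```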
2012-01-01
Recent progress in stem cell biology, notably cell fate conversion, calls for novel theoretical understanding of cell differentiation. The existing qualitative concept of Waddington's "epigenetic landscape" has attracted particular attention because it captures subsequent fate decision points, thus manifesting the hierarchical ("tree-like") nature of cell fate diversification. Here, we generalized a recent work and explored such a developmental landscape for a two-gene fate decision circuit by integrating the underlying probability landscapes with different parameters (corresponding to distinct developmental stages). The change of entropy production rate along the parameter changes indicates which parameter changes can represent a normal developmental process and which cannot. The transdifferentiation paths over the landscape under certain conditions reveal the possibility of a direct and reversible phenotypic conversion. As the intensity of noise increases, we found that the landscape becomes flatter and the dominant paths straighter, implying the importance of biological noise-processing mechanisms in development and reprogramming. We further extended the landscape of the one-step fate decision to that of two-step decisions in central nervous system (CNS) differentiation. A minimal network and dynamic model for CNS differentiation was first constructed, in which two three-gene motifs are coupled. We then carried out stochastic differential equation (SDE) simulations to validate the network and model. By integrating the two landscapes for the two switch gene pairs, we constructed the two-step developmental landscape for CNS differentiation. Our work provides new insights into cellular differentiation and important clues for better reprogramming strategies. PMID:23300518
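As an illustration of the kind of dynamics underlying such landscapes, the following is a minimal Euler-Maruyama simulation of a canonical two-gene mutual-inhibition/self-activation circuit; the parameter values and noise model are our assumptions, not the paper's:

```python
import numpy as np

def simulate(T=200.0, dt=0.01, sigma=0.05, seed=0):
    """Euler-Maruyama integration of a two-gene switch: each gene
    activates itself and represses the other (Hill kinetics)."""
    rng = np.random.default_rng(seed)
    a, b, k, S, n = 1.0, 1.0, 1.0, 0.5, 4   # assumed kinetic parameters
    x, y = 1.5, 0.2                          # initial expression levels
    traj = []
    for _ in range(int(T / dt)):
        hx = a * x**n / (S**n + x**n) + b * S**n / (S**n + y**n) - k * x
        hy = a * y**n / (S**n + y**n) + b * S**n / (S**n + x**n) - k * y
        x += hx * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y += hy * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x, y = max(x, 0.0), max(y, 0.0)      # expression stays nonnegative
        traj.append((x, y))
    return np.array(traj)

traj = simulate()
print(traj[-1])  # settles near one attractor; larger sigma makes hopping
                 # between attractors (a flatter landscape) more likely
```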
Uncovering Oscillations, Complexity, and Chaos in Chemical Kinetics Using Mathematica
NASA Astrophysics Data System (ADS)
Ferreira, M. M. C.; Ferreira, W. C., Jr.; Lino, A. C. S.; Porto, M. E. G.
1999-06-01
Unlike reactions with no peculiar temporal behavior, in oscillatory reactions concentrations can rise and fall spontaneously in a cyclic or disorganized fashion. In this article, the software Mathematica is used for a theoretical study of kinetic mechanisms of oscillating and chaotic reactions. A first simple example is introduced through a three-step reaction, called the Lotka model, which exhibits a temporal behavior characterized by damped oscillations. The phase plane method of dynamic systems theory is introduced for a geometric interpretation of the reaction kinetics without solving the differential rate equations. The equations are later numerically solved using the built-in routine NDSolve and the results are plotted. The next example, still with a very simple mechanism, is the Lotka-Volterra model reaction, which oscillates indefinitely. The kinetic process and rate equations are also represented by a three-step reaction mechanism. The most important difference between this and the former reaction is that the undamped oscillation has two autocatalytic steps instead of one. The periods of oscillations are obtained by using the discrete Fourier transform (DFT)-a well-known tool in spectroscopy, although not so common in this context. In the last section, it is shown how a simple model of biochemical interactions can be useful to understand the complex behavior of important biological systems. The model consists of two allosteric enzymes coupled in series and activated by its own products. This reaction scheme is important for explaining many metabolic mechanisms, such as the glycolytic oscillations in muscles, yeast glycolysis, and the periodic synthesis of cyclic AMP. A few of many possible dynamic behaviors are exemplified through a prototype glycolytic enzymatic reaction proposed by Decroly and Goldbeter. By simply modifying the initial concentrations, limit cycles, chaos, and birhythmicity are computationally obtained and visualized.
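The Lotka-Volterra mechanism described above can be integrated numerically outside Mathematica as well. A minimal Python sketch, analogous to an NDSolve call (rate constants and initial concentrations are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-step Lotka-Volterra mechanism with [A] held constant:
#   A + X -> 2X (k1, autocatalytic)
#   X + Y -> 2Y (k2, autocatalytic)
#   Y     -> P  (k3)
k1, k2, k3, a = 1.0, 1.0, 1.0, 1.0

def rates(t, c):
    x, y = c
    return [k1 * a * x - k2 * x * y,
            k2 * x * y - k3 * y]

sol = solve_ivp(rates, (0.0, 30.0), [1.5, 0.5], max_step=0.01)
x, y = sol.y
# Undamped oscillation: the (x, y) trajectory is a closed orbit in the
# phase plane around the fixed point (k3/k2, k1*a/k2).
print(x.max(), x.min(), y.max(), y.min())
```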
Anokye, Nana Kwame; Pokhrel, Subhash; Buxton, Martin; Fox-Rushby, Julia
2013-06-01
Little is known about the correlates of meeting recommended levels of participation in physical activity (PA) and how this understanding informs public health policies on behaviour change. We analyse who meets the recommended level of participation in PA, separately for males and females, by applying 'process' modelling frameworks (single vs. sequential two-step process). Using the Health Survey for England 2006 (n = 14 142; ≥ 16 years), gender-specific regression models were estimated using bivariate probit with selectivity correction and single probit models. A 'sequential, two-step process' modelled participation and meeting the recommended level separately, whereas the 'single process' considered both participation and level together. In females, meeting the recommended level was associated with holding a degree [marginal effect (ME) = 0.013] and age (ME = -0.001), whereas in males, age was a significant correlate (ME = -0.003 to -0.004). The order of importance of correlates was similar across genders, with ethnicity being the most important correlate in both males (ME = -0.060) and females (ME = -0.133). The 'sequential, two-step process' performed better in females (ρ = -0.364, P < 0.001) than in males (ρ = 0.154). The degree to which people undertake the recommended level of PA through vigorous activity varies between males and females, and the process that best predicts such decisions, i.e. whether it is a sequential, two-step process or a single-step choice, also differs between males and females. Understanding this should help to identify subgroups that are less likely to meet the recommended level of PA (and hence more likely to benefit from any PA promotion intervention).
NASA Astrophysics Data System (ADS)
Mehdi, H.; Monier, G.; Hoggan, P. E.; Bideux, L.; Robert-Goumet, C.; Dubrovskii, V. G.
2018-01-01
The high density of interface and surface states that causes the strong Fermi pinning observed on GaAs surfaces can be reduced by depositing GaN ultra-thin films on GaAs. To further improve this passivation, it is necessary to investigate the nitridation phenomena by identifying the distinct steps occurring during the process and to understand and quantify the growth kinetics of GaAs nitridation under different conditions. Nitridation of the cleaned GaAs substrate was performed using a N2 plasma source. Two approaches have been combined. First, an AR-XPS (Angle Resolved X-ray Photoelectron Spectroscopy) study was carried out to determine the chemical environments of the Ga, As and N atoms and the composition depth profile of the GaN thin film, which allows us to summarize the nitridation process in three steps. Moreover, the treatment temperature and time have been investigated and show a significant impact on the formation of the GaN layer. The second approach is a refined growth kinetic model which better describes GaN growth as a function of nitridation time. This model clarifies the exchange mechanism of arsenic with nitrogen atoms at the GaN/GaAs interface and the phenomenon of quasi-saturation of the process observed experimentally.
Lee, Charlotte Tsz-Sum; Doran, Diane Marie
2017-06-01
Patient safety is compromised by medical errors and adverse events related to miscommunications among healthcare providers. Communication among healthcare providers is affected by human factors, such as interpersonal relations. Yet, discussions of interpersonal relations and communication are lacking in the healthcare team literature. This paper proposes a theoretical framework that explains how interpersonal relations among healthcare team members affect communication and team performance, such as patient safety. We synthesized studies from health and social science disciplines to construct a theoretical framework that explicates the links among these constructs. From our synthesis, we identified two relevant theories: the framework on interpersonal processes based on the social relation model, and the theory of relational coordination. The former involves three steps: perception, evaluation, and feedback; the latter captures relational communicative behavior. We propose that manifestations of provider relations are embedded in the third step of the framework on interpersonal processes: feedback. Thus, varying team-member relationships lead to varying collaborative behavior, which affects patient-safety outcomes via a change in team communication. The proposed framework offers new perspectives for understanding how workplace relations affect healthcare team performance. The framework can be used by nurses, administrators, and educators to improve patient safety, team communication, or to resolve conflicts.
Performance Enhancements Under Dual-task Conditions
NASA Technical Reports Server (NTRS)
Kramer, A. F.; Wickens, C. D.; Donchin, E.
1984-01-01
Research on dual-task performance has been concerned with delineating the antecedent conditions which lead to dual-task decrements. Capacity models of attention, which propose that a hypothetical resource structure underlies performance, have been employed as predictive devices. These models predict that tasks which require different processing resources can be more successfully time shared than tasks which require common resources. The conditions under which such dual-task integrality can be fostered were assessed in a study in which three factors likely to influence the integrality between tasks were manipulated: inter-task redundancy, the physical proximity of tasks and the task relevant objects. Twelve subjects participated in three experimental sessions in which they performed both single and dual-tasks. The primary task was a pursuit step tracking task. The secondary tasks required the discrimination between different intensities or different spatial positions of a stimulus. The results are discussed in terms of a model of dual-task integrality.
NASA Astrophysics Data System (ADS)
Saletti, M.; Molnar, P.; Hassan, M. A.
2017-12-01
Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming, and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, in which sediment is organized in patches and the entrainment, transport, and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned above are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) in the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
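As a toy illustration of jamming-controlled step formation (our own minimal rules, not the published CAST2 model), consider a one-dimensional channel in which deposition becomes more likely next to taller neighbours:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
z = rng.integers(0, 3, N).astype(float)      # bed elevation (grain units)

def flood(z, p_entrain=0.2, p_jam=0.8, p_base=0.1):
    """One flood: grains are entrained, hop downstream, and deposit;
    a taller cell ahead (jamming) strongly raises deposit probability."""
    for i in range(N - 1):
        if z[i] > 0 and rng.random() < p_entrain:
            z[i] -= 1.0                      # entrain one grain
            j = i + 1
            while j < N - 1:
                jammed = z[j + 1] >= z[j] + 1
                if rng.random() < (p_jam if jammed else p_base):
                    break
                j += 1
            z[j] += 1.0                      # deposit
    return z

for _ in range(500):                         # repeated flood events
    flood(z)
steps = np.sum(z[:-1] - z[1:] >= 2)          # drops of >= 2 grain units
print("step density:", steps / (N - 1))
```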
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective: This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods: The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results: Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions: The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI in two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
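The second inverse step can be illustrated with the truncated inverse filter mentioned above. A minimal sketch follows; the grid size, threshold, and test object are our choices, and the split Bregman TV solver favoured in the paper is more involved:

```python
import numpy as np

n = 64
kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n),) * 3, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf                     # avoid division by zero at DC
D = 1.0 / 3.0 - kz**2 / k2               # unit dipole kernel in k-space
D[0, 0, 0] = 0.0

chi = np.zeros((n, n, n))
chi[24:40, 24:40, 24:40] = 1.0            # predefined susceptibility source
b = np.fft.ifftn(D * np.fft.fftn(chi)).real   # forward step: fieldmap

t = 0.1                                   # truncation threshold
mask = np.abs(D) > t                      # keep only well-conditioned modes
D_inv = np.zeros_like(D)
D_inv[mask] = 1.0 / D[mask]
chi_rec = np.fft.ifftn(D_inv * np.fft.fftn(b)).real

corr = np.corrcoef(chi.ravel(), chi_rec.ravel())[0, 1]
print("spatial correlation:", corr)
```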
Coupling image processing and stress analysis for damage identification in a human premolar tooth.
Andreaus, U; Colloca, M; Iacoviello, D
2011-08-01
Non-carious cervical lesions are characterized by the loss of dental hard tissue at the cement-enamel junction (CEJ). Excessive stresses are therefore generated in the cervical region of the tooth that cause disruption of the bonds between the hydroxyapatite crystals, leading to crack formation and eventual loss of enamel and the underlying dentine. Damage identification was performed by image analysis techniques and allowed quantitative assessment of changes in teeth. A computerized two-step procedure was generated and applied to the first left maxillary human premolar. In the first step, dental images were digitally processed by a segmentation method in order to identify the damage. The morphological properties considered were the enamel thickness, the total enamel area, and the number of fragments into which the enamel is chipped. The information retrieved by processing the section images made it possible to direct the stress investigation toward selected portions of the tooth. In the second step, a three-dimensional finite element model based on CT images of both the tooth and the periodontal ligament was employed to compare the changes occurring in the stress distributions in normal occlusion and malocclusion. The stress states were analyzed exclusively in the critical zones identified in the first step. The risks of failure at the CEJ and of crack initiation at the dentin-enamel junction were also estimated through quantification of the first and third principal stresses, the von Mises stress, and the normal and tangential stresses. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
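For readers unfamiliar with the underlying machinery, the following bare-bones Gillespie direct-method simulation of a single well-mixed reaction illustrates the class of stochastic kinetics such platforms solve. It uses no STEPS API; STEPS itself adds diffusion and complex 3-D boundary conditions on top:

```python
import numpy as np

# Stochastic simulation of the reversible reaction A + B <-> C
rng = np.random.default_rng(0)
kf, kr = 0.001, 0.05                 # assumed forward/reverse rate constants
A, B, C = 100, 120, 0                # initial molecule counts
t, t_end = 0.0, 100.0
times, counts = [0.0], [C]

while t < t_end:
    a1, a2 = kf * A * B, kr * C      # reaction propensities
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)   # waiting time to the next reaction
    if rng.random() * a0 < a1:       # pick which reaction fires
        A, B, C = A - 1, B - 1, C + 1
    else:
        A, B, C = A + 1, B + 1, C - 1
    times.append(t)
    counts.append(C)

print("final C:", counts[-1])        # fluctuates around binding equilibrium
```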
Phase diagram and criticality of the two-dimensional prisoner's dilemma model
NASA Astrophysics Data System (ADS)
Santos, M.; Ferreira, A. L.; Figueiredo, W.
2017-07-01
The stationary states of the prisoner's dilemma model are studied on a square lattice taking into account the role of a noise parameter in the decision-making process. Only first neighboring players—defectors and cooperators—are considered in each step of the game. Through Monte Carlo simulations we determined the phase diagrams of the model in the plane noise versus the temptation to defect for a large range of values of the noise parameter. We observed three phases: cooperators and defectors absorbing phases, and a coexistence phase between them. The phase transitions as well as the critical exponents associated with them were determined using both static and dynamical scaling laws.
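A minimal Monte Carlo sketch of such a model follows; we assume the common weak-dilemma payoff convention (R = 1, P = S = 0, T = b) and a Fermi update rule with noise parameter K, which may differ in detail from the paper's rules:

```python
import numpy as np

rng = np.random.default_rng(2)
L, b, K = 50, 1.05, 0.1                    # lattice size, temptation, noise
s = rng.integers(0, 2, (L, L))             # 1 = cooperator, 0 = defector
NEIGH = ((1, 0), (-1, 0), (0, 1), (0, -1)) # first neighbours only

def payoff(s, i, j):
    """Accumulated payoff of player (i, j) against its four neighbours."""
    total = 0.0
    for di, dj in NEIGH:
        nb = s[(i + di) % L, (j + dj) % L]
        if s[i, j] == 1:
            total += 1.0 if nb == 1 else 0.0   # R or S
        else:
            total += b if nb == 1 else 0.0     # T or P
    return total

for _ in range(100 * L * L):               # elementary Monte Carlo steps
    i, j = rng.integers(0, L, 2)
    di, dj = NEIGH[rng.integers(0, 4)]
    ni, nj = (i + di) % L, (j + dj) % L
    if s[i, j] != s[ni, nj]:
        # Fermi rule: adopt the neighbour's strategy with a probability
        # controlled by the payoff difference and the noise K
        p = 1.0 / (1.0 + np.exp((payoff(s, i, j) - payoff(s, ni, nj)) / K))
        if rng.random() < p:
            s[i, j] = s[ni, nj]

print("cooperator fraction:", s.mean())
```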
Trainer, Asa; Hedberg, Thomas; Feeney, Allison Barnard; Fischer, Kevin; Rosche, Phil
2017-01-01
Advances in information technology triggered a digital revolution that holds promise of reduced costs, improved productivity, and higher quality. To ride this wave of innovation, manufacturing enterprises are changing how product definitions are communicated – from paper to models. To achieve industry's vision of the Model-Based Enterprise (MBE), the MBE strategy must include model-based data interoperability from design to manufacturing and quality in the supply chain. The Model-Based Definition (MBD) is created by the original equipment manufacturer (OEM) using Computer-Aided Design (CAD) tools. This information is then shared with the supplier so that they can manufacture and inspect the physical parts. Today, suppliers predominantly use Computer-Aided Manufacturing (CAM) and Coordinate Measuring Machine (CMM) models for these tasks. Traditionally, the OEM has provided design data to the supplier in the form of two-dimensional (2D) drawings, but may also include a three-dimensional (3D)-shape-geometry model, often in a standards-based format such as ISO 10303-203:2011 (STEP AP203). The supplier then creates the respective CAM and CMM models and machine programs to produce and inspect the parts. In the MBE vision for model-based data exchange, the CAD model must include product-and-manufacturing information (PMI) in addition to the shape geometry. Today's CAD tools can generate models with embedded PMI. And, with the emergence of STEP AP242, a standards-based model with embedded PMI can now be shared downstream. The on-going research detailed in this paper seeks to investigate three concepts. First, that the ability to utilize a STEP AP242 model with embedded PMI for CAD-to-CAM and CAD-to-CMM data exchange is possible and valuable to the overall goal of a more efficient process. Second, the research identifies gaps in tools, standards, and processes that inhibit industry's ability to cost-effectively achieve model-based-data interoperability in the pursuit of the MBE vision. Finally, it also seeks to explore the interaction between CAD and CMM processes and determine if the concept of feedback from CAM and CMM back to CAD is feasible. The main goal of our study is to test the hypothesis that model-based-data interoperability from CAD-to-CAM and CAD-to-CMM is feasible through standards-based integration. This paper presents several barriers to model-based-data interoperability. Overall, the project team demonstrated the exchange of product definition data between CAD, CAM, and CMM systems using standards-based methods. While gaps in standards coverage were identified, the gaps should not stop industry's progress toward MBE. The results of our study provide evidence in support of an open-standards method to model-based-data interoperability, which would provide maximum value and impact to industry. PMID:28691120
Three dimensional modeling and dynamic analysis of four-wheel-steering vehicles
NASA Astrophysics Data System (ADS)
Hu, Haiyan; Han, Qiang
2003-02-01
The paper presents a nonlinear dynamic model with 9 degrees of freedom for four-wheel-steering vehicles. Compared with those in previous studies, this model includes the pitch and roll of the vehicle body, the motion of the 4 wheels in the accelerating or braking process, the nonlinear coupling of the vehicle body and unsprung part, as well as air drag and wind effects. As a result, the model can be used for the analysis of various maneuvers of four-wheel-steering vehicles. In addition, previous models can be considered special cases of this model. The paper gives case studies of the dynamic performance of a four-wheel-steering vehicle under step input and saw-tooth input of the steering angle applied to the front wheels, respectively.
Densitometric evaluation of three intra-oral radiographic films.
Seeliger, J E; Prinsloo, J J
1989-05-01
The radiographic, or diagnostic, quality of the processed radiograph depends upon a number of factors, one of the most important being the characteristics of the film. The purpose of this study was to determine which of three intra-oral radiographic films, obtainable in this country, would give the best results in terms of density range, speed, contrast and base plus fog values. Agfa Dentus, Flow X-Ray and Kodak (all speed group D films) were exposed, using a calibrated G.E. 1000 x-ray generator at 65 kVp, 10 mA and 50 impulses (1 second) exposure time. The target-film-distance was 40 cm, the total filtration 2.0 mm Aluminium and the half-value layer 2.7 mm Aluminium equivalent. An aluminium step-wedge with 8 steps, in steps of 1.5 mm, and a natural premolar tooth, with a carious lesion, embedded in acrylic, were used as phantoms. An 8 mm-thick layer of base-plate wax and a 3 mm-thick lead plate were used to simulate tissue-scatter and prevent back-scatter, respectively. To determine the base plus fog value, an unexposed film from the same batch was processed simultaneously with each of the three films evaluated. All processing was done in a Dürr AC 245 L processor with automatic replenishment and a 6-minute cycle. The processing chemicals, viz., Kolchem High Stability X-ray developer and fixer, were mixed and used in strict accordance with the manufacturer's recommendations. The radiographic densities of each step of the step-wedge, and of carious and normal dentine of the phantom tooth, were determined by means of an RMI Digital Densitometer.(ABSTRACT TRUNCATED AT 250 WORDS)
Recovery of permittivity and depth from near-field data as a step toward infrared nanotomography.
Govyadinov, Alexander A; Mastel, Stefan; Golmar, Federico; Chuvilin, Andrey; Carney, P Scott; Hillenbrand, Rainer
2014-07-22
The increasing complexity of composite materials structured on the nanometer scale requires highly sensitive analytical tools for nanoscale chemical identification, ideally in three dimensions. While infrared near-field microscopy provides high chemical sensitivity and nanoscopic spatial resolution in two dimensions, the quantitative extraction of material properties of three-dimensionally structured samples has not been achieved yet. Here we introduce a method to perform rapid recovery of the thickness and permittivity of simple 3D structures (such as thin films and nanostructures) from near-field measurements, and provide its first experimental demonstration. This is accomplished via a novel nonlinear invertible model of the imaging process, taking advantage of the near-field data recorded at multiple harmonics of the oscillation frequency of the near-field probe. Our work enables quantitative nanoscale-resolved optical studies of thin films, coatings, and functionalization layers, as well as the structural analysis of multiphase materials, among others. It represents a major step toward the further goal of near-field nanotomography.
Varet, Hugo; Brillet-Guéguen, Loraine; Coppée, Jean-Yves; Dillies, Marie-Agnès
2016-01-01
Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and testing for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations, of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors arising from misuse of the proposed methods. SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR, respectively). By tuning a small number of parameters and executing one of the R scripts, users have access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypothesis checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process, it fits the requirements of reproducible research.
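As an illustration of the normalization step (shown here in Python rather than R, and independent of SARTools), DESeq2-style median-of-ratios size factors can be computed as follows:

```python
import numpy as np

def size_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix:
    each sample's factor is the median of its ratios to a pseudo-reference
    built from gene-wise geometric means."""
    counts = counts.astype(float)
    expressed = (counts > 0).all(axis=1)     # genes with no zero counts
    log_counts = np.log(counts[expressed])
    log_geo = log_counts.mean(axis=1)        # gene-wise geometric mean (log)
    ratios = log_counts - log_geo[:, None]
    return np.exp(np.median(ratios, axis=0))

counts = np.array([[100, 210, 95],
                   [ 50, 105, 48],
                   [  0,  12,  3],           # zero-count gene is excluded
                   [ 80, 160, 75]])
sf = size_factors(counts)
normalized = counts / sf                     # library-size-corrected counts
print(sf)   # the second sample is ~2x deeper, so its factor is ~2
```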
Caballero, Santiago; Nieto, Sandra; Gajardo, Rodrigo; Jorquera, Juan I
2010-07-01
A new human liquid intravenous immunoglobulin product, Flebogamma DIF, has been developed. This IgG is purified from human plasma by cold ethanol fractionation, PEG precipitation and ion exchange chromatography. The manufacturing process includes three different specific pathogen clearance (inactivation/removal) steps: pasteurization, solvent/detergent treatment and Planova nanofiltration with a pore size of 20 nm. This study evaluates the pathogen clearance capacity of seven steps in the production process for a wide range of viruses through spiking experiments: the three specific steps mentioned above and also four more production steps. Infectivity of samples was measured using a Tissue Culture Infectious Dose assay (log(10) TCID(50)) or Plaque Forming Units assay (log(10) PFU). Validation studies demonstrated that each specific step cleared more than 4 log(10) for all viruses assayed. An overall viral clearance between > or =13.33 log(10) and > or =25.21 log(10), was achieved depending on the virus and the number of steps studied for each virus. It can be concluded that Flebogamma DIF has a very high viral safety profile. 2010 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
Modelization of three-layered polymer coated steel-strip ironing process using a neural network
NASA Astrophysics Data System (ADS)
Sellés, M. A.; Schmid, S. R.; Sánchez-Caballero, S.; Seguí, V. J.; Reig, M. J.; Pla, R.
2012-04-01
An alternative to the traditional can manufacturing process is to use plastic-laminated rolled steels as base stocks. This material consists of pre-heated steel coils that are sandwiched between one or two sheets of polymer. The heated sheets are then immediately quenched, which yields a strong bond between the layers. Such polymer-coated steels were investigated by Jaworski [1,2] and Sellés [3], and found to be suitable for ironing under carefully controlled conditions. A novel multi-layer polymer-coated steel has been developed for container applications. This material presents an interesting extension to previous research on polymer-laminated steel in ironing, and offers several advantages over the previous material (Sellés [3]). This paper presents a model of the ironing process (the most crucial step in can manufacturing) built using a neural network.
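A minimal sketch of such a process model, using scikit-learn's MLPRegressor on synthetic data; the input variables (reduction, punch speed, coating thickness) and the target function are our assumptions, standing in for the experimental ironing data used in the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: reduction ratio, punch speed (mm/s), coating thickness (um)
X = rng.uniform([0.1, 50, 20], [0.4, 250, 80], size=(300, 3))
# stand-in for a measured ironing force (N): smooth nonlinear response
y = (900 * X[:, 0] + 0.4 * X[:, 1] - 2.0 * X[:, 2]
     + 300 * X[:, 0] * np.log(X[:, 2] / 20) + rng.normal(0, 5, 300))

model = make_pipeline(
    StandardScaler(),                      # scale inputs for stable training
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X[:250], y[:250])
print("R^2 on held-out runs:", model.score(X[250:], y[250:]))
```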
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial. Estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods neglecting the variability in the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, involving, for each centre, a recruitment process modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, rates are rarely constant in time; there are breaks in recruitment (for instance, weekends or holidays). Such information can be collected and included in a model with piecewise-constant rate functions, yielding an inhomogeneous Cox model. The estimation of the trial duration is then much more difficult. Three strategies for computing the expected trial duration are proposed: considering all breaks, considering only large breaks, and ignoring breaks. The bias of these estimation procedures is assessed by means of simulation studies under three break-simulation scenarios. All three strategies yield estimates with very small bias. Moreover, the strategy with the best predictive performance and the smallest bias is the one that ignores breaks. This result is important as, in practice, collecting break data is pretty hard to manage.
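The Poisson-gamma model is straightforward to simulate. A minimal Monte Carlo sketch of the expected-duration computation, ignoring breaks (the strategy reported above to have the smallest bias); the prior parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
C, target = 40, 600                  # number of centres, patients needed
shape, scale = 2.0, 0.05             # assumed gamma prior on centre rates
                                     # (patients per day per centre)

def trial_duration():
    rates = rng.gamma(shape, scale, C)      # one constant rate per centre
    total = rates.sum()                     # pooled Poisson intensity
    # The time to the target-th arrival of a Poisson process with
    # intensity 'total' is Gamma(target, 1/total)-distributed.
    return rng.gamma(target, 1.0 / total)

durations = np.array([trial_duration() for _ in range(5000)])
print("expected duration (days):", durations.mean())
print("90% interval:", np.percentile(durations, [5, 95]))
```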
Product analysis illuminates the final steps of IES deletion in Tetrahymena thermophila
Saveliev, Sergei V.; Cox, Michael M.
2001-01-01
DNA sequences (IES elements) eliminated from the developing macronucleus in the ciliate Tetrahymena thermophila are released as linear fragments, which have now been detected and isolated. A PCR-mediated examination of fragment end structures reveals three types of strand scission events, reflecting three steps in the deletion process. New evidence is provided for two steps proposed previously: an initiating double-stranded cleavage, and strand transfer to create a branched deletion intermediate. The fragment ends provide evidence for a previously uncharacterized third step: the branched DNA strand is cleaved at one of several defined sites located within 15–16 nucleotides of the IES boundary, liberating the deleted DNA in a linear form. PMID:11406601
Measurement of the bystander intervention model for bullying and sexual harassment.
Nickerson, Amanda B; Aloe, Ariel M; Livingston, Jennifer A; Feeley, Thomas Hugh
2014-06-01
Although peer bystanders can exacerbate or prevent bullying and sexual harassment, research has been hindered by the absence of a validated assessment tool to measure the process and sequential steps of the bystander intervention model. A measure was developed based on the five steps of Latané and Darley's (1970) bystander intervention model applied to bullying and sexual harassment. Confirmatory factor analysis with a sample of 562 secondary school students confirmed the five-factor structure of the measure. Structural equation modeling revealed that all the steps were influenced by the previous step in the model, as the theory proposed. In addition, the bystander intervention measure was positively correlated with empathy, attitudes toward bullying and sexual harassment, and awareness of bullying and sexual harassment facts. This measure can be used for future research and to inform intervention efforts related to the process of bystander intervention for bullying and sexual harassment. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
User's guide to the Variably Saturated Flow (VSF) process to MODFLOW
Thoms, R. Brad; Johnson, Richard L.; Healy, Richard W.
2006-01-01
A new process for simulating three-dimensional (3-D) variably saturated flow (VSF) using Richards' equation has been added to the 3-D modular finite-difference ground-water model MODFLOW. Five new packages are presented here as part of the VSF Process--the Richards' Equation Flow (REF1) Package, the Seepage Face (SPF1) Package, the Surface Ponding (PND1) Package, the Surface Evaporation (SEV1) Package, and the Root Zone Evapotranspiration (RZE1) Package. Additionally, a new Adaptive Time-Stepping (ATS1) Package is presented for use by both the Ground-Water Flow (GWF) Process and VSF. The VSF Process allows simulation of flow in unsaturated media above the ground-water zone and facilitates modeling of ground-water/surface-water interactions. Model performance is evaluated by comparison to an analytical solution for one-dimensional (1-D) constant-head infiltration (Dirichlet boundary condition), field experimental data for a 1-D constant-head infiltration, laboratory experimental data for two-dimensional (2-D) constant-flux infiltration (Neumann boundary condition), laboratory experimental data for 2-D transient drainage through a seepage face, and numerical model results (VS2DT) of a 2-D flow-path simulation using realistic surface boundary conditions. A hypothetical 3-D example case also is presented to demonstrate the new capability using periodic boundary conditions (for example, daily precipitation) and varied surface topography over a larger spatial scale (0.133 square kilometer). The new model capabilities retain the modular structure of the MODFLOW code and preserve MODFLOW's existing capabilities as well as compatibility with commercial pre-/post-processors. The overall success of the VSF Process in simulating mixed boundary conditions and variable soil types demonstrates its utility for future hydrologic investigations. This report presents a new flow package implementing the governing equations for variably saturated ground-water flow, four new boundary condition packages unique to unsaturated flow, the Adaptive Time-Stepping Package for use with both the GWF Process and the new VSF Process, detailed descriptions of the input and output files for each package, and six simulation examples verifying model performance.
Janković, Bojan
2011-10-01
The non-isothermal pyrolysis kinetics of Acetocell (organosolv) and Lignoboost® (kraft) lignins, in an inert atmosphere, have been studied by thermogravimetric analysis. Using isoconversional analysis, it was concluded that the apparent activation energy for all lignins strongly depends on conversion, showing that the pyrolysis of lignins is not a single chemical process. It was identified that the pyrolysis of Acetocell and Lignoboost® lignin takes place over three reaction steps, which was confirmed by the appearance of the corresponding isokinetic relationships (IKR). It was found that the major pyrolysis stage of both lignins is characterized by stilbene pyrolysis reactions, which are subsequently followed by decomposition reactions of products derived from the stilbene pyrolytic process. It was concluded that the non-isothermal pyrolysis of Acetocell and Lignoboost® lignins is best described by n-th order (n > 1) reaction kinetics, using the Weibull mixture model (as a distributed reactivity model) with alternating shape parameters. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Vairo, Daniel M.
1998-01-01
The removal and installation of sting-mounted wind tunnel models in the National Transonic Facility (NTF) is a multi-task process having a large impact on the annual throughput of the facility. Approximately ten model removal and installation cycles occur annually at the NTF with each cycle requiring slightly over five days to complete. The various tasks of the model changeover process were modeled in Microsoft Project as a template to provide a planning, tracking, and management tool. The template can also be used as a tool to evaluate improvements to this process. This document describes the development of the template and provides step-by-step instructions on its use and as a planning and tracking tool. A secondary role of this document is to provide an overview of the model changeover process and briefly describe the tasks associated with it.
A unified engineering model of the first stroke in downward negative lightning
NASA Astrophysics Data System (ADS)
Nag, Amitabh; Rakov, Vladimir A.
2016-03-01
Each stroke in a negative cloud-to-ground lightning flash is composed of downward leader and upward return stroke processes, which are usually modeled individually. The first stroke leader is stepped and starts with preliminary breakdown (PB) which is often viewed as a separate process. We present the first unified engineering model for computing the electric field produced by a sequence of PB, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively charged channel extends downward in a stepped fashion during both the PB and leader stages. Each step involves a current wave that propagates upward along the newly formed channel section. Once the leader attaches to ground, an upward propagating return stroke neutralizes the charge deposited along the channel. Model-predicted electric fields are in reasonably good agreement with simultaneous measurements at both near (hundreds of meters, electrostatic field component is dominant) and far (tens of kilometers, radiation field component is dominant) distances from the lightning channel. Relations between the features of computed electric field waveforms and model input parameters are examined. It appears that peak currents associated with PB pulses are similar to return stroke peak currents, and the observed variation of electric radiation field peaks produced by leader steps at different heights above ground is influenced by the ground corona space charge.
Generalized Models for Rock Joint Surface Shapes
Du, Shigui; Hu, Yunjin; Hu, Xiaofei
2014-01-01
Generalized models of joint surface shapes are the foundation for mechanism studies on the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surface shapes, generalized models for three levels of shape, named macroscopic outline, surface undulating shape, and microcosmic roughness, were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of the profile curves was used as the borderline for the division of the different levels. The study results show that the macroscopic outline has three basic forms (planar, arc-shaped, and stepped); the surface undulating shape has three basic forms (planar, undulating, and stepped); and the microcosmic roughness has two basic forms (smooth and rough). PMID:25152901
Automatic Texture Reconstruction of 3d City Model from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation, and texture blending. First, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture without resampling. Experimental results show that our method effectively mitigates the occurrence of texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
Coastal Algorithms and On-Demand Processing- The Lessons Learnt from CoastColour for Sentinel 3
NASA Astrophysics Data System (ADS)
Brockmann, Carsten; Doerffer, Roland; Boettcher, Martin; Kramer, Uwe; Zuhlke, Marco; Pinnock, Simon
2015-12-01
The ESA DUE CoastColour project was initiated to provide water quality products for important coastal zones globally. A new five-component bio-optical model was developed and used in a three-step approach for regional processing of ocean colour data. The L1P step consists of radiometric and geometric system corrections, and top-of-atmosphere pixel classification including cloud screening, sun glint risk masking, and detection of floating vegetation. The second step performs the atmospheric correction and provides the L2R product, which comprises marine reflectances with error characterisation and normalisation. The third step is the in-water processing, which produces IOPs, the attenuation coefficient, and water constituent concentrations. Each of these steps will benefit from the additional bands on OLCI. The five-component bio-optical model will already be used in the standard ESA processing of OLCI, and parts of the pixel classification methods will also be included in the standard products. Other algorithm adaptations are in preparation. Another important advantage of the CoastColour approach is the highly configurable processing chain, which allows adaptation to the individual characteristics of the area of interest, the temporal window, the algorithm parametrisation, and the processing chain configuration. This flexibility is made available to data users through the CoastColour on-demand processing service. The complete global MERIS Full and Reduced Resolution data archive is accessible, covering the time range from 17 May 2002 until 8 April 2012, almost 200 TB of input data available online. The CoastColour on-demand processing service can serve as a model for hosted processing, where the software is moved to the data instead of moving the data to the users, which will be a challenge with the large amount of data coming from Sentinel 3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melintescu, A.; Galeriu, D.; Diabate, S.
2015-03-15
The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms. A fully mechanistic model is difficult to develop due to the complexity of the processes involved in tritium transfer and of the environmental conditions. First, a review of existing models (ORYZA2000, CROPTRIT and WOFOST) is made, presenting their features and limits. Second, the preparatory steps for a robust model are discussed, considering the role of dry matter and of the photosynthesis contribution to the OBT (Organically Bound Tritium) dynamics in crops.
Paszko, Tadeusz; Jankowska, Monika
2018-06-18
Laboratory adsorption and degradation studies were carried out to determine the effect of time-dependent adsorption on propiconazole degradation rates in samples from three Polish Luvisols. Strong propiconazole adsorption (organic carbon normalized adsorption coefficients Koc in the range of 1217-7777 mL/g) was observed in batch experiments, with a typical biphasic mechanism: a fast initial step followed by a time-dependent step, which finished within 48 h in the majority of soils. The time-dependent step observed in incubation experiments was longer (duration from 5 to 23 d), and its contribution to total adsorption was from 20% to 34%. The half-lives obtained at 25 °C and 40% of the maximum water holding capacity of the soil were in the range of 34.7-112.9 d in the Ap horizon and 42.3-448.8 d for subsoils. The very strong correlations between degradation rates in pore water, soil organic carbon, and soil microbial activity indicated that microbial degradation of propiconazole was most likely the only significant process responsible for the decay of this compound under aerobic conditions throughout the examined soil profiles. Modeling of the processes showed that only models coupling adsorption and degradation were able to correctly describe the experimental data. The analysis of the bioavailability factor values showed that degradation was not limited by the rate of propiconazole desorption from soil; rather, sorption affected the degradation rate by decreasing the compound's availability to microorganisms. Copyright © 2018. Published by Elsevier Inc.
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in this step, since GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be repeated several times when comparing different management scenarios; in addition, uncertainty analyses and sensitivity tests are made during the second and third steps. The feasibility of these steps can therefore be decisive for the extent of the evaluation: low feasibility can lead to choosing not to evaluate uncertainty, or to limiting the number of scenario comparisons. Several computer models have been developed over time to evaluate flood risk, and GIS software is widely used in flood risk analysis: to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are the possibility of "easily" repeating the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets that can be reused at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time: the professional should be proficient both in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary to correctly evaluate flood damages, and updating and improving the evaluation over time is a difficult task. Automating this process should bring great advances in flood management studies, especially for public utilities. This study has two specific objectives: (1) to show the entire process of automating the second and third steps of flood damage evaluations; and (2) to analyse the resulting potential gains in terms of the time and expertise needed for the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realised.
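The automated second and third steps (overlaying hazard on vulnerability, then applying damage functions) reduce to a few array operations once the layers are rasterised. The sketch below is a minimal stand-in using numpy; the grids, land-use classes, depth-damage curves, and asset values are invented for illustration and do not come from the study.

```python
import numpy as np

depth = np.array([[0.0, 0.4, 1.2],
                  [0.2, 0.8, 2.5],
                  [0.0, 0.1, 0.6]])      # hazard layer: flood depth (m) per cell
landuse = np.array([[0, 1, 1],
                    [0, 1, 2],
                    [0, 0, 2]])          # vulnerability layer: 0 open, 1 residential, 2 commercial

# Depth-damage curves: fraction of asset value lost as a function of depth (m).
curve_depths = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
damage_fraction = {1: np.array([0.0, 0.15, 0.35, 0.60, 0.85]),   # residential
                   2: np.array([0.0, 0.25, 0.45, 0.70, 0.95])}   # commercial
asset_value = {1: 120e3, 2: 300e3}                               # value per cell

damage = np.zeros_like(depth)
for cls, frac in damage_fraction.items():
    mask = landuse == cls
    damage[mask] = np.interp(depth[mask], curve_depths, frac) * asset_value[cls]

print(f"Total potential damage: {damage.sum():,.0f}")
```

Re-running the same script against a new hazard grid is all a scenario comparison requires, which is precisely where the operational gains accumulate.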
A Data Stream Model For Runoff Simulation In A Changing Environment
NASA Astrophysics Data System (ADS)
Yang, Q.; Shao, J.; Zhang, H.; Wang, G.
2017-12-01
Runoff simulation is of great significance for water engineering design, water disaster control, and water resources planning and management in a catchment or region. A large number of methods, including concept-based process-driven models and statistics-based data-driven models, have been proposed and widely used worldwide during the past decades. Most existing models assume that the relationship between runoff and its impacting factors is stationary. However, in a changing environment (e.g., climate change, human disturbance), this relationship usually evolves over time. In this study, we propose a data stream model for runoff simulation in a changing environment. Specifically, the proposed model works in three steps: learning a rule set, expanding rules, and simulating. The first step initializes a rule set; when a new observation arrives, the model checks which rule covers it and then uses that rule for simulation. Meanwhile, the Page-Hinckley (PH) change detection test is used to monitor the online simulation error of each rule, and if a change is detected, the corresponding rule is removed from the rule set. In the second step, any rule that covers more than a given number of instances is expanded. In the third step, a simulation model at each leaf node is learnt with a perceptron without an activation function and is updated as new observations arrive. Taking the Fuxi River catchment as a case study, we applied the model to simulate monthly runoff in the catchment. Results show that an abrupt change is detected in the year 1997 by the Page-Hinckley change detection test, which is consistent with the historic record of flooding. In addition, the model achieves good simulation results, with an RMSE of 13.326, and outperforms many established methods. The findings demonstrate that the proposed data stream model provides a promising way to simulate runoff in a changing environment.
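The Page-Hinckley test at the core of the rule-removal step is short enough to state in full. The sketch below is a generic textbook implementation for detecting an upward drift in a stream of simulation errors; the threshold values and the toy error stream are illustrative, not those used in the study.

```python
class PageHinkley:
    """Detect an upward drift in a data stream (e.g., absolute simulation errors).
    delta is the magnitude of change tolerated; lam is the alarm threshold."""
    def __init__(self, delta=0.005, lam=50.0):
        self.delta, self.lam = delta, lam
        self.mean, self.n = 0.0, 0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n        # running mean of the stream
        self.cum += x - self.mean - self.delta       # cumulative deviation m_t
        self.cum_min = min(self.cum_min, self.cum)   # running minimum M_t
        return (self.cum - self.cum_min) > self.lam  # alarm when m_t - M_t > lam

# Usage: feed each rule's error; drop the rule when update() returns True.
ph = PageHinkley(delta=0.01, lam=5.0)
for err in [0.1] * 200 + [0.9] * 60:                 # simulated regime shift
    if ph.update(err):
        print("change detected")
        break
```

With these toy numbers the alarm fires within a handful of post-shift samples.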
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Deutschmann, Olaf
Direct internal reforming in a solid oxide fuel cell (SOFC) increases the overall efficiency of the system. The present study focuses on the chemical and electrochemical processes in an internally reforming anode-supported SOFC button cell running on humidified CH4 (3% H2O). The computational approach employs a detailed multi-step model for the heterogeneous chemistry in the anode, a modified Butler-Volmer formalism for the electrochemistry, and the Dusty Gas Model (DGM) for the porous media transport. Two-dimensional elliptic model equations are solved for a button cell configuration. The electrochemical model assumes hydrogen to be the only electrochemically active species. The predicted cell performances are compared with experimental reports. The results show that the model predictions are in good agreement with the experimental observations, except for the open-circuit potentials. Furthermore, the steam content in the anode feed stream is found to have a remarkable effect on the resulting overpotential losses and on the surface coverages of the various species at the three-phase boundary.
NASA Astrophysics Data System (ADS)
Mao, Y.; Crow, W. T.; Nijssen, B.
2017-12-01
Soil moisture (SM) plays an important role in runoff generation, both by partitioning infiltration and surface runoff during rainfall events and by controlling the rate of subsurface flow during inter-storm periods. Therefore, more accurate SM state estimation in hydrologic models is potentially beneficial for streamflow prediction. Various previous studies have explored the potential of assimilating SM data into hydrologic models for streamflow improvement. These studies have drawn inconsistent conclusions, ranging from significantly improved runoff via SM data assimilation (DA) to limited or even degraded runoff. They commonly treat the whole assimilation procedure as a black box without separating the contribution of each step, making it difficult to attribute the underlying causes of runoff improvement (or the lack thereof). In this study, we decompose the overall DA process into three steps by answering the following questions (the 3-step framework): 1) How much can assimilation of surface SM measurements improve the surface SM state in a hydrologic model? 2) How much does the surface SM improvement propagate to deeper layers? 3) How much does the (surface and deeper-layer) SM improvement propagate into runoff improvement? A synthetic twin experiment is carried out in the Arkansas-Red River basin (~600,000 km2), where a synthetic "truth" run, an open-loop run (without DA), and a DA run (where synthetic surface SM measurements are assimilated) are generated. All model runs are performed at 1/8 degree resolution over a 10-year period using the Variable Infiltration Capacity (VIC) hydrologic model at a 3-hourly time step. For the DA run, the ensemble Kalman filter (EnKF) method is applied. The updated surface and deeper-layer SM states with DA are compared to the open-loop SM to quantitatively evaluate the first two steps in the framework. To quantify the third step, a set of perfect-state runs is generated where the "true" SM states are directly inserted into the model to assess the maximum possible runoff improvement that can be achieved by improving SM states alone. Our results show that the 3-step framework is able to effectively identify the potential as well as the bottlenecks of runoff improvement, and to point out the cases where runoff improvement via assimilation of surface SM is prone to failure.
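The EnKF analysis step that drives questions 1 and 2 can be written in a few lines. The sketch below is a generic perturbed-observation EnKF update for a layered soil column, where deeper layers are updated only through their ensemble cross-covariance with the observed surface layer; the dimensions, observation operator, and error variance are toy assumptions, not the VIC configuration.

```python
import numpy as np

def enkf_update(X, y, H, r):
    """Perturbed-observation EnKF analysis.
    X: (n_state, n_ens) forecast ensemble; y: scalar surface-SM observation;
    H: (1, n_state) observation operator; r: observation error variance."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    HA = H @ A
    P_HT = A @ HA.T / (n - 1)                   # cross-covariance with the obs
    HPH = HA @ HA.T / (n - 1) + r               # innovation variance
    K = P_HT / HPH                              # Kalman gain, (n_state, 1)
    y_pert = y + np.random.normal(0.0, np.sqrt(r), n)   # perturbed observations
    return X + K @ (y_pert - H @ X)             # analysis ensemble

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.4, size=(3, 32))         # 3 soil layers, 32 members
H = np.array([[1.0, 0.0, 0.0]])                 # only the surface layer is observed
Xa = enkf_update(X, y=0.25, H=H, r=0.02 ** 2)
print(Xa.mean(axis=1))                          # layer means pulled toward the obs
```

How strongly the deeper layers move depends entirely on the forecast cross-covariance, which is exactly the propagation questioned in step 2.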
Schaefer, C; Lecomte, C; Clicq, D; Merschaert, A; Norrant, E; Fotiadu, F
2013-09-01
The final step of an active pharmaceutical ingredient (API) manufacturing synthesis process consists of a crystallization during which the API and residual solvent contents have to be quantified precisely in order to reach a predefined seeding point. A feasibility study was conducted to demonstrate the suitability of on-line NIR spectroscopy to control this step, in line with the new version of the European Medicines Agency (EMA) guideline [1]. A quantitative method was developed at laboratory scale using statistical design of experiments (DOE) and multivariate data analysis such as principal component analysis (PCA) and partial least squares (PLS) regression. NIR models were built to quantify the API in the range of 9-12% (w/w) and the residual methanol in the range of 0-3% (w/w). To improve the predictive ability of the models, the development procedure encompassed outlier elimination, optimum model rank definition, and spectral range and spectral pre-treatment selection. Conventional criteria, such as the number of PLS factors, R2, and the root mean square errors of calibration, cross-validation and prediction (RMSEC, RMSECV, RMSEP), enabled the selection of three candidate models. These models were tested in the industrial pilot plant during three technical campaigns, and the results of the most suitable models were evaluated against the chromatographic reference methods. A maximum relative bias of 2.88% was obtained with respect to the API target content. Absolute biases of 0.01 and 0.02% (w/w), respectively, were achieved at methanol content levels of 0.10 and 0.13% (w/w). The repeatability was assessed as sufficient for the on-line monitoring of the two analytes. The present feasibility study confirmed the possibility of using on-line NIR spectroscopy as a PAT tool to monitor in real time both the API and the residual methanol contents, in order to control the seeding of an API crystallization at industrial scale. Furthermore, the successful scale-up of the method proved its capability to be implemented in the manufacturing plant with the launch of the new API process. Copyright © 2013 Elsevier B.V. All rights reserved.
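The PLS calibration loop described above (pre-treatment, rank selection, RMSECV) is easy to prototype. The sketch below uses scikit-learn on made-up spectra; the SNV pre-treatment, component counts, and the synthetic y-correlated signal are stand-ins for the method development, not the published models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))                # 40 calibration spectra, 200 wavelengths
y = rng.uniform(9.0, 12.0, 40)                # API content, % w/w (the DOE span)
X += np.outer(y, np.linspace(0.0, 1.0, 200))  # embed a y-correlated spectral signal

# Standard normal variate (SNV) pre-treatment, a common NIR scatter correction.
X_snv = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Rank selection by cross-validated RMSE (RMSECV), as in the abstract.
for n_comp in (1, 2, 3, 5):
    y_cv = cross_val_predict(PLSRegression(n_components=n_comp), X_snv, y, cv=10).ravel()
    print(f"{n_comp} factors: RMSECV = {np.sqrt(np.mean((y - y_cv) ** 2)):.3f} % w/w")
```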
Ríos, Sergio D; Castañeda, Joandiet; Torras, Carles; Farriol, Xavier; Salvadó, Joan
2013-04-01
Microalgae can grow rapidly and capture CO2 from the atmosphere, converting it into complex organic molecules such as lipids (a biodiesel feedstock). Economically feasible large-scale microalgae-based oil production depends on optimizing the entire production process, which can be divided into three very different but directly related steps: production, concentration, and lipid extraction/transesterification. The aim of this study is to identify the lipid extraction method that best exploits the potential of microalgal biomass obtained from two different harvesting paths: the first path used only physical concentration steps, while the second combined chemical and physical concentration steps. Three microalgae species were tested: Phaeodactylum tricornutum, Nannochloropsis gaditana, and Chaetoceros calcitrans. One-step lipid extraction-transesterification reached the same fatty acid methyl ester yield as the Bligh and Dyer method and Soxhlet extraction with n-hexane, with corresponding savings in time, cost and solvent. Copyright © 2013 Elsevier Ltd. All rights reserved.
Southern Regional Center for Lightweight Innovative Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark F.; Wang, Paul
The three major objectives of this Phase III project are: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios.
Supercritical Fluid Spray Application Process for Adhesives and Primers
2003-03-01
The basic scheme of the SFE process consists of three steps. A solvent, typically carbon dioxide, first is heated and pressurized to a supercritical... passivation step to remove contaminants and to prevent recontamination. Bok et al. (25) describe a pressure pulsation mechanism to stimulate improved... in as a liquid, and then it is heated to above its critical temperature to become a supercritical fluid. The sample is injected and dissolved into
Sanchez, Jason C; Toal, Sarah J; Wang, Zheng; Dugan, Regina E; Trogler, William C
2007-11-01
Detection of trace quantities of explosive residues plays a key role in military, civilian, and counter-terrorism applications. To advance explosives sensor technology, current methods will need to become cheaper and portable while maintaining sensitivity and selectivity. The detection of common explosives including trinitrotoluene (TNT), cyclotrimethylenetrinitramine, cyclotetramethylene-tetranitramine, pentaerythritol tetranitrate, 2,4,6-trinitrophenyl-N-methylnitramine, and trinitroglycerin may be carried out using a three-step process combining "turn-off" and "turn-on" fluorimetric sensing. This process first detects nitroaromatic explosives by their quenching of the green luminescence of polymetalloles (λem ≈ 400-510 nm). The second step places down a thin film of 2,3-diaminonaphthalene (DAN) while "erasing" the polymetallole luminescence. The final step completes the reaction of the nitramines and/or nitrate esters with DAN, resulting in the formation of a blue luminescent triazole complex (λem = 450 nm) that provides a "turn-on" response for nitramine- and nitrate ester-based explosives. Detection limits as low as 2 ng are observed. Solid-state detection of production-line explosives demonstrates the applicability of this method to real-world situations. This method offers a sensitive and selective detection process for a diverse group of the most common high explosives used in military and terrorist applications today.
Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.
Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik
2015-12-01
The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very-large-scale-integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate the excitation threshold of the model using continuous Lyapunov exponents from dynamical systems theory. In the second step, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of grayscale values in each image. The final step is a simplification of the model achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of the previous methods without increasing the size of the network.
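To make the pixel-per-neuron idea concrete, the sketch below simulates a small FitzHugh-Nagumo lattice with diffusive nearest-neighbour coupling whose strength is damped across strong grayscale gradients. The parameter values, the anisotropy weight, and the seeding of the membrane potential with the image are illustrative choices, not the paper's model.

```python
import numpy as np

def fhn_lattice(img, steps=400, dt=0.05, a=0.7, b=0.8, eps=0.08, D=1.0):
    """One FitzHugh-Nagumo neuron per pixel, coupled by a 4-neighbour Laplacian.
    Diffusion is weakened where the image gradient is strong (edges)."""
    v = img.astype(float).copy()            # membrane potential seeded by the image
    w = np.zeros_like(v)                    # recovery variable
    gy, gx = np.gradient(img.astype(float))
    aniso = np.exp(-(gx ** 2 + gy ** 2))    # smaller weight across strong gradients
    for _ in range(steps):
        lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
               np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
        v += dt * (v - v ** 3 / 3.0 - w + D * aniso * lap)   # fast variable
        w += dt * eps * (v + a - b * w)                      # slow variable
    return v

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # a bright square on black
v_final = fhn_lattice(img)
print(v_final.min(), v_final.max())   # edge and interior pixels settle differently
```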
Developing a Competency-Based Curriculum for a Dental Hygiene Program.
ERIC Educational Resources Information Center
DeWald, Janice P.; McCann, Ann L.
1999-01-01
Describes the three-step process used to develop a competency-based curriculum at the Caruth School of Dental Hygiene (Texas A&M University). The process involved development of a competency document (detailing three domains, nine major competencies, and 54 supporting competencies), an evaluation plan, and a curriculum inventory which defined…
Zhang, Bin; Seong, Baekhoon; Lee, Jaehyun; Nguyen, VuDat; Cho, Daehyun; Byun, Doyoung
2017-09-06
A one-step sub-micrometer-scale electrohydrodynamic (EHD) inkjet three-dimensional (3D) printing technique is proposed that is based on drop-on-demand (DOD) operation and requires no additional post-sintering process. Both numerical simulation and experimental observations proved that nanoscale Joule heating occurs at the interfaces between the charged silver nanoparticles (Ag-NPs) because of the high electrical contact resistance during the printing process; this is why an additional post-sintering process is not required. Sub-micrometer-scale 3D structures with aspect ratios above 35 were printed using the proposed technique; furthermore, designed 3D structures such as bridge-like shapes can also be printed, allowing the cost-effective fabrication of a 3D touch sensor and an ultrasensitive air flow-rate sensor. It is believed that the proposed one-step printing technique, because of its economic efficiency, may replace conventional 3D conductive-structure printing techniques that require a post-sintering process.
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over the ranges of temperature and heating rate applicable to the furnace of interest. In this paper, six simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms include three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of the total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages of each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
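As a baseline for what such simple forms look like in practice, the sketch below integrates the classic Kobayashi two-competing-step devolatilization model along a constant heating ramp; the rate constants are commonly quoted textbook values, not the modified models fitted in this paper. It reproduces the qualitative heating-rate dependence of the ultimate yield that the comparison above probes.

```python
import numpy as np

R = 8.314
# Commonly quoted Kobayashi parameters: a low-temperature path with a small
# yield fraction competing against a high-temperature path with a large one.
A1, E1, a1 = 2.0e5, 1.046e5, 0.3    # s^-1, J/mol, yield fraction
A2, E2, a2 = 1.3e7, 1.674e5, 1.0

def kobayashi_yield(beta, T0=300.0, Tend=1600.0):
    """Total volatile yield after heating at beta (K/s) from T0 to Tend."""
    dt = 0.01 / beta                 # advance the temperature by 0.01 K per step
    c, v, T = 1.0, 0.0, T0           # unreacted coal, volatiles, temperature
    while T < Tend and c > 1e-8:     # heating stops at Tend (no isothermal hold)
        k1 = A1 * np.exp(-E1 / (R * T))
        k2 = A2 * np.exp(-E2 / (R * T))
        v += (a1 * k1 + a2 * k2) * c * dt
        c -= (k1 + k2) * c * dt
        T += beta * dt
    return v

for beta in (5e3, 1e5, 1e6):
    print(f"{beta:8.0f} K/s -> ultimate yield {kobayashi_yield(beta):.3f}")
```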
Using Resin-Based 3D Printing to Build Geometrically Accurate Proxies of Porous Sedimentary Rocks.
Ishutov, Sergey; Hasiuk, Franciszek J; Jobe, Dawn; Agar, Susan
2018-05-01
Three-dimensional (3D) printing is capable of transforming intricate digital models into tangible objects, allowing geoscientists to replicate the geometry of the 3D pore networks of sedimentary rocks. We provide a refined method for building scalable pore-network models ("proxies") using stereolithography 3D printing that can be used in repeated flow experiments (e.g., core flooding, permeametry, porosimetry). Typically, this workflow involves two steps, model design and 3D printing. In this study, we explore how the addition of post-processing and validation can reduce uncertainty in the accuracy of the 3D-printed proxy (the deviation of the proxy geometry from the digital model). Post-processing is a multi-step cleaning of the porous proxies involving pressurized ethanol flushing and oven drying. Proxies are validated by (1) helium porosimetry and (2) digital measurements of porosity from thin-section images of the 3D-printed proxies. The 3D printer resolution was determined by measuring the smallest open channel in 3D-printed "gap test" wafers. This resolution (400 µm) was insufficient to reproduce the porosity of Fontainebleau sandstone (∼13%) from computed tomography data at the sample's natural scale, so proxies were printed at 15-, 23-, and 30-fold magnifications to validate the workflow. Helium porosities of the 3D-printed proxies differed from the digital calculations by up to 7 percentage points. Results improved after pressurized flushing with ethanol (e.g., the porosity difference was reduced to ∼1 percentage point), though uncertainties remain regarding the nature of sub-micron "artifact" pores imparted by the 3D printing process. This study shows the benefits of including post-processing and validation in any workflow to produce porous rock proxies. © 2017, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Valle-Hernández, Julio; Romero-Paredes, Hernando; Arancibia-Bulnes, Camilo A.; Villafan-Vidales, Heidi I.; Espinosa-Paredes, Gilberto
2016-05-01
In this paper, the simulation of the thermal reduction step of hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 at high temperature, for which concentrated solar energy is used as the source of heat, and the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. For the thermochemical process, a solar reactor prototype is proposed, consisting of a thermally insulated cubic receptacle made of graphite fiber. Inside the reactor, a pyramidal arrangement of nine tungsten pipes is housed; the arrangement is oriented with respect to the focal point where the reflected energy is concentrated. The solar energy is concentrated by a high-radiative-flux solar furnace. The endothermic step is the reduction of cerium oxide to a lower-valence cerium oxide at very high temperature. The exothermic step is the hydrolysis of the cerium(III) oxide, at lower temperature inside the solar reactor, to form H2 and regenerate the initial cerium oxide. For the modeling, three sections of the pipe where the reaction occurs were considered: the carrier gas inlet, the porous medium, and the reaction products outlet. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and yield the fluid dynamics and heat transfer within the pipe. This work is part of the project "Solar Fuels and Industrial Processes" of the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).
ERIC Educational Resources Information Center
Nussli, Natalie; Oh, Kevin
2014-01-01
The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…
Comparing feed-forward versus neural gas as estimators: application to coke wastewater treatment.
Machón-González, Iván; López-García, Hilario; Rodríguez-Iglesias, Jesús; Marañón-Maison, Elena; Castrillón-Peláez, Leonor; Fernández-Nava, Yolanda
2013-01-01
Numerous papers related to the estimation of wastewater parameters have used artificial neural networks. Although successful results have been reported, various problems have arisen, such as overtraining, local minima and model instability. In this paper, two types of neural networks, feed-forward and neural gas, are trained to obtain a model that estimates the nitrate values in the effluent stream of a three-step activated sludge system (two oxic steps and one anoxic). Placing the denitrification (anoxic) step at the head of the process can force denitrifying bacteria to use internal organic carbon. However, methanol has to be added to achieve high denitrification efficiencies in some industrial wastewaters. The aim of this paper is to compare the two networks and to suggest a methodology to validate the models. The neighbourhood radius has an important influence in the neural gas approach and must be selected correctly. Neural gas performs well thanks to its cooperation-competition procedure, with no problems of stability or overfitting arising in the experimental results. The neural gas model is also interesting for use as a direct plant model because of its robustness and deterministic behaviour.
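For readers unfamiliar with the neural gas algorithm, the sketch below shows its rank-based update with the decaying neighbourhood radius the authors single out; the data, codebook size, and annealing schedules are toy choices, not the wastewater model.

```python
import numpy as np

def neural_gas(data, n_units=20, epochs=40,
               lam0=10.0, lam_end=0.5, lr0=0.5, lr_end=0.01, seed=0):
    """Train a neural gas codebook. Every unit moves toward each sample with a
    strength that decays exponentially with its distance rank (cooperation-
    competition); lam is the neighbourhood radius, annealed over training."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), n_units, replace=False)]  # init on samples
    T = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / T
            lam = lam0 * (lam_end / lam0) ** frac   # neighbourhood radius decay
            lr = lr0 * (lr_end / lr0) ** frac       # learning rate decay
            d = np.linalg.norm(W - x, axis=1)
            rank = np.argsort(np.argsort(d))        # 0 for the closest unit
            W += lr * np.exp(-rank / lam)[:, None] * (x - W)
            t += 1
    return W

data = np.random.default_rng(1).normal(size=(500, 3))  # e.g. process variables
print(neural_gas(data).shape)                           # (20, 3) codebook
```

Choosing lam0 too large keeps all units moving together (no specialisation), while too small a radius reduces the method to plain competitive learning, which is why the radius schedule matters.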
Non-equilibrium processes in p + Ag collisions at GeV energies
NASA Astrophysics Data System (ADS)
Fidelus, M.; Filges, D.; Goldenbaum, F.; Jarczyk, L.; Kamys, B.; Kistryn, M.; Kistryn, St.; Kozik, E.; Kulessa, P.; Machner, H.; Magiera, A.; Piskor-Ignatowicz, B.; Pysz, K.; Rudy, Z.; Sharma, Sushil K.; Siudak, R.; Wojciechowski, M.; PISA Collaboration
2017-12-01
The double differential spectra d²σ/dΩdE of p, d, t, 3,4,6He, 6,7,8,9Li, 7,9,10Be, and 10,11,12B were measured at seven scattering angles, 15.6°, 20°, 35°, 50°, 65°, 80°, and 100°, in the laboratory system for proton induced reactions on a silver target. Measurements were done for three proton energies: 1.2, 1.9, and 2.5 GeV. The experimental data were compared to calculations performed by means of two-step theoretical microscopic models. The first step of the reaction was described by the intranuclear cascade model incl4.6 and the second one by four different models (ABLA07, GEM2, gemini++, and SMM) using their standard parameter settings. Systematic deviations of the data from predictions of the models were observed. The deviations were especially large for the forward scattering angles and for the kinetic energy of emitted particles in the range from about 50 to 150 MeV. This suggests that some important non-equilibrium mechanism is lacking in the present day microscopic models of proton-nucleus collisions in the studied beam energy range.
Broznić, Dalibor; Jurešić, Gordana Čanadi; Milin, Čedomila
2016-06-01
The antioxidant activity of three types of pumpkin seed oil or oil mixtures (cold-pressed, produced from roasted seed paste, and salad) produced in the northern part of Croatia and the kinetics of their behaviour as free radical scavengers were investigated using DPPH˙. In addition, the involvement of the oil tocopherol isomers (α-, γ- and δ-) in different steps of DPPH˙ disappearance and their impact on the rate of reaction were analysed. The kinetics of DPPH˙ disappearance is a two-step process. In the first step, rapid disappearance of DPPH˙ occurs during the first 11 min of the reaction, depending on the oil type, followed by a slower decline in the second step. To describe the DPPH˙ disappearance kinetics, six mathematical models (mono- and biphasic) were tested. Our findings showed that γ- and δ-tocopherols affected DPPH˙ disappearance during the first step, and α-tocopherol in the second step of the reaction. Moreover, α-tocopherol demonstrated 30 times higher antioxidant activity than γ- and δ-tocopherols. The results indicated biphasic double-exponential behaviour of DPPH˙ disappearance in the oil samples, due to the complexity of reactions that involve different tocopherol isomers and proceed through different chemical pathways.
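Fitting the biphasic double-exponential form the authors selected is a one-liner with scipy. The sketch below fits a two-phase decay to a synthetic DPPH˙ trace; the rate constants and the fake "absorbance" data are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2, c):
    """Biphasic DPPH˙ disappearance: fast phase (k1) plus slow phase (k2)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + c

t = np.linspace(0.0, 60.0, 121)                       # minutes
y = biexp(t, 0.45, 0.35, 0.30, 0.02, 0.25)            # synthetic "truth"
y += np.random.default_rng(2).normal(0.0, 0.004, t.size)

popt, _ = curve_fit(biexp, t, y, p0=(0.5, 0.3, 0.3, 0.01, 0.2))
print("k_fast = %.3f /min, k_slow = %.4f /min" % (popt[1], popt[3]))
```

A mono-exponential fit to the same trace leaves structured residuals, which is the usual diagnostic for preferring the biphasic model.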
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
NASA Astrophysics Data System (ADS)
Sathyaseelan, V. S.; Rufus, A. L.; Chandramohan, P.; Subramanian, H.; Velmurugan, S.
2015-12-01
Full system decontamination of the Primary Heat Transport (PHT) system of Pressurised Heavy Water Reactors (PHWRs) resulted in low decontamination factors (DF) on stainless steel (SS) surfaces. Hence, studies were carried out with 403 SS and 410 SS, the materials of construction of the end-fitting body and the end-fitting liner tubes. Three formulations were evaluated for the dissolution of the passive films formed over these alloys: i) a two-step process consisting of oxidation and reduction reactions, ii) Dilute Chemical Decontamination (DCD), and iii) a high-temperature process. The two-step and high-temperature processes dissolved the oxide completely, while the DCD process removed only 60%. Various techniques such as XRD, Raman spectroscopy and SEM-EDX were used to assess the dissolution process. The two-step process is time-consuming and laborious, while the high-temperature process is less time-consuming and is recommended for SS decontamination.
Chappell, Stacie; Pescud, Melanie; Waterworth, Pippa; Shilton, Trevor; Roche, Dee; Ledger, Melissa; Slevin, Terry; Rosenberg, Michael
2016-10-01
The aim of this study was to use Kotter's leading change model to explore the implementation of workplace health and wellbeing initiatives. Qualitative interviews were conducted with 31 representatives of workplaces with a healthy workplace initiative. None of the workplaces had used a formal change management model when implementing their healthy workplace initiatives. Not all of the steps in Kotter's model were considered necessary, and the order of the steps was challenged. For example, interviewees perceived that communicating the vision, developing the vision, and creating a guiding coalition were integral parts of the process, whereas there was less emphasis on the importance of creating a sense of urgency and consolidating change. Although none of the workplaces reported using a formal organizational change model when implementing their healthy workplace initiatives, there did appear to be perceived merit in using the steps in Kotter's model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saxena, Vikrant, E-mail: vikrant.saxena@desy.de; Hamburg Center for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg; Ziaja, Beata, E-mail: ziaja@mail.desy.de
The irradiation of an atomic cluster with a femtosecond x-ray free-electron laser pulse results in the formation of a nanoplasma. This typically occurs within a few hundred femtoseconds; by this time the x-ray pulse is over, the direct photoinduced processes no longer contribute, and all created electrons within the nanoplasma are thermalized. The nanoplasma thus formed is a mixture of atoms, electrons, and ions of various charges. While expanding, it undergoes electron impact ionization and three-body recombination. Here we present a hydrodynamic model to describe the dynamics of such multi-component nanoplasmas. The model equations are derived by taking the moments of the corresponding Boltzmann kinetic equations. We include the equations obtained, together with the source terms due to electron impact ionization and three-body recombination, in our hydrodynamic solver. Model predictions for a test case, an expanding spherical Ar nanoplasma, are obtained. With this model, we complete the two-step approach to simulating x-ray-created nanoplasmas, enabling computationally efficient simulations of their picosecond dynamics. Moreover, the hydrodynamic framework including collisional processes can easily be extended with other source terms and then applied to follow the relaxation of any finite non-isothermal multi-component nanoplasma whose components have relaxed into local thermodynamic equilibrium.
Modelling soil-water dynamics in the rootzone of structured and water-repellent soils
NASA Astrophysics Data System (ADS)
Brown, Hamish; Carrick, Sam; Müller, Karin; Thomas, Steve; Sharp, Joanna; Cichota, Rogerio; Holzworth, Dean; Clothier, Brent
2018-04-01
In modelling the hydrology of Earth's critical zone, there are two major challenges. The first is to understand and model the processes of infiltration, runoff, redistribution and root-water uptake in structured soils that exhibit preferential flow through macropore networks. The other challenge is to parametrise and model the impact of the ephemeral hydrophobicity of water-repellent soils. Here we have developed a soil-water model which is based on physical principles yet possesses simple functionality to enable easier parameterisation, so as to predict soil-water dynamics in structured soils displaying time-varying degrees of hydrophobicity. Our model, WEIRDO (Water Evapotranspiration Infiltration Redistribution Drainage runOff), has been developed in the APSIM Next Generation platform (Agricultural Production Systems sIMulation) and operates on an hourly time step. The repository for this open-source code is https://github.com/APSIMInitiative/ApsimX. We have carried out sensitivity tests to show how WEIRDO predicts infiltration, drainage, redistribution, transpiration and soil-water evaporation for three distinctly different soil textures displaying differing hydraulic properties. These three soils were drawn from the UNSODA (UNsaturated SOil hydraulic DAtabase) soils database of the United States Department of Agriculture (USDA). We show how preferential flow processes and hydrophobicity determine the spatio-temporal pattern of soil-water dynamics. Finally, we have validated WEIRDO by comparing its predictions against three years of soil-water content measurements made under an irrigated alfalfa (Medicago sativa L.) trial. The results validate the model's ability to simulate soil-water dynamics in structured soils.
Graphics-Based Parallel Programming Tools
1991-09-01
mean "beyond" (as in " paranormal "). emphasizing the fact that the editor supports the specification of not just single graphs, but entire graph...conflicting dependencies: all processes see the three steps in the same order and all interprocess communication happens within a step. 6 Not all abstract
NASA Astrophysics Data System (ADS)
Harkrider, Curtis Jason
2000-08-01
The incorporation of gradient-index (GRIN) materials into optical systems offers novel and practical solutions to lens design problems. However, the widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index-profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fitted to ion concentration data that is used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion: the glass sample is alternated between ion exchange with a molten salt bath and annealing, and the time of each diffusion step can be used to exert control over the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements were designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility: an axicon, an imaging lens, and a curved radial lens, all with different index profile requirements, were designed out of a single glass composition.
Wang, Liang; Zhu, Jian; Samady, Habib; Monoly, David; Zheng, Jie; Guo, Xiaoya; Maehara, Akiko; Yang, Chun; Ma, Genshan; Mintz, Gary S.; Tang, Dalin
2017-01-01
Accurate stress and strain calculations are important for plaque progression and vulnerability assessment. Models based on in vivo data often need to form geometries with zero-stress/strain conditions. The goal of this paper is to use IVUS-based near-idealized geometries and introduce a three-step model construction process to include residual stress, axial shrinkage, and circumferential shrinkage, and to investigate their impacts on stress and strain calculations. In vivo intravascular ultrasound (IVUS) data of a human coronary artery were acquired for model construction, and in vivo IVUS movie data were used to determine patient-specific material parameter values. A three-step modeling procedure was used to build our models: (a) wrap the zero-stress vessel sector to obtain the residual stress; (b) stretch the vessel axially to its in vivo length; and (c) pressurize the vessel to recover its in vivo geometry. Eight models were constructed for our investigation. Wrapping led to reduced lumen and cap stress and increased outer-boundary stress. The model with axial stretch and circumferential shrink but no wrapping overestimated lumen and cap stress by 182% and 448%, respectively. The model with wrapping and circumferential shrink but no axial stretch predicted average lumen and cap stresses of 0.76 kPa and −15 kPa; the same model with 10% axial stretch had a lumen stress of 42.53 kPa and a cap stress of 29.0 kPa. Skipping circumferential shrinkage leads to overexpansion of the vessel and incorrect stress/strain calculations. A vessel stiffness increase (100%) leads to a 75% increase in lumen stress and a 102% increase in cap stress. PMID:27814429
Wang, Shaoying; Ji, Zhouxiang; Yan, Erfu; Haque, Farzin; Guo, Peixuan
2016-01-01
The DNA packaging motor of dsDNA bacterial viruses contains a head-tail connector with a channel for the genome to enter during assembly and to exit during host infection. The DNA packaging motor of bacterial virus phi29 was recently reported to use a "one-way revolution" mechanism for DNA packaging. This raises the question of how dsDNA is ejected during infection if the channel acts as a one-way inward valve. Here we report a three-step conformational change of the portal channel that is common among the DNA translocation motors of bacterial viruses T3, T4, SPP1, and phi29. The channels of these motors exercise three discrete steps of gating, as revealed by electrophysiological assays. It is proposed that the three-step channel conformational changes occur during the DNA entry process, resulting in a structural transition in preparation for DNA movement in the reverse direction during ejection. PMID:27181501
Modelling to very high strains
NASA Astrophysics Data System (ADS)
Bons, P. D.; Jessell, M. W.; Griera, A.; Evans, L. A.; Wilson, C. J. L.
2009-04-01
Ductile strains in shear zones often reach extreme values, resulting in typical structures such as winged porphyroclasts and several types of shear bands. The numerical simulation of the development of such structures has so far been inhibited by the low maximum strains that numerical models can normally achieve: typical numerical models collapse at shear strains on the order of one to three. We have implemented a number of new functionalities in the numerical platform "Elle" (Jessell et al. 2001), which significantly increase the amount of strain that can be achieved and simultaneously reduce boundary effects that become increasingly disturbing at higher strain. Constant remeshing, while maintaining the polygonal phase regions, is the first step to avoid collapse of the finite-element grid required by finite-element solvers such as Basil (Houseman et al. 2008). The second step is to apply a grain-growth routine to the boundaries of the polygons that represent phase regions. This way, the development of sharp angles is avoided; a second advantage is that phase regions may merge or become separated (boudinage), topological changes that are normally not possible in finite-element deformation codes. The third step is the use of wrapping vertical model boundaries, which maintain optimal and unchanging model boundaries for the application of stress or velocity boundary conditions. The fourth step is to shift the model by a random amount in the vertical direction every time step. This way, the fixed horizontal boundary conditions are applied to different material points within the model at every time step; disturbing boundary effects are thus averaged out over the whole model rather than localised at, e.g., the top and bottom of the model. Reduction of boundary effects has the additional advantage that models can be smaller and therefore numerically more efficient. Owing to the combination of these existing and new functionalities, it is now possible to simulate the development of very high-strain structures. Jessell, M.W., Bons, P.D., Evans, L., Barr, T., Stüwe, K. 2001. Elle: a micro-process approach to the simulation of microstructures. Computers & Geosciences 27, 17-30. Houseman, G., Barr, T., Evans, L. 2008. Basil: stress and deformation in a viscous material. In: Bons, P.D., Koehn, D., Jessell, M.W. (Eds.), Microdynamics Simulation. Lecture Notes in Earth Sciences 106, Springer, Berlin, 405 p.
Rhodes, Scott D; Mann-Jackson, Lilli; Alonzo, Jorge; Simán, Florence M; Vissman, Aaron T; Nall, Jennifer; Abraham, Claire; Aronson, Robert E; Tanner, Amanda E
2017-12-01
The science underlying the development of individual, community, system, and policy interventions designed to reduce health disparities has lagged behind other innovations. Few models, theoretical frameworks, or processes exist to guide intervention development. Our community-engaged research partnership has been developing, implementing, and evaluating efficacious interventions to reduce HIV disparities for over 15 years. Based on our intervention research experiences, we propose a novel 13-step process designed to demystify and guide intervention development. Our intervention development process includes steps such as establishing an intervention team to manage the details of intervention development; assessing community needs, priorities, and assets; generating intervention priorities; evaluating and incorporating theory; developing a conceptual or logic model; crafting activities; honing materials; administering a pilot, noting its process, and gathering feedback from all those involved; and editing the intervention based on what was learned. Here, we outline and describe each of these 13 steps.
Ye, Jianchu; Tu, Song; Sha, Yong
2010-10-01
For two-step transesterification biodiesel production from sunflower oil, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors of the two-step reaction stage are determined quantitatively, based on a kinetic model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product. Taking the transesterification intermediate product into consideration, both the traditional distillation separation process and the improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process has a distinct advantage in energy duty and equipment requirements owing to the replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.
Melanin fluorescence spectra by step-wise three photon excitation
NASA Astrophysics Data System (ADS)
Lai, Zhenhua; Kerimo, Josef; DiMarzio, Charles A.
2012-03-01
Melanin is the characteristic chromophore of human skin, with various potential biological functions. Kerimo discovered enhanced melanin fluorescence by step-wise three-photon excitation in 2011. In this article, the step-wise three-photon excited fluorescence (STPEF) spectrum of melanin between 450 nm and 700 nm is reported. The melanin STPEF spectrum exhibits an exponential increase with wavelength. However, there was a probability of about 33% that another kind of step-wise multi-photon excited fluorescence (SMPEF), peaking at 525 nm as shown by previous research, could also be generated using the same process. Using an excitation source at 920 nm, as opposed to 830 nm, increased the potential for generating the SMPEF peak at 525 nm. The SMPEF spectrum peaking at 525 nm photobleached faster than the STPEF spectrum.
NASA Astrophysics Data System (ADS)
Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.
2017-12-01
Although there are numerous studies of 3D Earth density models, building an accurate one remains an engaging challenge. One procedure to refine global 3D Earth density models is based on unambiguous measurements of the Earth's normal-mode eigenfrequencies. To obtain unbiased eigenfrequency measurements one needs to deal with time records of varying quality and, especially, different noise sources, while standard approaches usually rely on signal-processing methods such as the Fourier transform. Here we present estimates of complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis is performed in three steps. The first step uses stacking methods to enhance specific modes of interest above the observed noise level; of the three methods tested, optimal sequence estimation outperformed the spherical harmonic stacking and receiver strip methods. In the second step we apply an autoregressive method in the frequency domain to estimate the complex eigenfrequencies of the target modes. In the third step we apply the phasor walkout method to test and confirm our eigenfrequencies. Before analysing the time records, we evaluate how station distribution and noise levels affect the estimates of eigenfrequencies and structure coefficients by using synthetic seismograms calculated for a realistic 3D Earth model that includes the Earth's ellipticity and lateral heterogeneity. Synthetic seismograms are computed by means of normal-mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Eventually, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter data recorded after six mega-earthquakes of magnitude greater than 8.3. Hence, we propose new estimates of structure coefficients that depend on the density variations.
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF (the "mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step, using the same procedure as the second step but with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and the maximum slip of 1.38 m reached at the surface. The seismic moment released is estimated to be 2.32 × 10^19 Nm, consistent with the seismic estimate of 2.50 × 10^19 Nm.
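The first step, a global search for the posterior mode, can be sketched compactly. The toy example below uses scipy's dual_annealing as a stand-in for the adaptive simulated annealing employed by the authors; the linear forward model, the Gaussian misfit and prior, and the two-parameter "slip" vector are illustrative assumptions.

    import numpy as np
    from scipy.optimize import dual_annealing

    # Toy forward model: predicted displacements are linear in slip (d = G m).
    G = np.array([[1.0, 0.4], [0.3, 1.2], [0.8, 0.8]])
    d_obs = np.array([1.1, 1.6, 1.7])
    sigma = 0.1                                    # assumed data standard deviation

    def neg_log_posterior(m):
        """Negative log posterior = data misfit + prior (to be minimized)."""
        misfit = np.sum((G @ m - d_obs) ** 2) / (2.0 * sigma ** 2)
        prior = 0.5 * np.sum(m ** 2)               # weak Gaussian prior on slip
        return misfit + prior

    # Step 1: global search for the posterior mode by simulated annealing,
    # with bounds doubling as a positivity constraint on slip.
    result = dual_annealing(neg_log_posterior, bounds=[(0.0, 5.0)] * 2, seed=42)
    print("posterior mode:", result.x)

In the full method this mode then seeds the Monte Carlo sampling of the second step, so the expensive sampler starts from a near-optimal solution.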
Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch
2013-09-01
The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. A direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled up from the mL-scale to the 1 L-scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for the high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies. Copyright © 2013 Wiley Periodicals, Inc.
Conceptual analysis of Physiology of vision in Ayurveda
Balakrishnan, Praveen; Ashwini, M. J.
2014-01-01
The process by which the world outside is seen is termed the visual process or the physiology of vision. There are three phases in this visual process: refraction of light, conversion of light energy into electrical impulses, and finally peripheral and central neurophysiology. With the advent of modern instruments, the step-by-step biochemical changes occurring at each level of the visual process have been deciphered. Many investigations have emerged to track these changes, helping to diagnose the exact nature of disease. Ayurveda describes this physiology of vision based on the functions of vata and pitta. The philosophical textbook of ayurveda, Tarka Sangraha, gives certain basic facts about the visual process. This article discusses the second and third phases of the visual process. A step-by-step analysis of the visual process through the spectacles of ayurveda, amalgamated with the basics of philosophy from Tarka Sangraha, is presented critically to generate a concrete idea of the physiology and thereby interpret pathology on the grounds of ayurveda based on investigative reports. PMID:25336853
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Yunhua; Jones, Susanne B.; Biddy, Mary J.
2012-08-01
This study reports a comparison of biomass gasification based syngas-to-distillate (S2D) systems using techno-economic analysis (TEA). Three cases, a state of technology (SOT) case, a goal case, and a conventional case, were compared in terms of performance and cost. The SOT and goal cases represent technology being developed at Pacific Northwest National Laboratory for a process starting with syngas and using a single-step dual-catalyst reactor for distillate generation (S2D process). The conventional case mirrors the two-step S2D process previously utilized and reported by Mobil, using natural gas feedstock and consisting of separate syngas-to-methanol and methanol-to-gasoline (MTG) processes. Analysis of the three cases revealed that the goal case could indeed reduce fuel production cost below that of the conventional case, but that the SOT case was still more expensive than the conventional one. The SOT case suffers from low one-pass yield and high selectivity to light hydrocarbons, both of which drive up production cost. Sensitivity analysis indicated that light hydrocarbon yield, single-pass conversion efficiency, and reactor space velocity are the key factors driving the high cost of the SOT case.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models, equalizing both run time and fit quality. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade, which initializes or fixes parameter values in a later optimization step from simpler models fitted in an earlier step, further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
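The fitting cascade is simple to express. The sketch below fits a one-compartment toy model with the gradient-free Powell algorithm and then uses that fit to initialize a two-compartment model; the exponential models, the synthetic signal and all parameter values are illustrative assumptions, not the NODDI or CHARMED implementations.

    import numpy as np
    from scipy.optimize import minimize

    b = np.linspace(0.0, 3.0, 30)                  # toy acquisition protocol
    rng = np.random.default_rng(1)
    signal = (0.7 * np.exp(-1.0 * b) + 0.3 * np.exp(-0.2 * b)
              + 0.01 * rng.standard_normal(b.size))

    def sse_simple(p):
        """One-compartment model: S = exp(-b D)."""
        return np.sum((np.exp(-b * p[0]) - signal) ** 2)

    def sse_complex(p):
        """Two-compartment model: S = f exp(-b D1) + (1 - f) exp(-b D2)."""
        f, d1, d2 = p
        return np.sum((f * np.exp(-b * d1) + (1.0 - f) * np.exp(-b * d2) - signal) ** 2)

    # Stage 1: fit the simple model (Powell needs no gradients).
    fit1 = minimize(sse_simple, x0=[0.5], method="Powell")

    # Stage 2: initialize the complex model from the simpler fit (the cascade).
    x0 = [0.5, fit1.x[0], 0.1 * fit1.x[0]]
    fit2 = minimize(sse_complex, x0=x0, method="Powell")
    print("cascade fit:", fit2.x)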
Low-Cost alpha Alane for Hydrogen Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabian, Tibor; Petrie, Mark; Crouch-Baker, Steven
This project was directed towards the further development of the Savannah River National Laboratory (SRNL) lab-scale electrochemical synthesis of the hydrogen storage material alpha-alane and the Ardica Technologies-SRI International (SRI) chemical downstream processes that are necessary to meet DOE cost metrics and transition alpha-alane synthesis to an industrial scale. Ardica has demonstrated the use of alpha-alane in a fuel-cell system for the U.S. Army WFC20 20 W soldier power system that has successfully passed initial field trials with individual soldiers. While alpha-alane has been clearly identified as a desirable hydrogen storage material, cost-effective means for its production and regeneration on a scale of use applicable to the industry have yet to be established. We focused on three principal development areas: 1. The construction of a comprehensive engineering techno-economic model to establish the production costs of alpha-alane by both electrochemical and chemical routes at scale. 2. The identification of critical, cost-saving design elements of the electrochemical cell and the quantification of the product yields of the primary electrochemical process. A moving particle-bed reactor design was constructed and operated. 3. The experimental quantification of the product yields of candidate downstream chemical processes necessary to produce alpha-alane, to complete the most cost-effective overall manufacturing process. Our techno-economic model shows that under key assumptions most 2015 and 2020 DOE hydrogen storage system cost targets for low and medium power can be achieved using the electrochemical alane synthesis process. To meet the most aggressive 2020 storage system cost target, $1/g, our model indicates that production of 420 metric tons per year (MT/y) of alpha-alane is required. Laboratory-scale experimental work demonstrated that the yields of two of the three critical component steps within the overall "electrochemical process" were sufficiently high to meet this production target. For the yield of the third step, the crystallization of alpha-alane from the primary alane-related product of the electrochemical reaction, further development is required.
NASA Astrophysics Data System (ADS)
Khan, M. N.; Shamim, T.
2017-08-01
Hydrogen production using a three-reactor chemical looping reforming (TRCLR) technology is an innovative and attractive process. Fossil fuels such as methane are used as feedstock. The process is similar to conventional steam-methane reforming but occurs in three steps utilizing an oxygen carrier. As the oxygen carrier plays an important role, it should be selected carefully. In this study, two oxygen carrier materials, based on iron (Fe) and tungsten (W), are analysed using a thermodynamic model of a three-reactor chemical looping reforming plant in Aspen Plus. The results indicate that iron oxide has a moderate oxygen-carrying capacity and is cheaper since it is abundantly available. In terms of hydrogen production efficiency, tungsten oxide gives 4% better efficiency than iron oxide, while in terms of electrical power efficiency, iron oxide gives 4.6% better results than tungsten oxide. Overall, a TRCLR system with iron oxide is 2.6% more efficient and more cost effective than a TRCLR system with tungsten oxide.
Du Bois, Steve N; Johnson, Sarah E; Mustanski, Brian
2012-08-01
HIV disproportionately affects racial and ethnic minority young men who have sex with men (YMSM). HIV prevention research does not include these YMSM commensurate to their HIV burden. We examined racial and ethnic differences during a unique three-step recruitment process for an online, YMSM HIV prevention intervention study (N = 660). Step one was completed in-person; steps two and three online. Fewer Black and Latino YMSM completed step two (initiating online participation) than White YMSM. Internet use frequency accounted for the Latino versus White difference in initiating online participation, but not the Black versus White difference. Future online HIV prevention interventions recruiting diverse YMSM should focus on initiating online engagement among Black participants.
Toward a General Research Process for Using Dubin's Theory Building Model
ERIC Educational Resources Information Center
Holton, Elwood F.; Lowe, Janis S.
2007-01-01
Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…
NASA Astrophysics Data System (ADS)
Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio
2017-11-01
Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. Their prevention requires reliable and fast predictive tools. This work presents a three-step method to find the stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment is performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The method permits not only the frequencies of unstable acoustic oscillations to be computed, but their amplitudes as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) explicitly describe the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal values; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to highlight the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. The model could then be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already suggests interesting avenues to improve the calibration of FSPMs.
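The second step, ranking candidate parameter subsets by AIC after calibration, can be sketched briefly. Below, a three-parameter logistic toy model is calibrated with growing subsets of free parameters while the rest stay at nominal values; the model, the data and the subsets are illustrative assumptions, not the winter oilseed rape FSPM.

    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 10.0, 50)
    rng = np.random.default_rng(2)
    y_obs = 3.0 / (1.0 + np.exp(-(t - 5.0))) + 0.1 * rng.standard_normal(t.size)

    nominal = {"a": 2.5, "b": 4.5, "c": 1.0}        # nominal parameter values

    def model(p):
        return p["a"] / (1.0 + np.exp(-p["c"] * (t - p["b"])))

    def aic_for_subset(free):
        """Re-estimate only the parameters in `free`; others stay nominal."""
        def residuals(x):
            p = dict(nominal)
            p.update(zip(free, x))
            return model(p) - y_obs
        fit = least_squares(residuals, x0=[nominal[k] for k in free])
        n, k = t.size, len(free)
        rss = np.sum(fit.fun ** 2)
        return n * np.log(rss / n) + 2 * k          # AIC up to an additive constant

    for subset in [("a",), ("a", "b"), ("a", "b", "c")]:
        print(subset, round(aic_for_subset(subset), 2))

The subset with the lowest AIC balances fit quality against the number of re-estimated parameters, which is exactly the trade-off the methodology exploits.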
BEARKIMPE-2: A VBA Excel program for characterizing granular iron in treatability studies
NASA Astrophysics Data System (ADS)
Firdous, R.; Devlin, J. F.
2014-02-01
The selection of a suitable kinetic model to investigate the reaction rate of a contaminant with granular iron (GI) is essential to optimize the reactivity-based performance of a permeable reactive barrier (PRB). The newly developed Kinetic Iron Model (KIM) determines the surface rate constant (k) and sorption parameters (Cmax and J), which it was not previously possible to identify uniquely. The code, written in Visual Basic (VBA) within Microsoft Excel, was adapted from earlier command-line FORTRAN codes, BEARPE and KIMPE. The program is organized with several user-interface screens (UserForms) that guide the user step by step through the analysis. BEARKIMPE-2 uses a non-linear optimization algorithm to calculate transport and chemical kinetic parameters. Both reactive and non-reactive sites are considered. A demonstration of the functionality of BEARKIMPE-2 with three nitroaromatic compounds showed that the differences in reaction rates for these compounds could be attributed to differences in their sorption behavior rather than their propensities to accept electrons in the reduction process.
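A minimal sketch of the kind of non-linear estimation involved, using scipy's curve_fit on synthetic batch data; the toy rate law (first-order decay toward a sorbed residual fraction) and all values are illustrative assumptions, not the KIM equations solved by BEARKIMPE-2.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, k, cmax):
        """Toy model: normalized concentration decaying toward a residual cmax."""
        return cmax + (1.0 - cmax) * np.exp(-k * t)

    t = np.linspace(0.0, 24.0, 13)                  # sampling times (h)
    rng = np.random.default_rng(3)
    c_obs = decay(t, 0.35, 0.15) + 0.02 * rng.standard_normal(t.size)

    (k, cmax), cov = curve_fit(decay, t, c_obs, p0=[0.1, 0.1])
    print(f"k = {k:.3f} 1/h, Cmax = {cmax:.3f}, "
          f"std errors = {np.sqrt(np.diag(cov)).round(3)}")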
Heat and Mass Transfer Model in Freeze-Dried Medium
NASA Astrophysics Data System (ADS)
Alfat, Sayahdin; Purqon, Acep
2017-07-01
There are big problems in the agriculture sector every year. One of the major problems is the abundance of agricultural products during the peak of harvest season, which is not matched by an increase in consumer demand; this causes agricultural products to be wasted. An alternative is food preservation by the freeze-drying method. This method uses heat transfer through conduction and convection to reduce the water content of the food. The main objective of this research was to design a model of heat and mass transfer in a freeze-dried medium. There were two steps in this research: the first was the design of the medium as the heat injection site, and the second was the simulation of heat and mass transfer in the product. During the simulation process, we use the physical properties of some agricultural products. The result shows the temperature and moisture distribution at every second. The research uses the finite element method (FEM), and results are illustrated in three dimensions.
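The heat-transfer half of such a model reduces, in its simplest form, to the transient conduction equation. The sketch below marches an explicit finite-difference scheme through a 1D slab of product (the paper uses the finite element method; finite differences are used here only to keep the sketch short); the geometry, diffusivity and boundary temperatures are illustrative assumptions.

    import numpy as np

    L, nx = 0.02, 41                       # slab thickness (m) and grid points
    alpha = 1.4e-7                         # assumed thermal diffusivity (m^2/s)
    dx = L / (nx - 1)
    dt = 0.4 * dx ** 2 / alpha             # satisfies the explicit stability limit
    T = np.full(nx, 25.0)                  # initial product temperature (C)
    T[0] = T[-1] = -30.0                   # freeze-dryer shelf temperature (C)

    for _ in range(2000):                  # march the explicit scheme in time
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

    print(T.round(1))                      # temperature profile across the slab

A fuller model would couple this equation to a moisture (mass-transfer) equation through the sublimation front, which is where the freeze-drying physics enters.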
Moretti, Paul; Choubert, Jean-Marc; Canler, Jean-Pierre; Buffière, Pierre; Pétrimaux, Olivier; Lessard, Paul
2018-02-01
The integrated fixed-film activated sludge (IFAS) process is being increasingly used to enhance nitrogen removal in former activated sludge systems. The aim of this work is to evaluate a numerical model of a new nitrifying/denitrifying IFAS configuration. It consists of two carrier-free reactors (anoxic and aerobic) and one IFAS reactor with a carrier filling ratio of 43%, followed by a clarifier. Simulations were carried out with GPS-X involving the nitrification reaction combined with a 1D heterogeneous biofilm model, including attachment/detachment processes. An original iterative calibration protocol comprising four steps and nine actions was created. Experimental campaigns were carried out to collect data on the pilot in operation, specifically for modelling purposes. The model was able to properly predict the variations of the activated sludge (bulk) and biofilm masses, the nitrification rates of both the activated sludge and the biofilm, and the nitrogen concentration in the effluent for short (4-10 days) and long (300 days) simulation runs. A calibrated parameter set (biokinetics, detachment, diffusion) is proposed for the activated sludge, biofilm and effluent variables to enhance model predictions on hourly and daily data sets.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
NASA Astrophysics Data System (ADS)
Magdy, Yehia M.; Altaher, Hossam; ElQada, E.
2018-03-01
In this research, the removal of 2,4-dinitrophenol, 2-nitrophenol and 4-nitrophenol from aqueous solution using char ash from animal bones was investigated using a batch technique. Three two-parameter isotherms (Freundlich, Langmuir, and Temkin) were applied to analyze the experimental data. Both linear and nonlinear regression analyses were performed for these models to estimate the isotherm parameters. Three three-parameter isotherms (Redlich-Peterson, Sips, Toth) were also tested. Moreover, the kinetic data were tested using pseudo-first-order, pseudo-second-order, Elovich, intraparticle diffusion and Boyd methods. The Langmuir adsorption isotherm provided the best fit for the experimental data, indicating monolayer adsorption. The maximum adsorption capacity was 8.624, 7.55, and 7.384 mg/g for 2-nitrophenol, 2,4-dinitrophenol, and 4-nitrophenol, respectively. The experimental data fitted well to the pseudo-second-order model, suggesting a chemical nature of the adsorption process; the R² values for this model were 0.973 up to 0.999. This result was supported by the Temkin model, indicating a heat of adsorption greater than 10 kJ/mol. The rate-controlling step was intraparticle diffusion for 2-nitrophenol, and a combination of intraparticle diffusion and film diffusion for the other two phenols. The pH and temperature of the solution were found to have a considerable effect, and the temperature dependence indicated the exothermic nature of the adsorption process. The highest adsorption capacity was obtained at pH 9 and 25 °C.
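A minimal sketch of the nonlinear regression route for the best-fitting model above, fitting the Langmuir isotherm with scipy; the equilibrium data points are illustrative assumptions, not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(ce, qmax, kl):
        """Langmuir isotherm: qe = qmax KL Ce / (1 + KL Ce)."""
        return qmax * kl * ce / (1.0 + kl * ce)

    ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # Ce (mg/L), assumed
    qe = np.array([2.1, 4.0, 5.6, 6.9, 7.8, 8.3])       # qe (mg/g), assumed

    (qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[8.0, 0.1])
    residuals = qe - langmuir(ce, qmax, kl)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
    print(f"qmax = {qmax:.2f} mg/g, KL = {kl:.3f} L/mg, R^2 = {r2:.3f}")

The fitted qmax plays the role of the maximum adsorption capacities reported above, and comparing R² across candidate isotherms is how the best model was selected.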
Using Microcomputers in School Administration. Fastback No. 248.
ERIC Educational Resources Information Center
Connors, Eugene T.; Valesky, Thomas C.
This "fastback" outlines the steps to take in computerizing school administration. After an introduction that lists the potential benefits of microcomputers in administrative offices, the booklet begins by delineating a three-step process for establishing an administrative computer system: (1) creating a district-level committee of administrators,…
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
10 Steps to Building an Architecture for Space Surveillance Projects
NASA Astrophysics Data System (ADS)
Gyorko, E.; Barnhart, E.; Gans, H.
Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well defined, or well architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques, based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines. Each step provides a guideline for the type of data to collect, and also the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavior; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.
An Ecological Approach to Learning Dynamics
ERIC Educational Resources Information Center
Normak, Peeter; Pata, Kai; Kaipainen, Mauri
2012-01-01
New approaches to emergent learner-directed learning design can be strengthened with a theoretical framework that considers learning as a dynamic process. We propose an approach that models a learning process using a set of spatial concepts: learning space, position of a learner, niche, perspective, step, path, direction of a step and step…
Three-dimensional printing fiber reinforced hydrogel composites.
Bakarich, Shannon E; Gorkin, Robert; in het Panhuis, Marc; Spinks, Geoffrey M
2014-09-24
An additive manufacturing process that combines digital modeling and 3D printing was used to prepare fiber-reinforced hydrogels in a single-step process. The composite materials were fabricated by selectively patterning a combination of alginate/acrylamide gel precursor solution and an epoxy-based UV-curable adhesive (Emax 904 Gel-SC) with an extrusion printer. UV irradiation was used to cure the two inks into a single composite material. Spatial control of fiber distribution within the digital models allowed for the fabrication of a series of materials with a spectrum of swelling behavior and mechanical properties, with physical characteristics ranging from soft and wet to hard and dry. A comparison with the "rule of mixtures" was used to show that the swollen composite materials adhere to standard composite theory. A prototype meniscus cartilage was prepared to illustrate the potential application in bioengineering.
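For reference, the "rule of mixtures" invoked above predicts a composite property such as the elastic modulus as a volume-weighted average of the constituents (the notation is the standard one, assumed here rather than taken from the paper):

    E_c = V_f E_f + (1 - V_f) E_m

where E_f and E_m are the fiber and matrix moduli and V_f is the fiber volume fraction; the swelling-dependent properties reported above were compared against this linear prediction.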
Simulation of process identification and controller tuning for flow control system
NASA Astrophysics Data System (ADS)
Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.
2017-06-01
The PID controller is undeniably the most popular method used in controlling various industrial processes. The ability to tune the three elements of PID allows the controller to deal with the specific needs of industrial processes. This paper discusses the three elements of control action and the improvement of controller robustness through combinations of these control actions in various forms. A plant model is simulated using the Process Control Simulator in order to evaluate controller performance. At first, the open-loop response of the plant is studied by applying a step input to the plant and collecting the output data. Then, a FOPDT model of the physical plant is formed using both Matlab-Simulink and the PRC method. Controller settings are then calculated to find the values of Kc and τi that give satisfactory control in the closed-loop system. The performance of the closed-loop system is analysed through set-point tracking and disturbance rejection. To optimize overall performance, a refined tuning (or detuning) of the PID is further conducted to ensure a consistent response of the closed-loop system to set-point changes and disturbances. As a result, PB = 100% and τi = 2.0 s are preferred for set-point tracking, while PB = 100% and τi = 2.5 s are selected for rejecting the imposed disturbance. In a nutshell, the choice of tuning values likewise depends on the control objective for the stability performance of the overall physical model.
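The closed-loop exercise can be reproduced numerically. Below, a PI controller with the preferred set-point tracking settings (Kc = 100/PB = 1, τi = 2.0 s) drives an assumed first-order-plus-dead-time plant, integrated with the Euler method; the plant gain, time constant and dead time are illustrative assumptions, not the Process Control Simulator model.

    import numpy as np

    # FOPDT plant: tau dy/dt = -y + K u(t - theta), Euler-integrated.
    K, tau, theta, dt = 1.0, 5.0, 1.0, 0.01
    n, delay = 3000, int(theta / dt)
    Kc, tau_i = 1.0, 2.0                      # PI settings (PB = 100% -> Kc = 1)

    y, integral, setpoint = 0.0, 0.0, 1.0
    u_hist = np.zeros(n)
    out = np.zeros(n)
    for i in range(n):
        e = setpoint - y
        integral += e * dt
        u_hist[i] = Kc * (e + integral / tau_i)        # PI control law
        u_delayed = u_hist[i - delay] if i >= delay else 0.0
        y += dt * (-y + K * u_delayed) / tau           # plant update
        out[i] = y

    print(f"final value = {out[-1]:.3f}, overshoot = {out.max() - setpoint:.3f}")

Re-running the loop with τi = 2.5 s and an added load disturbance mimics the disturbance-rejection case discussed above.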
Validation in the Absence of Observed Events.
Lathrop, John; Ezell, Barry
2016-04-01
This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to consider the reason why decisionmakers seek validation, and from that basis we redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk-generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best-use-of-available-data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests. Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.
Comparison of 1-step and 2-step methods of fitting microbiological models.
Jewell, Keith
2012-11-15
Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially better), with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions, and 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data-efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be applied straightforwardly to any data set for which the 2-step method can be used, and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
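The distinction is easy to demonstrate on synthetic growth data. In the sketch below, a linear primary model (log counts growing at rate mu) is combined with a Ratkowsky-type square-root secondary model mu(T) = (b (T - Tmin))^2; the 2-step route fits one rate per temperature and then regresses those rates, while the 1-step route fits all raw observations jointly. The models and data are illustrative assumptions, not those of the paper.

    import numpy as np
    from scipy.optimize import curve_fit, least_squares

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 24.0, 9)                   # sampling times (h)
    temps = np.array([10.0, 15.0, 20.0, 25.0])      # storage temperatures (C)

    def mu_true(T):                                 # "true" secondary model
        return (0.02 * (T - 2.0)) ** 2

    # Synthetic log counts: logN0 = 3, linear growth at rate mu(T), plus noise.
    data = {T: 3.0 + mu_true(T) * t + 0.05 * rng.standard_normal(t.size)
            for T in temps}

    # 2-step: fit a growth rate per temperature, then fit the secondary model.
    rates = np.array([np.polyfit(t, y, 1)[0] for y in data.values()])
    (b2, Tmin2), _ = curve_fit(lambda T, b, Tmin: (b * (T - Tmin)) ** 2,
                               temps, rates, p0=[0.01, 0.0])

    # 1-step: fit b, Tmin and logN0 jointly against every raw observation.
    def residuals(p):
        b, Tmin, logN0 = p
        return np.concatenate([logN0 + (b * (T - Tmin)) ** 2 * t - data[T]
                               for T in temps])
    fit = least_squares(residuals, x0=[0.01, 0.0, 3.0])

    print("2-step:", b2, Tmin2, "  1-step:", fit.x)

The 1-step fit uses every observation to constrain every coefficient, which is the source of the precision and data-efficiency gains reported above.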
NASA Astrophysics Data System (ADS)
Chval, Zdeněk; Futera, Zdeněk; Burda, Jaroslav V.
2011-01-01
The hydration processes of two representative Ru(II) half-sandwich complexes, Ru(arene)(pta)Cl2 (from the RAPTA family) and [Ru(arene)(en)Cl]+ (further labeled Ru_en), were compared with the analogous reaction of cisplatin. In the study, quantum chemical methods were employed. All the complexes were optimized at the B3LYP/6-31G(d) level using the Conductor Polarizable Continuum Model (CPCM), and single-point (SP) energy calculations and determination of electronic properties were performed at the B3LYP/6-311++G(2df,2pd)/CPCM level. It was found that the hydration model works fairly well for the replacement of the first chloride by water, where acceptable agreement for both Gibbs free energies and rate constants was obtained. However, for the second hydration step, poorer agreement between the experimental and calculated values was achieved. In agreement with experimental values, the rate constants for the first step can be ordered RAPTA-B > Ru_en > cisplatin. The rate constants correlate well with the binding energies (BEs) of the Pt/Ru-Cl bond in the reactant complexes. Substitution reactions on the Ru_en and cisplatin complexes proceed only via a pseudoassociative (associative interchange) mechanism. In the case of RAPTA, by contrast, a competitive dissociation mechanism with a metastable pentacoordinated intermediate is also possible. The first hydration step is slightly endothermic for all three complexes, by 3-5 kcal/mol. The estimated BEs confirm that the benzene ligand is relatively weakly bonded, considering that it occupies three coordination positions of the Ru(II) cation.
Components of the Engulfment Machinery Have Distinct Roles in Corpse Processing
Meehan, Tracy L.; Joudi, Tony F.; Timmons, Allison K.; Taylor, Jeffrey D.; Habib, Corey S.; Peterson, Jeanne S.; Emmanuel, Shanan; Franc, Nathalie C.; McCall, Kimberly
2016-01-01
Billions of cells die in our bodies on a daily basis and are engulfed by phagocytes. Engulfment, or phagocytosis, can be broken down into five basic steps: attraction of the phagocyte, recognition of the dying cell, internalization, phagosome maturation, and acidification. In this study, we focus on the last two steps, which can collectively be considered corpse processing, in which the engulfed material is degraded. We use the Drosophila ovarian follicle cells as a model for engulfment of apoptotic cells by epithelial cells. We show that engulfed material is processed using the canonical corpse processing pathway involving the small GTPases Rab5 and Rab7. The phagocytic receptor Draper is present on the phagocytic cup and on nascent, phosphatidylinositol 3-phosphate (PI(3)P)- and Rab7-positive phagosomes, whereas integrins are maintained on the cell surface during engulfment. Due to the difference in subcellular localization, we investigated the role of Draper, integrins, and downstream signaling components in corpse processing. We found that some proteins were required for internalization only, while others had defects in corpse processing as well. This suggests that several of the core engulfment proteins are required for distinct steps of engulfment. We also performed double mutant analysis and found that combined loss of draper and αPS3 still resulted in a small number of engulfed vesicles. Therefore, we investigated another known engulfment receptor, Crq. We found that loss of all three receptors did not inhibit engulfment any further, suggesting that Crq does not play a role in engulfment by the follicle cells. A more complete understanding of how the engulfment and corpse processing machinery interact may enable better understanding and treatment of diseases associated with defects in engulfment by epithelial cells. PMID:27347682
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavromatis, K; Ivanova, N; Barry, Kerrie
2007-01-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled the sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
Estimation of steady-state leakage current in polycrystalline PZT thin films
NASA Astrophysics Data System (ADS)
Podgorny, Yury; Vorotilov, Konstantin; Sigov, Alexander
2016-09-01
Estimation of the steady-state (or "true") leakage current Js in polycrystalline ferroelectric PZT films with the use of the voltage-step technique is discussed. The Curie-von Schweidler (CvS) and sum-of-exponents (Σexp) models are studied for fitting current-time J(t) data. The Σexp model (a sum of three or two exponents) gives better fitting characteristics and provides good accuracy of Js estimation at reduced measurement time, thus making it possible to avoid film degradation, whereas the CvS model is very sensitive to both the start and finish time points and gives incorrect results in many cases. The results suggest the existence of low-frequency relaxation processes in PZT films with characteristic durations of tens and hundreds of seconds.
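For reference, the two fitted forms discussed above are commonly written as follows (the symbols are assumed here: Js is the steady-state leakage, J0 and n the CvS amplitude and exponent, Ai and τi the exponential amplitudes and relaxation times):

    J(t) = J_s + J_0 \, t^{-n}                                % Curie-von Schweidler
    J(t) = J_s + \sum_{i=1}^{m} A_i \, e^{-t/\tau_i}, \quad m = 2, 3   % sum of exponents

In both cases Js enters as the fitted asymptote, which is why the quality of the time-domain fit directly controls the accuracy of the steady-state leakage estimate.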
Modeling Negotiation by a Participatory Approach
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Ishida, Toru; Bousquet, François
In participatory approaches, social scientists use role-playing games (RPGs) effectively to understand the real thinking and behavior of stakeholders, but RPGs are not sufficient to handle a dynamic process like negotiation. In this study, a participatory simulation where user-controlled avatars and autonomous agents coexist is introduced into the participatory approach for modeling negotiation. To establish a modeling methodology for negotiation, we have tackled the following two issues. First, to enable domain experts to concentrate on interaction design for participatory simulation, we have adopted an architecture in which an interaction layer controls agents and have defined three types of interaction descriptions (interaction protocol, interaction scenario and avatar control scenario) to be described. Second, to enable domain experts and stakeholders to capitalize on participatory simulation, we have established a four-step process for acquiring a negotiation model: 1) surveys and interviews of stakeholders, 2) RPG, 3) interaction design, and 4) participatory simulation. Finally, we discuss our methodology through a case study of agricultural economics in northeast Thailand.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
How Long is my Toilet Roll?--A Simple Exercise in Mathematical Modelling
ERIC Educational Resources Information Center
Johnston, Peter R.
2013-01-01
The simple question of how much paper is left on my toilet roll is studied from a mathematical modelling perspective. As is typical with applied mathematics, models of increasing complexity are introduced and solved. Solutions produced at each step are compared with the solution from the previous step. This process exposes students to the typical…
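One simple closed-form model of the kind the article builds up to (assumed here, not quoted from the article): treat the remaining paper as an annulus with outer radius R, core radius r and thickness h per wrap, and equate its cross-sectional area to that of the unrolled strip:

    L = \frac{\pi (R^2 - r^2)}{h}

For example, R = 55 mm, r = 20 mm and h = 0.1 mm give L ≈ 82 m of paper remaining.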
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers techniques suitable for processing dense datasets. The proposed three-step algorithm comprises closed-itemset-generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application predicting rainfall categories (flood, excess, normal, deficit, and drought) from 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the 37-year study period (1969-2005).
Preparation of Term Papers Based upon a Research-Process Model.
ERIC Educational Resources Information Center
Feldmann, Rodney Mansfield; Schloman, Barbara Frick
1990-01-01
Described is an alternative method of term paper preparation which provides a step-by-step sequence of assignments and gives feedback to the students at all stages in the preparation of the report. An example of this model is provided, including 13 sequential assignments. (CW)
Least-squares finite element solutions for three-dimensional backward-facing step flow
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang
1993-01-01
Comprehensive numerical solutions of the steady-state incompressible viscous flow over a three-dimensional backward-facing step up to Re = 800 are presented. The results are obtained by the least-squares finite element method (LSFEM), which is based on the velocity-pressure-vorticity formulation. The computational domain is the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds numbers. The calculated values of the primary reattachment length are in good agreement with experimental results.
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimation of the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudsen, J.K.; Smith, C.L.
The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution would be used for the ASP models' uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that are comprised of more than one component, which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be utilized to propagate parameter uncertainty for event assessments.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion of the vector data to a Fourier series; generation of an intermediate function by combining two Fourier series, one for a large scale and one for a small scale; and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The morphing effect of the Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
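The three steps translate directly into a few lines of numpy. A boundary is treated as a complex signal z = x + iy, truncated Fourier coefficients of a detailed (large-scale) and a generalized (small-scale) version of the feature are blended with a morphing weight w, and the intermediate shape is recovered by the inverse transform. The shapes, the coefficient count and the simple truncation (keeping only low non-negative frequencies, which suffices for these toy shapes) are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def to_descriptors(boundary, n):
        """Closed boundary (N, 2) -> first n complex Fourier coefficients."""
        z = boundary[:, 0] + 1j * boundary[:, 1]
        return np.fft.fft(z)[:n] / z.size

    def to_boundary(coeffs, n_points):
        """Inverse: truncated coefficients -> reconstructed boundary (N, 2)."""
        full = np.zeros(n_points, dtype=complex)
        full[:coeffs.size] = coeffs * n_points
        z = np.fft.ifft(full)
        return np.column_stack([z.real, z.imag])

    t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    detailed = np.column_stack([np.cos(t) + 0.1 * np.cos(7 * t),
                                np.sin(t) + 0.1 * np.sin(7 * t)])
    generalized = np.column_stack([np.cos(t), np.sin(t)])

    ca = to_descriptors(detailed, 16)
    cb = to_descriptors(generalized, 16)
    for w in (0.0, 0.5, 1.0):                       # morphing weight
        shape = to_boundary((1.0 - w) * ca + w * cb, 256)
        print(w, shape[0].round(3))                 # one vertex of each intermediate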
Campo, Shelly; Askelson, Natoshia M; Spies, Erica L; Losch, Mary
2012-01-01
Unintended pregnancy among women in the 18-30 age group is a public health concern. The Extended Parallel Process Model (EPPM) provides a framework for exploring how women's perceptions of threat, efficacy, and fear influence intentions to use contraceptives. Past use and communication with best friends and partners were also considered. A telephone survey of 18-30-year-old women (N = 599) was completed. After univariate and bivariate analyses were conducted, the variables were entered into a hierarchical, multivariate linear regression with three steps consistent with the EPPM to predict behavioral intention. The first step included the demographic variables of relationship status and income. The constructs of the EPPM were entered in step 2. Step 3 contained the fear measure. The model for the third step was significant, F(10,471) = 36.40, p < 0.001, and the variance explained by this complete model was 0.42. Results suggest that perceived severity of the consequences of an unintended pregnancy (p < 0.01), communication with friends (p < 0.01) and with the last sexual partner (p < 0.05), relationship status (p < 0.01), and past use (p < 0.001) were associated with women's intentions to use contraceptives. A woman's perception of severity was related to her intention to use contraceptives. Half of the women (50.3%) reported ambivalence about the severity of an unintended pregnancy. In our study, talking with their last sexual partner had a positive effect on intentions to use contraceptives, while talking with friends influenced intentions in a negative direction. These results reconfirm the need for public health practitioners and health care providers to consider the level of ambivalence toward unintended pregnancy, communication with partners, and relationship status when trying to improve women's contraceptive behaviors. Implications for effective communication interventions are discussed.
Model for Simulating a Spiral Software-Development Process
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
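The PATT-based model itself is not publicly available, but the flavor of such a simulation can be sketched with a toy iteration loop; every parameter below is invented and the defect and effort logic is deliberately simplistic:

    import random

    LOC_PER_ITERATION = 5_000   # lines of code added per spiral iteration
    PRODUCTIVITY = 10           # lines of code per hour
    DEFECTS_PER_KLOC = 8        # injected defects per 1000 lines
    DETECTION_RATE = 0.7        # fraction of latent defects found per test phase
    REWORK_HOURS_PER_DEFECT = 4

    def simulate_spiral(iterations, seed=1):
        random.seed(seed)
        total_effort, latent = 0.0, 0
        for _ in range(iterations):
            # Each iteration repeats a modified waterfall:
            # requirements, design, code, test, evaluate.
            coding_hours = LOC_PER_ITERATION / PRODUCTIVITY
            injected = max(int(random.gauss(DEFECTS_PER_KLOC, 2)
                               * LOC_PER_ITERATION / 1000), 0)
            latent += injected
            detected = int(latent * DETECTION_RATE)
            latent -= detected
            total_effort += coding_hours + detected * REWORK_HOURS_PER_DEFECT
        return total_effort, latent

    print(simulate_spiral(iterations=4))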
Imagining roles for epigenetics in health promotion research.
McBride, Colleen M; Koehly, Laura M
2017-04-01
Discoveries from the Human Genome Project have invigorated discussions of epigenetic effects (modifiable chemical processes that influence DNA's ability to give instructions to turn gene expression on or off) on health outcomes. We suggest three domains in which new understandings of epigenetics could inform innovations in health promotion research: (1) increasing the motivational potency of health communications (e.g., explaining individual differences in health outcomes to interrupt optimistic biases about health exposures); (2) illuminating new approaches to targeted and tailored health promotion interventions (e.g., relapse prevention targeted to epigenetic responses to intervention participation); and (3) informing more sensitive measures of intervention impact (e.g., replacing or augmenting self-reported adherence). We suggest a three-step process for using epigenetics in health promotion research that emphasizes integrating epigenetic mechanisms into conceptual model development, which then informs the selection of intervention approaches and outcomes. Lastly, we pose examples of relevant scientific questions worth exploring.
Interactive computer simulations of knee-replacement surgery.
Gunther, Stephen B; Soto, Gabriel E; Colman, William W
2002-07-01
Current surgical training programs in the United States are based on an apprenticeship model. This model is outdated because it does not provide conceptual scaffolding, promote collaborative learning, or offer constructive reinforcement. Our objective was to create a more useful approach by preparing students and residents for operative cases using interactive computer simulations of surgery. Total-knee-replacement surgery (TKR) is an ideal procedure to model on the computer because there is a systematic protocol for the procedure. Also, this protocol is difficult to learn by the apprenticeship model because of the multiple instruments that must be used in a specific order. We designed an interactive computer tutorial to teach medical students and residents how to perform knee-replacement surgery. We also aimed to reinforce the specific protocol of the operative procedure. Our final goal was to provide immediate, constructive feedback. We created a computer tutorial by generating three-dimensional wire-frame models of the surgical instruments. Next, we applied a surface to the wire-frame models using three-dimensional modeling. Finally, the three-dimensional models were animated to simulate the motions of an actual TKR. The result is a step-by-step tutorial that teaches and tests the correct sequence of steps in a TKR. The student or resident must select the correct instruments in the correct order. The learner is encouraged to learn the stepwise surgical protocol through repetitive use of the computer simulation. Constructive feedback is provided through a grading system, which rates the student's or resident's ability to perform the task in the correct order. The grading system also accounts for the time required to perform the simulated procedure. We evaluated the efficacy of this teaching technique by testing medical students who learned by the computer simulation and those who learned by reading the surgical protocol manual. Both groups then performed TKR on manufactured bone models using real instruments. Their technique was graded with the standard protocol. The students who learned on the computer simulation performed the task in a shorter time and with fewer errors than the control group. They were also more engaged in the learning process. Surgical training programs generally lack a consistent approach to preoperative education related to surgical procedures. This interactive computer tutorial has allowed us to make a quantum leap in medical student and resident teaching in our orthopedic department because the students actually participate in the entire process. Our technique provides a linear, sequential method of skill acquisition and direct feedback, which is ideally suited for learning stepwise surgical protocols. Since our initial evaluation has shown the efficacy of this program, we have implemented this teaching tool into our orthopedic curriculum. Our plans for future work with this simulator include modeling procedures involving other anatomic areas of interest, such as the hip and shoulder.
A service relation model for web-based land cover change detection
NASA Astrophysics Data System (ADS)
Xing, Huaqiao; Chen, Jun; Wu, Hao; Zhang, Jun; Li, Songnian; Liu, Boyu
2017-10-01
Change detection with remotely sensed imagery is a critical step in land cover monitoring and updating. Although a variety of algorithms or models have been developed, none of them is universal for all cases. The selection of appropriate algorithms and construction of processing workflows depend largely on experts' knowledge of the "algorithm-data" relations among change detection algorithms and the imagery data used. This paper presents a service relation model for land cover change detection by integrating the experts' knowledge about the "algorithm-data" relations into web-based geo-processing. The "algorithm-data" relations are mapped into a set of web service relations with the analysis of functional and non-functional service semantics. These service relations are further classified into three different levels, i.e., the interface, behavior, and execution levels. A service relation model is then established using the Object and Relation Diagram (ORD) approach to represent the multi-granularity services and their relations for change detection. A set of semantic matching rules is built and used for deriving on-demand change detection service chains from the service relation model. A web-based prototype system was developed in the .NET development environment; it encapsulates nine change detection and pre-processing algorithms and represents their service relations as an ORD. Three test areas from Shandong and Hebei provinces, China, with different imagery conditions were selected for online change detection experiments, and the results indicate that on-demand service chains can be generated according to different users' demands.
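The ORD formalism is considerably richer, but the core step of deriving a service chain from typed "algorithm-data" relations can be sketched as a search over matching inputs and outputs; the service names and data types below are hypothetical:

    from collections import deque

    # Hypothetical services: name -> (input type, output type).
    SERVICES = {
        "radiometric_correction": ("raw_image", "corrected_image"),
        "coregistration":         ("corrected_image", "aligned_pair"),
        "change_vector_analysis": ("aligned_pair", "change_map"),
        "object_based_detection": ("aligned_pair", "change_map"),
    }

    def derive_chain(start_type, goal_type):
        # Breadth-first search over data types yields a shortest service chain.
        queue, seen = deque([(start_type, [])]), {start_type}
        while queue:
            t, chain = queue.popleft()
            if t == goal_type:
                return chain
            for name, (tin, tout) in SERVICES.items():
                if tin == t and tout not in seen:
                    seen.add(tout)
                    queue.append((tout, chain + [name]))
        return None

    print(derive_chain("raw_image", "change_map"))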
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
ERIC Educational Resources Information Center
Wilder, David A.; Myers, Kristin; Fischetti, Anthony; Leon, Yanerys; Nicholson, Katie; Allison, Janelle
2012-01-01
After a 3-step guided compliance procedure (vocal prompt, vocal plus model prompt, vocal prompt plus physical guidance) did not increase compliance, we evaluated 2 modifications with 4 preschool children who exhibited noncompliance. The first modification consisted of omission of the model prompt, and the second modification consisted of omitting…
Construction of moment-matching multinomial lattices using Vandermonde matrices and Gröbner bases
NASA Astrophysics Data System (ADS)
Lundengård, Karl; Ogutu, Carolyne; Silvestrov, Sergei; Ni, Ying; Weke, Patrick
2017-01-01
In order to describe and analyze the quantitative behavior of stochastic processes, such as the process followed by a financial asset, various discretization methods are used. One such set of methods are lattice models, where a time interval is divided into equal time steps and the rate of change of the process is restricted to a particular set of values in each time step. The well-known binomial and trinomial models are the most commonly used in applications, although several kinds of higher order models have also been examined. Here we examine various ways of designing higher order lattice schemes with different node placements in order to guarantee moment matching with the process.
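A minimal sketch of the moment-matching step, assuming a conventional symmetric node placement and illustrative drift and volatility; the branch probabilities follow from solving a small Vandermonde system:

    import numpy as np

    dt, mu, sigma = 1 / 252, 0.05, 0.2                            # illustrative parameters
    nodes = np.array([-1.0, 0.0, 1.0]) * sigma * np.sqrt(3 * dt)  # node placement

    # Raw moments of the (approximately normal) increment over dt.
    m = np.array([
        1.0,                                 # probabilities sum to one
        mu * dt,                             # first moment
        sigma**2 * dt + (mu * dt) ** 2,      # second raw moment
    ])

    # Rows of V are nodes**0, nodes**1, nodes**2, so V @ p reproduces the moments.
    V = np.vander(nodes, 3, increasing=True).T
    p = np.linalg.solve(V, m)
    print("branch probabilities:", p, "sum:", p.sum())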
Guzmán, Wilda Z; Gely, María I; Crespo, Kathleen; Matos, José R; Sánchez, Nilda; Guerrero, Lidia M
2011-04-01
A revision of the clinical assessment system of the University of Puerto Rico School of Dental Medicine was initiated in 2007, with the goal of achieving a system that would be fully understood and used by both faculty and students to improve student performance throughout the curriculum. The transformation process was organized according to Kotter's Eight-Step Change Model. Some of the initial findings in 2007 were as follows: 87 percent of current daily clinical evaluations were scored at the scale's highest level, 33 percent of faculty members lacked knowledge of the evaluation system, and 60 percent of students reported that faculty members were not well calibrated. As a result of the transformation process, a pilot project has been implemented in the comprehensive clinical course for senior students. The revised assessment methods utilized are verbal daily feedback, clinical evaluations once every three months, a digital portfolio, and competency exams. There is also a productivity component included in the course grade. We conclude that adapting Kotter's model for use in the transformation process has been very useful; gaining support from both the administration and faculty has been essential; and the provision of continuous faculty development activities has been empowering. The American Dental Education Association Commission on Change and Innovation in Dental Education (ADEA CCI) Liaisons at the University of Puerto Rico School of Dental Medicine have been effective in producing a greater awareness among the faculty about the value of the competency-based curriculum and the need for change.
Modeling Woven Polymer Matrix Composites with MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)
2000-01-01
NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.
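The method of cells behind MAC/GMC is far more sophisticated, but the two-step idea (homogenize each through-thickness column of subcells first, then homogenize the columns in the plane of the weave) can be illustrated with elementary Voigt/Reuss averages; all moduli and volume fractions below are invented:

    import numpy as np

    def voigt(E, v):   # volume-weighted arithmetic mean (uniform-strain bound)
        return float(np.dot(v, E))

    def reuss(E, v):   # volume-weighted harmonic mean (uniform-stress bound)
        return float(1.0 / np.dot(v, 1.0 / np.asarray(E)))

    # Step 1: through-thickness homogenization of each column (tow/matrix stacks).
    columns = [([60e9, 3e9], [0.5, 0.5]),
               ([60e9, 3e9], [0.7, 0.3]),
               ([3e9],       [1.0])]              # pure-matrix column
    E_columns = [reuss(E, v) for E, v in columns]

    # Step 2: in-plane homogenization of the homogenized columns.
    E_effective = voigt(E_columns, [0.4, 0.4, 0.2])
    print(E_effective / 1e9, "GPa")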
Implementing team huddles in small rural hospitals: How does the Kotter model of change apply?
Baloh, Jure; Zhu, Xi; Ward, Marcia M
2017-12-17
To examine how the process of change prescribed in Kotter's change model applies in implementing team huddles, and to assess the impact of the execution of early change phases on change success in later phases. Kotter's model can help guide hospital leaders to implement change and potentially improve success rates. However, the model is understudied, particularly in health care. We followed eight hospitals implementing team huddles for 2 years, interviewing the change teams quarterly to inquire about implementation progress. We assessed how the hospitals performed in the three overarching phases of the Kotter model, and examined whether performance in the initial phase influenced subsequent performance. In half of the hospitals, change processes were congruent with Kotter's model, and performance in the initial phase influenced their success in subsequent phases. In the other hospitals, change processes were incongruent with the model, and their success depended on implementation scope and the strategies employed. We found mixed support for the Kotter model. It fits better with implementations that aim to spread to multiple hospital units; when the scope is limited, changes can be successful even when steps are skipped. Kotter's model can be a useful guide for nurse managers implementing changes. © 2017 John Wiley & Sons Ltd.
Fundamental Study of Material Flow in Friction Stir Welds
NASA Technical Reports Server (NTRS)
Reynolds, Anthony P.
1999-01-01
The presented research project consists of two major parts. First, the material flow in solid-state, friction stir butt welds has been investigated using a marker insert technique. Changes in material flow due to variations in welding parameters and tool geometry have been examined for different materials. The method provides a semi-quantitative, three-dimensional view of the material transport in the welded zone. Second, a FSW process model has been developed. The fully coupled model is based on fluid mechanics; the solid-state material transport during welding is treated as a laminar, viscous flow of a non-Newtonian fluid past a rotating circular cylinder. The heat necessary for the material softening is generated by deformation of the material. As a first step, a two-dimensional model, which contains only the pin of the FSW tool, was created to test the suitability of the modeling approach and to perform parametric studies of the boundary conditions. The material flow visualization experiments agree very well with the predicted flow field. Accordingly, material within the pin diameter is transported only in the rotation direction around the pin. Due to the simplifying assumptions inherent in the 2-D model, other experimental data such as forces on the pin, torque, and weld energy cannot be directly used for validation. However, the 2-D model predicts the same trends as shown in the experiments. The model also predicts a deviation from the "normal" material flow at certain combinations of welding parameters, suggesting a possible mechanism for the occurrence of some typical FSW defects. The next step was the development of a three-dimensional process model. The simplified FSW tool is modeled as a flat shoulder rotating on top of the workpiece and a rotating, cylindrical pin that extends through the total height of the flow domain. The thermal boundary conditions at the tool and at the contact area with the backing plate were varied to fit experimental data such as temperature profiles, torque, and tool forces. General aspects of the experimentally visualized material flow pattern are confirmed by the 3-D model.
Tjolleng, Amir; Jung, Kihyo; Hong, Wongi; Lee, Wonsup; Lee, Baekhee; You, Heecheon; Son, Joonwoo; Park, Seikwon
2017-03-01
An artificial neural network (ANN) model was developed in the present study to classify the level of a driver's cognitive workload based on electrocardiography (ECG). ECG signals were measured on 15 male participants while they performed a simulated driving task as a primary task with/without an N-back task as a secondary task. Three time-domain ECG measures (mean inter-beat interval (IBI), standard deviation of IBIs, and root mean squared difference of adjacent IBIs) and three frequency-domain ECG measures (power in low frequency, power in high frequency, and ratio of power in low and high frequencies) were calculated. To compensate for individual differences in heart response during the driving tasks, a three-step data processing procedure was applied to the ECG signals of each participant: (1) selection of the two most sensitive ECG measures, (2) definition of three (low, medium, and high) cognitive workload levels, and (3) normalization of the selected ECG measures. An ANN model was constructed using a feed-forward network and scaled conjugate gradient as a back-propagation learning rule. The accuracy of the ANN classification model was found satisfactory for learning data (95%) and testing data (82%). Copyright © 2016 Elsevier Ltd. All rights reserved.
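A sketch of the pipeline under stated assumptions: the time-domain features follow the standard definitions (mean IBI, SDNN, RMSSD), the data are placeholders, and scikit-learn's MLP is trained with its default solver rather than the scaled conjugate gradient rule used in the study:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    def time_domain_features(ibi_ms):
        # Mean IBI, SDNN, and RMSSD from one window of inter-beat intervals (ms).
        ibi = np.asarray(ibi_ms, dtype=float)
        diffs = np.diff(ibi)
        return np.array([ibi.mean(), ibi.std(ddof=1), np.sqrt(np.mean(diffs**2))])

    rng = np.random.default_rng(0)
    windows = rng.normal(850, 50, size=(90, 60))   # placeholder IBI windows (ms)
    X = np.vstack([time_domain_features(w) for w in windows])
    y = rng.integers(0, 3, size=90)                # placeholder workload levels 0/1/2

    X = StandardScaler().fit_transform(X)          # normalization step
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
    print(clf.score(X, y))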
Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.
2003-01-01
A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
NASA Technical Reports Server (NTRS)
Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2008-01-01
This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing stems from limitations in the seq-gen modeling, which result in incorrect DKF generation that must then be cleaned up in post-processing.
Tying Resource Allocation and TQM into Planning and Assessment Efforts.
ERIC Educational Resources Information Center
Mullendore, Richard H.; Wang, Li-Shing
1996-01-01
Describes the evolution of a model, developed by student affairs officials, which outlines a planning process for implementing Total Quality Management. Presents step-by-step instructions for the model's deployment and discusses such issues as transitions, planning forms, goals, and professional and personal growth needs. (RJM)
Dynamic Modeling of the Main Blow in Basic Oxygen Steelmaking Using Measured Step Responses
NASA Astrophysics Data System (ADS)
Kattenbelt, Carolien; Roffel, B.
2008-10-01
In the control and optimization of basic oxygen steelmaking, it is important to have an understanding of the influence of control variables on the process. However, important process variables such as the composition of the steel and slag cannot be measured continuously. The decarburization rate and the accumulation rate of oxygen, which can be derived from the generally measured waste gas flow and composition, are an indication of changes in steel and slag composition. The influence of the control variables on the decarburization rate and the accumulation rate of oxygen can best be determined in the main blow period. In this article, the measured step responses of the decarburization rate and the accumulation rate of oxygen to step changes in the oxygen blowing rate, lance height, and the addition rate of iron ore during the main blow are presented. These measured step responses are subsequently used to develop a dynamic model for the main blow. The model consists of an iron oxide and a carbon balance and an additional equation describing the influence of the lance height and the oxygen blowing rate on the decarburization rate. With this simple dynamic model, the measured step responses can be explained satisfactorily.
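Identifying a simple dynamic model from a measured step response can be sketched as fitting a first-order-plus-dead-time curve; this is an illustrative identification recipe with synthetic data, not the authors' iron oxide and carbon balance model:

    import numpy as np
    from scipy.optimize import curve_fit

    def fopdt(t, K, tau, delay):
        # First-order-plus-dead-time response to a unit step applied at t = 0.
        s = np.clip(t - delay, 0.0, None)
        return K * (1.0 - np.exp(-s / tau))

    t = np.linspace(0, 120, 61)                          # s
    y = fopdt(t, K=0.8, tau=25.0, delay=5.0)             # "measured" response
    y += np.random.default_rng(0).normal(0, 0.02, t.size)

    (K, tau, delay), _ = curve_fit(fopdt, t, y, p0=[1, 10, 1])
    print(f"gain={K:.2f}, time constant={tau:.1f} s, dead time={delay:.1f} s")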
Mathematical Modeling of Nitrous Oxide Production during Denitrifying Phosphorus Removal Process.
Liu, Yiwen; Peng, Lai; Chen, Xueming; Ni, Bing-Jie
2015-07-21
A denitrifying phosphorus removal process undergoes frequent alternating anaerobic/anoxic conditions to achieve phosphate release and uptake, during which microbial internal storage polymers (e.g., Polyhydroxyalkanoate (PHA)) could be produced and consumed dynamically. The PHA turnovers play important roles in nitrous oxide (N2O) accumulation during the denitrifying phosphorus removal process. In this work, a mathematical model is developed to describe N2O dynamics and the key role of PHA consumption on N2O accumulation during the denitrifying phosphorus removal process for the first time. In this model, the four-step anoxic storage of polyphosphate and four-step anoxic growth on PHA using nitrate, nitrite, nitric oxide (NO), and N2O consecutively by denitrifying polyphosphate accumulating organisms (DPAOs) are taken into account for describing all potential N2O accumulation steps in the denitrifying phosphorus removal process. The developed model is successfully applied to reproduce experimental data on N2O production obtained from four independent denitrifying phosphorus removal study reports with different experimental conditions. The model satisfactorily describes the N2O accumulation, nitrogen reduction, phosphate release and uptake, and PHA dynamics for all systems, suggesting the validity and applicability of the model. The results indicated a substantial role of PHA consumption in N2O accumulation due to the relatively low N2O reduction rate by using PHA during denitrifying phosphorus removal.
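A toy version of the four-step reduction chain NO3 -> NO2 -> NO -> N2O -> N2, using first-order kinetics in place of the Monod-type rates on PHA that the actual model employs; the rate constants are invented so that slow N2O reduction makes N2O accumulate:

    import numpy as np
    from scipy.integrate import solve_ivp

    k = np.array([0.8, 0.6, 1.5, 0.3])   # 1/h; small k[3] -> N2O accumulates

    def rhs(t, c):
        no3, no2, no, n2o, n2 = c
        r = k * np.array([no3, no2, no, n2o])   # rate of each reduction step
        return [-r[0], r[0] - r[1], r[1] - r[2], r[2] - r[3], r[3]]

    sol = solve_ivp(rhs, (0, 10), [50, 0, 0, 0, 0])   # mg N/L over 10 h
    print("peak N2O:", sol.y[3].max(), "mg N/L")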
The Industrial Process System Assessment (IPSA) methodology is a multiple step allocation approach for connecting information from the production line level up to the facility level and vice versa using a multiscale model of process systems. The allocation procedure assigns inpu...
Le management des projets scientifiques
NASA Astrophysics Data System (ADS)
Perrier, Françoise
2000-12-01
We describe in this paper a new approach for the management of scientific projects. This approach is the result of a long reflection carried out within the MQDP (Methodology and Quality in the Project Development) group of INSU-CNRS, and continued with Guy Serra. Our reflection began with a study of the so-called 'North-American Paradigm', which was initially considered the only relevant management model. Through our active participation in several astrophysical projects, we realized that this model could not be applied to our laboratories without major modifications. Therefore, step by step, we constructed our own methodology, making the fullest use of the human resources existing in our research field, with their habits and skills. We have also participated in various working groups in industrial and scientific organizations for the benefit of CNRS. The management model presented here is based on a systemic, complexity-oriented approach, which allows us to describe the multiple aspects of a scientific project, especially taking into account the human dimension. The project system model includes three major interconnected systems, immersed within an environment that both influences and is influenced by them: the 'System to be Realized', which defines the scientific and technical tasks leading to the scientific goals; the 'Realizing System', which describes procedures, processes and organization; and the 'Actors' System', which implements and drives all the processes. Each exists only through a series of successive models, elaborated at predefined dates of the project called 'key-points'. These systems evolve with time and under often unpredictable circumstances, and the models have to take this into account. At each key-point, every model is compared with reality, and the difference between predicted and realized tasks is evaluated in order to define the data for the next model. This model can be applied to any kind of project.
Data-based control of a multi-step forming process
NASA Astrophysics Data System (ADS)
Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.
2017-09-01
The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. In the field of forming technology, however, it has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multistage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how an interface between the different forming machines can be designed in practice, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes that are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for improved control of the subsequent process. On the basis of the scientific knowledge gained, it is possible to make forming operations more robust and at the same time more flexible, and thus create the foundation for linking various production processes in an efficient way.
On the biophysics and kinetics of toehold-mediated DNA strand displacement
Srinivas, Niranjan; Ouldridge, Thomas E.; Šulc, Petr; Schaeffer, Joseph M.; Yurke, Bernard; Louis, Ard A.; Doye, Jonathan P. K.; Winfree, Erik
2013-01-01
Dynamic DNA nanotechnology often uses toehold-mediated strand displacement for controlling reaction kinetics. Although the dependence of strand displacement kinetics on toehold length has been experimentally characterized and phenomenologically modeled, detailed biophysical understanding has remained elusive. Here, we study strand displacement at multiple levels of detail, using an intuitive model of a random walk on a 1D energy landscape, a secondary structure kinetics model with single base-pair steps and a coarse-grained molecular model that incorporates 3D geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Two factors explain the dependence of strand displacement kinetics on toehold length: (i) the physical process by which a single step of branch migration occurs is significantly slower than the fraying of a single base pair and (ii) initiating branch migration incurs a thermodynamic penalty, not captured by state-of-the-art nearest neighbor models of DNA, due to the additional overhang it engenders at the junction. Our findings are consistent with previously measured or inferred rates for hybridization, fraying and branch migration, and they provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems. PMID:24019238
Danker, Jared F; Anderson, John R
2007-04-15
In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.
Temporal differentiation and the optimization of system output
NASA Astrophysics Data System (ADS)
Tannenbaum, Emmanuel
2008-01-01
We develop two simplified dynamical models with which to explore the conditions under which temporal differentiation leads to increased system output. By temporal differentiation, we mean a division of labor whereby different subtasks associated with performing a given task are done at different times. The idea is that, by focusing on one particular set of subtasks at a time, it is possible to increase the efficiency with which each subtask is performed, thereby allowing for faster completion of the overall task. In the first model, we consider the filling and emptying of a tank in the presence of a time-varying resource profile. If a given resource is available, the tank may be filled at some rate rf. As long as the tank contains a resource, it may be emptied at a rate re, corresponding to processing into some product, which is either the final product of a process or an intermediate that is transported for further processing. Given a resource-availability profile over some time interval T, we develop an algorithm for determining the fill-empty profile that produces the maximum quantity of processed resource at the end of the time interval. We rigorously prove that the basic algorithm is one where the tank is filled when a resource is available and emptied when a resource is not available. In the second model, we consider a process whereby some resource is converted into some final product in a series of three agent-mediated steps. Temporal differentiation is incorporated by allowing the agents to oscillate between performing the first two steps and performing the last step. We find that temporal differentiation is favored when the number of agents is at intermediate values and when there are process intermediates that have long lifetimes compared to other characteristic time scales in the system. Based on these results, we speculate that temporal differentiation may provide an evolutionary basis for the emergence of phenomena such as sleep, distinct REM and non-REM sleep states, and circadian rhythms in general. The essential argument is that in sufficiently complex biological systems, a maximal amount of information and tasks can be processed and completed if the system follows a temporally differentiated “work plan,” whereby the system focuses on one or a few tasks at a time.
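The optimal schedule the authors prove for the first model (fill whenever the resource is available, empty whenever it is not) is easy to state in code; a discrete-time sketch:

    def max_output(availability, rf, re):
        # availability: one boolean per unit time step.
        tank, processed = 0.0, 0.0
        for available in availability:
            if available:
                tank += rf                # fill at rate rf while the resource is present
            else:
                drained = min(tank, re)   # empty at rate re, limited by tank contents
                tank -= drained
                processed += drained
        return processed

    profile = [True] * 3 + [False] * 2 + [True] * 2 + [False] * 4
    print(max_output(profile, rf=1.0, re=0.8))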
Wang, Guangye; Huang, Wenjun; Song, Qi; Liang, Jinfeng
2017-11-01
This study aims to analyze the contact areas and pressure distributions between the femoral head and the acetabulum during normal walking using a three-dimensional finite element model (3D-FEM). Computed tomography (CT) scanning technology and a computer image processing system were used to establish the 3D-FEM. The acetabular model was used to simulate the pressures during 32 consecutive normal walking phases, and the contact areas at different phases were calculated. The distribution of the pressure peak values during the 32 consecutive walking phases was bimodal, reaching its maximum (4.2 MPa) at the initial phase, where the contact area was significantly larger than at the stepping phase. The sites that remained in contact throughout were concentrated on the acetabular top and leaned inwards, while the anterior and posterior acetabular horns showed no pressure concentration. The pressure distributions over the acetabular cartilage differed significantly between phases: the zone of increased pressure during the support phase was located at the acetabular top, while during the stepping phase it was located on the inside of the acetabular cartilage. The zones of increased contact pressure and the distributions of acetabular contact areas are significant for clinical research and could point to factors that induce acetabular osteoarthritis. Copyright © 2016. Published by Elsevier Taiwan.
A method for scenario-based risk assessment for robust aerospace systems
NASA Astrophysics Data System (ADS)
Thomas, Victoria Katherine
In years past, aircraft conceptual design centered around creating a feasible aircraft that could be built and could fly the required missions. More recently, aircraft viability entered into conceptual design, allowing that the product's potential to be profitable should also be examined early in the design process. While examining an aerospace system's feasibility and viability early in the design process is extremely important, it is also important to examine system risk. In traditional aerospace systems risk analysis, risk is examined from the perspective of performance, schedule, and cost. Recently, safety and reliability analysis have been brought forward in the design process to also be examined during late conceptual and early preliminary design. While these analyses work as designed, existing risk analysis methods and techniques are not designed to examine an aerospace system's external operating environment and the risks present there. A new method has been developed here to examine, during the early part of concept design, the risk associated with not meeting assumptions about the system's external operating environment. The risks are examined in five categories: employment, culture, government and politics, economics, and technology. The risks are examined over a long time-period, up to the system's entire life cycle. The method consists of eight steps over three focus areas. The first focus area is Problem Setup. During problem setup, the problem is defined and understood to the best of the decision maker's ability. There are four steps in this area, in the following order: Establish the Need, Scenario Development, Identify Solution Alternatives, and Uncertainty and Risk Identification. There is significant iteration between steps two through four. Focus area two is Modeling and Simulation. In this area the solution alternatives and risks are modeled, and a numerical value for risk is calculated. A risk mitigation model is also created. The four steps involved in completing the modeling and simulation are: Alternative Solution Modeling, Uncertainty Quantification, Risk Assessment, and Risk Mitigation. Focus area three consists of Decision Support. In this area a decision support interface is created that allows for game playing between solution alternatives and risk mitigation. A multi-attribute decision making process is also implemented to aid in decision making. A demonstration problem inspired by Airbus' mid 1980s decision to break into the widebody long-range market was developed to illustrate the use of this method. The results showed that the method is able to capture additional types of risk than previous analysis methods, particularly at the early stages of aircraft design. It was also shown that the method can be used to help create a system that is robust to external environmental factors. The addition of an external environment risk analysis in the early stages of conceptual design can add another dimension to the analysis of feasibility and viability. The ability to take risk into account during the early stages of the design process can allow for the elimination of potentially feasible and viable but too-risky alternatives. The addition of a scenario-based analysis instead of a traditional probabilistic analysis enabled uncertainty to be effectively bound and examined over a variety of potential futures instead of only a single future. 
There is also potential for a product to be groomed for a specific future that one believes is likely to happen, or for a product to be steered during design as the future unfolds.
Physical Human Activity Recognition Using Wearable Sensors.
Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine
2015-12-11
This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.
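A hedged sketch of the best-performing supervised pipeline (feature normalization followed by k-NN), with placeholder windowed accelerometer features standing in for the real chest, thigh, and ankle data:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 18))    # 3 sensors x 3 axes x 2 window statistics
    y = rng.integers(0, 6, size=300)  # six hypothetical activity classes

    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    print(cross_val_score(clf, X, y, cv=5).mean())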
Three Empirical Strategies for Teaching Statistics
ERIC Educational Resources Information Center
Marson, Stephen M.
2007-01-01
This paper employs a three-step process to analyze three empirically supported strategies for teaching statistics to BSW students. The strategies included: repetition, immediate feedback, and use of original data. First, each strategy is addressed through the literature. Second, the application of employing each of the strategies over the period…
Application of a 2-step process for the biological treatment of sulfidic spent caustics.
de Graaff, Marco; Klok, Johannes B M; Bijmans, Martijn F M; Muyzer, Gerard; Janssen, Albert J H
2012-03-01
This research demonstrates the feasibility and advantages of a 2-step process for the biological treatment of sulfidic spent caustics under halo-alkaline conditions (i.e. pH 9.5; Na(+) = 0.8 M). Experiments with synthetically prepared solutions were performed in a continuously fed system consisting of two gas-lift reactors in series operated at aerobic conditions at 35 °C. The detoxification of sulfide to thiosulfate in the first step allowed the successful biological treatment of total-S loading rates up to 33 mmol L(-1) day(-1). In the second, biological step, the remaining sulfide and thiosulfate was completely converted to sulfate by haloalkaliphilic sulfide oxidizing bacteria. Mathematical modeling of the 2-step process shows that under the prevailing conditions an optimal reactor configuration consists of 40% 'abiotic' and 60% 'biological' volume, whilst the total reactor volume is 22% smaller than for the 1-step process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wilmanski, Tomasz; Barnard, Alle; Parikh, Mukti R; Kirshner, Julia; Buhman, Kimberly; Burgess, John; Teegarden, Dorothy
2016-10-01
Breast cancer metastasis to the bone continues to be a major health problem, with approximately 80% of advanced breast cancer patients expected to develop bone metastasis. Although the problem of bone metastasis persists, current treatment options for metastatic cancer patients are limited. In this study, we investigated the preventive role of the active vitamin D metabolite, 1α,25-dihydroxyvitamin D (1,25(OH)2D), against the metastatic potential of breast cancer cells using a novel three-dimensional model (rMET) recapitulating multiple steps of the bone metastatic process. Treatment of MCF10CA1a and MDA-MB-231 cells inhibited metastasis in the rMET model by 70% (±5.7%) and 21% (±6%), respectively. In addition, 1,25(OH)2D treatment decreased invasiveness (20 ± 11% of vehicle) and decreased the capability of MCF10CA1a cells to survive in the reconstructed bone environment after successful invasion through the basement membrane (69 ± 5% of vehicle). An essential step in metastasis is epithelial-mesenchymal transition (EMT). Treatment of MCF10CA1a cells with 1,25(OH)2D increased gene (2.04 ± 0.28-fold increase) and protein (1.87 ± 0.20-fold increase) expression of E-cadherin. Additionally, 1,25(OH)2D treatment decreased N-cadherin gene expression (42 ± 8% decrease), a marker for EMT. Collectively, the present study suggests that 1,25(OH)2D inhibits breast cancer cell metastatic capability as well as inhibits EMT, an essential step in the metastatic process.
Gkigkitzis, Ioannis
2013-01-01
The aim of this report is to provide a mathematical model of the mechanism for making binary fate decisions about cell death or survival, during and after Photodynamic Therapy (PDT) treatment, and to supply the logical design for this decision mechanism as an application of rate distortion theory to the biochemical processing of information by the physical system of a cell. Based on previously established systems biology models of the molecular interactions involved in the PDT processes, and regarding a cellular decision-making system as a noisy communication channel, we use rate distortion theory to design a time-dependent Blahut-Arimoto algorithm in which the input is a stimulus vector composed of the time-dependent concentrations of three PDT-related cell death signaling molecules and the output is a cell fate decision. The molecular concentrations are determined by a group of rate equations. The basic steps are: initialize the probability distribution of the cell fate decision; compute the conditional probability distribution that minimizes the mutual information between input and output; compute the output (cell fate) probability distribution that minimizes the mutual information; and repeat the last two steps until the probabilities converge. Then advance to the next discrete time point and repeat the process. Based on this communication-theoretic model, and assuming that death signal processing is activated when any of the molecular stimulants rises above a predefined threshold (50% of the maximum concentration), for 1800 s of treatment the cell undergoes necrosis within the first 30 minutes with probability in the range 90.0%-99.99%, and in the case of repair/survival it goes through apoptosis within 3-4 hours with probability in the range 90.00%-99.00%. Although there is no experimental validation of the model at this moment, it reproduces some patterns of survival ratios reported in experimental data. Analytical modeling based on cell death signaling molecules has been shown to be an independent and useful tool for predicting the cell survival response to PDT. The model can be adjusted to provide important insights into cellular responses to other treatments, such as hyperthermia, and diseases such as neurodegeneration.
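The inner loop described above is the classical Blahut-Arimoto iteration for rate-distortion; a generic (not time-dependent) sketch with an invented stimulus distribution and distortion matrix:

    import numpy as np

    def blahut_arimoto(p_x, d, beta, tol=1e-9, max_iter=1000):
        # p_x: input (stimulus) distribution, shape (nx,)
        # d:   distortion matrix d[x, y], shape (nx, ny)
        # beta: trade-off parameter; larger beta favors lower distortion
        nx, ny = d.shape
        q_y = np.full(ny, 1.0 / ny)              # initialize output distribution
        for _ in range(max_iter):
            w = q_y * np.exp(-beta * d)          # conditional minimizing mutual info
            p_y_given_x = w / w.sum(axis=1, keepdims=True)
            q_new = p_x @ p_y_given_x            # update output marginal
            if np.abs(q_new - q_y).max() < tol:  # repeat until convergence
                break
            q_y = q_new
        return p_y_given_x, q_y

    p_x = np.array([0.5, 0.3, 0.2])                     # three stimulus states
    d = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])  # outputs: death, survival
    print(blahut_arimoto(p_x, d, beta=2.0)[1])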
Audiovisual integration increases the intentional step synchronization of side-by-side walkers.
Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A
2017-12-01
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
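The MLE rule referred to here weights each cue by its inverse variance (its reliability); a minimal sketch with hypothetical auditory and visual step-timing estimates:

    import numpy as np

    def mle_combine(estimates, variances):
        # Optimal fusion of independent Gaussian cues: inverse-variance weighting.
        var = np.asarray(variances, dtype=float)
        w = (1.0 / var) / np.sum(1.0 / var)
        fused_var = 1.0 / np.sum(1.0 / var)   # never worse than the best single cue
        return float(np.dot(w, estimates)), float(fused_var)

    t_hat, var = mle_combine([0.52, 0.60], [0.01**2, 0.03**2])
    print(t_hat, var)   # the fused estimate lies closer to the more reliable cue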
Ni, Bing-Jie; Ruscalleda, Maël; Pellicer-Nàcher, Carles; Smets, Barth F
2011-09-15
Nitrous oxide (N(2)O) can be formed during biological nitrogen (N) removal processes. In this work, a mathematical model is developed that describes N(2)O production and consumption during activated sludge nitrification and denitrification. The well-known ASM process models are extended to capture N(2)O dynamics during both nitrification and denitrification in biological N removal. Six additional processes and three additional reactants, all involved in known biochemical reactions, have been added. The validity and applicability of the model is demonstrated by comparing simulations with experimental data on N(2)O production from four different mixed culture nitrification and denitrification reactor study reports. Modeling results confirm that hydroxylamine oxidation by ammonium oxidizers (AOB) occurs 10 times slower when NO(2)(-) participates as final electron acceptor compared to the oxic pathway. Among the four denitrification steps, the last one (N(2)O reduction to N(2)) seems to be inhibited first when O(2) is present. Overall, N(2)O production can account for 0.1-25% of the consumed N in different nitrification and denitrification systems, which can be well simulated by the proposed model. In conclusion, we provide a modeling structure, which adequately captures N(2)O dynamics in autotrophic nitrification and heterotrophic denitrification driven biological N removal processes and which can form the basis for ongoing refinements.
NASA Astrophysics Data System (ADS)
Guan, Mingfu; Ahilan, Sangaralingam; Yu, Dapeng; Peng, Yong; Wright, Nigel
2018-01-01
Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track the hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in flow bodies, and erosion and deposition caused by sediment transport. The model is governed by 2D full shallow water equations with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on local sediment entrainment and settling flux in flow bodies. The model is initially validated with the three laboratory-scale experimental events where suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability in capturing hydro-morphodynamic processes dominated by fine suspended sediment at laboratory-scale. Applications to sedimentation in a stormwater pond are conducted to develop the process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return period and the extreme flood event in 2012 are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task, and many avenues can be explored, among them improving the spatial representation, searching for more robust parametrizations, formulating some processes better, or modifying model structures by trial and error. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and of the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
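Why modelled fluxes can depend on the time step is easy to reproduce with a toy interception store (not the actual GR formulation): aggregating the same forcing to a coarser step changes the simulated interception flux. A sketch with illustrative capacity and forcing:

```python
import numpy as np

def interception(rain, pet, capacity=2.0):
    """Toy interception store: rain fills the store up to `capacity`,
    potential evaporation empties it; returns the total evaporated flux."""
    store, evaporated = 0.0, 0.0
    for p, e in zip(rain, pet):
        store = min(store + p, capacity)
        loss = min(store, e)
        store -= loss
        evaporated += loss
    return evaporated

rng = np.random.default_rng(0)
rain_6min = rng.exponential(0.05, 240)     # one day at a 6-min step
pet_6min = np.full(240, 0.01)
daily = interception([rain_6min.sum()], [pet_6min.sum()])
subdaily = interception(rain_6min, pet_6min)
print(daily, subdaily)                     # the flux differs with time step
```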
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. In the first stage, three interferograms of a test object are recorded and processed, the unknown phase shifts are calculated, and a coarse PM is retrieved. In the second stage, the high-frequency and low-frequency PMs are first separated, and a fine PM consisting of areal surface roughness and waviness PMs is then produced. The areal surface roughness and waviness PMs are extracted with a linear low-pass filter. Computer simulation and experiments retrieving a gauge-block surface area and its areal surface roughness and waviness confirmed the reliability of the proposed three-step method.
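For the special case where the three phase shifts are already known (0, d1, d2), the wrapped PM follows in closed form; a sketch below, noting that the paper's blind method first estimates the unknown shifts from interframe correlation before this step:

```python
import numpy as np

def three_step_phase(i1, i2, i3, d1, d2):
    """Recover the wrapped phase from three fringe patterns
    I_k = A + B*cos(phi + delta_k) with known shifts (0, d1, d2).
    Solves pixel-wise for (A, B*cos(phi), B*sin(phi)), then atan2."""
    M = np.array([[1.0, 1.0, 0.0],
                  [1.0, np.cos(d1), -np.sin(d1)],
                  [1.0, np.cos(d2), -np.sin(d2)]])
    Minv = np.linalg.inv(M)
    stack = np.stack([i1, i2, i3])                 # shape (3, H, W)
    a, c, s = np.tensordot(Minv, stack, axes=1)    # unmix per pixel
    return np.arctan2(s, c)                        # wrapped phase map

# Synthetic check: a tilted phase plane recovered up to wrapping.
x = np.linspace(0, 4*np.pi, 256)
phi = np.tile(x, (256, 1))
frames = [5 + 2*np.cos(phi + d) for d in (0.0, 1.9, 3.7)]
pm = three_step_phase(*frames, d1=1.9, d2=3.7)
```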
Bibliographic Instruction in a Step-by-Step Approach.
ERIC Educational Resources Information Center
Soash, Richard L.
1992-01-01
Describes an information search process based on Kuhlthau's model that was used to teach bibliographic research to ninth grade students. A research test to ensure that students are familiar with basic library skills is presented, forms for helping students narrow the topic and evaluate materials are provided, and a research process checklist is…
ERIC Educational Resources Information Center
Frazier, Thomas W.; Youngstrom, Eric A.
2006-01-01
In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…
A new MRI land surface model HAL
NASA Astrophysics Data System (ADS)
Hosaka, M.
2011-12-01
A land surface model, HAL, has been newly developed for MRI-ESM1 and is used for the CMIP simulations. HAL consists of three submodels in the current version: SiByl (vegetation), SNOWA (snow) and SOILA (soil). It also contains a land coupler, LCUP, which connects the submodels with the atmospheric model. The vegetation submodel SiByl has surface vegetation processes similar to JMA/SiB (Sato et al. 1987, Hirai et al. 2007). SiByl has two vegetation layers (canopy and grass) and calculates heat, moisture, and momentum fluxes between the land surface and the atmosphere. The snow submodel SNOWA can have any number of snow layers; the maximum is set to 8 for the CMIP5 experiments. Temperature, SWE, density, grain size and the aerosol deposition content of each layer are predicted. The snow properties, including the grain size, are predicted by snow metamorphism processes (Niwano et al., 2011), and the snow albedo is diagnosed from the aerosol mixing ratio, the snow properties and the temperature (Aoki et al., 2011). The soil submodel SOILA can also have any number of soil layers and is composed of 14 soil layers in the CMIP5 experiments. The temperature of each layer is predicted by solving heat conduction equations. The soil moisture is predicted by solving the Darcy equation, in which the hydraulic conductivity depends on the soil moisture. The land coupler LCUP is designed to enable complex configurations of the submodels. HAL can include competing submodels (detailed ones and simpler ones), which can run in the same simulation. LCUP thus enables a two-step model validation: in the first step, the results of the detailed submodels are compared directly with in-situ observations; in the second step, they are compared with the results of the simpler submodels. When the detailed submodels perform well, the simpler ones can be improved by using the detailed ones as reference models.
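As a rough illustration of the soil temperature component, a generic backward-Euler heat-conduction step for a 14-layer column is sketched below; the uniform layer thickness and constant diffusivity are simplifying assumptions not made by SOILA itself.

```python
import numpy as np

def implicit_heat_step(T, kappa, dz, dt, t_surface):
    """One backward-Euler step of dT/dt = kappa * d2T/dz2 on a soil column.
    Dirichlet condition at the surface, zero-flux at the bottom."""
    n = len(T)
    r = kappa * dt / dz**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2*r
        if i > 0:
            A[i, i-1] = -r
        if i < n - 1:
            A[i, i+1] = -r
    A[-1, -1] = 1 + r          # zero-flux bottom boundary
    b = T.copy()
    b[0] += r * t_surface      # prescribed surface temperature
    return np.linalg.solve(A, b)

T = np.full(14, 283.0)         # 14 layers, as in the CMIP5 configuration
for _ in range(24):            # one day of hourly steps
    T = implicit_heat_step(T, kappa=1e-6, dz=0.1, dt=3600.0, t_surface=290.0)
```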
Enhancing the Referral-Making Process to 12-Step Programs: Strategies for Social Workers
ERIC Educational Resources Information Center
Dennis, Cory B.; Davis, Thomas D.
2017-01-01
Objectives: This study examines three preparatory strategies that can be used during treatment sessions to bridge the gap between clinician recommendations for client participation in 12-step programs (TSPs) and client adherence to such recommendations. Methods: A sample of 284 clinicians completed an online survey. Clinicians responded to items…
Preparing for High Technology: 30 Steps to Implementation. Research & Development Series No. 232.
ERIC Educational Resources Information Center
Abram, Robert; And Others
This planning guide is one of three that addresses the concerns of postsecondary college administrators and planners regarding the planning and implementation of technician training programs in high technology areas. It specifically focuses on a 30-step planning process that is generalizable to various high technology areas. (The other two…
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main reasons for the degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main reasons for the degradation. Methods: In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: The results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images. PMID:29535920
NASA Astrophysics Data System (ADS)
Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao
2018-01-01
For faded relics, such as the Terracotta Army, the 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) point cloud model collected by Handyscan3D. We first introduce nonuniform multiview calibration, explaining its algorithmic principle and analysing its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence, and describe the selection of nonuniform multiview SIFT feature points in detail. Finally, the solution of the collinearity equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, the method is applied to the color reconstruction of the kneeling figurine, the Tangsancai lady, and the general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and improves the accuracy of 2D-3D registration between the image sequence and the point cloud model.
Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview
Li, Yaohang
2013-01-01
Accurately modeling protein loops is an important step in predicting three-dimensional structures as well as in understanding the functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances in computational methods as well as the steadily increasing number of known structures available in the PDB. This mini review provides an overview of the recent computational approaches for loop structure modeling. In particular, we focus on approaches for sampling the loop conformation space, which is a critical step in obtaining high-resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. Recent loop modeling results are also summarized. PMID:24688696
Conformational sampling in template-free protein loop structure modeling: an overview.
Li, Yaohang
2013-01-01
Accurately modeling protein loops is an important step in predicting three-dimensional structures as well as in understanding the functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances in computational methods as well as the steadily increasing number of known structures available in the PDB. This mini review provides an overview of the recent computational approaches for loop structure modeling. In particular, we focus on approaches for sampling the loop conformation space, which is a critical step in obtaining high-resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. Recent loop modeling results are also summarized.
Conceptual-level workflow modeling of scientific experiments using NMR as a case study
Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R
2007-01-01
Background: Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results: We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance (NMR) spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using NMR spectroscopy. Conclusion: Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting a biomolecular analysis experiment using NMR spectroscopy. PMID:17263870
Conceptual-level workflow modeling of scientific experiments using NMR as a case study.
Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R
2007-01-30
Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance (NMR) spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using NMR spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting a biomolecular analysis experiment using NMR spectroscopy.
Chen, Yumiao; Yang, Zhongliang
2017-01-01
Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing the drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as a time series while 7-channel EMG signals were recorded. Root Mean Square (RMS), a widely used time-domain feature, was extracted over the analysis window. Preliminary reconstruction models were then established by GEP, and the original drawing traces were approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in the within-group design (one set of prediction models for all shapes) and 86% in the between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. We conclude that the proposed three-step hybrid model can feasibly improve the reconstruction of drawing traces from sEMG.
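A sketch of the windowed RMS feature extraction step, assuming a generic window length and stride (the paper's exact windowing parameters are not restated here):

```python
import numpy as np

def rms_features(emg, win=200, step=50):
    """Sliding-window Root Mean Square of a multichannel sEMG array of
    shape (n_samples, n_channels); returns shape (n_windows, n_channels)."""
    starts = range(0, emg.shape[0] - win + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s+win]**2, axis=0)) for s in starts])

emg = np.random.randn(5000, 7)     # 7 channels, as in the study
feats = rms_features(emg)          # one RMS value per window per channel
```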
Use of Intervention Mapping to Enhance Health Care Professional Practice: A Systematic Review.
Durks, Desire; Fernandez-Llimos, Fernando; Hossain, Lutfun N; Franco-Trigo, Lucia; Benrimoj, Shalom I; Sabater-Hernández, Daniel
2017-08-01
Intervention Mapping is a planning protocol for developing behavior change interventions, the first three steps of which are intended to establish the foundations and rationales of such interventions. This systematic review aimed to identify programs that used Intervention Mapping to plan changes in health care professional practice. Specifically, it provides an analysis of the information provided by the programs in the first three steps of the protocol to determine their foundations and rationales of change. A literature search was undertaken in PubMed, Scopus, SciELO, and DOAJ using "Intervention Mapping" as keyword. Key information was gathered, including theories used, determinants of practice, research methodologies, theory-based methods, and practical applications. Seventeen programs aimed at changing a range of health care practices were included. The social cognitive theory and the theory of planned behavior were the most frequently used frameworks in driving change within health care practices. Programs used a large variety of research methodologies to identify determinants of practice. Specific theory-based methods (e.g., modelling and active learning) and practical applications (e.g., health care professional training and facilitation) were reported to inform the development of practice change interventions and programs. In practice, Intervention Mapping delineates a three-step systematic, theory- and evidence-driven process for establishing the theoretical foundations and rationales underpinning change in health care professional practice. The use of Intervention Mapping can provide health care planners with useful guidelines for the theoretical development of practice change interventions and programs.
Exploring patient satisfaction predictors in relation to a theoretical model.
Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil
2013-01-01
The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors (person-related conditions, external objective care conditions and patients' perception of actual care received, "PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from Patients' Perspective; Sense of Coherence; Big Five personality traits; and the Emotional Stress Reaction Questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistically significantly to the model. Patients rated both quality of care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
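A sketch of the three-step sequential (hierarchical) regression design, assuming statsmodels and synthetic stand-ins for the person-related, external, and perception blocks; the incremental R² per step mirrors the reported analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 373
person = rng.normal(size=(n, 3))          # block 1: person-related conditions
external = rng.normal(size=(n, 2))        # block 2: external care conditions
perception = rng.normal(size=(n, 1))      # block 3: perceived care received
y = person @ np.array([0.6, 0.4, 0.3]) + 0.2 * external[:, 0] + rng.normal(size=n)

r2 = []
blocks = [person, np.hstack([person, external]),
          np.hstack([person, external, perception])]
for X in blocks:
    model = sm.OLS(y, sm.add_constant(X)).fit()
    r2.append(model.rsquared)

print(r2[0], r2[1] - r2[0], r2[2] - r2[1])  # R2 of step 1, then increments
```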
Proposed best modeling practices for assessing the effects of ecosystem restoration on fish
Rose, Kenneth A; Sable, Shaye; DeAngelis, Donald L.; Yurek, Simeon; Trexler, Joel C.; Graf, William L.; Reed, Denise J.
2015-01-01
Large-scale aquatic ecosystem restoration is increasing and is often controversial because of the economic costs involved, with the focus of the controversies gravitating to the modeling of fish responses. We present a scheme of best practices for selecting, implementing, interpreting, and reporting fish modeling designed to assess the effects of restoration actions on fish populations and aquatic food webs. Previous, more general best practice schemes are summarized, and they form the foundation for our scheme, which is specifically tailored to fish and restoration. We then present a 31-step scheme, with supporting text and narrative for each step, which goes from understanding how the results will be used through post-auditing to ensure the approach is used effectively in subsequent applications. We also describe 13 concepts that need to be considered in parallel to these best practice steps. Examples of these concepts include: life cycles and strategies; variability and uncertainty; nonequilibrium theory; biological, temporal, and spatial scaling; explicit versus implicit representation of processes; and model validation. These concepts are often not considered or not explicitly stated, and casual treatment of them leads to miscommunication and misunderstandings, which, in turn, often underlie the resulting controversies. We illustrate a subset of these steps, and their associated concepts, using the three case studies of Glen Canyon Dam on the Colorado River, the wetlands of coastal Louisiana, and the Everglades. Use of our proposed scheme will require the investment of additional time, effort, and money to be done effectively. We argue that such an investment is well worth it and will more than pay for itself in the long run through effective and efficient restoration actions and likely avoided controversies and legal proceedings.
Gillet, P; Rapaille, A; Benoît, A; Ceinos, M; Bertrand, O; de Bouyalsky, I; Govaerts, B; Lambermont, M
2015-01-01
Whole blood donation is generally safe, although vasovagal reactions (VVRs) can occur (in approximately 1% of donations). The risk factors are well known and prevention measures have been shown to be efficient. This study evaluates the impact of a vasovagal reaction at the first donation on donor retention over the first three blood donations. Our analysis of data collected over three years evaluated the impact of the classical risk factors and provided a model including the best combination of covariates predicting VVRs. The impact of a reaction at the first donation on the return rate and on complications up to the third donation was evaluated. Our data (523,471 donations) confirmed the classical risk factors (gender, age, donor status and relative blood volume). After stepwise variable selection, donor status, relative blood volume and their interaction were the only covariates remaining in the model. For 33,279 first-time donors monitored over a period of at least 15 months, the first three donations were followed. The data emphasised the impact of a complication at the first donation: the return rate for a second donation was reduced and the risk of a vasovagal reaction was increased at least until the third donation. First-time donation is a crucial step in a donor's career. Donors who experienced a reaction at their first donation have a lower return rate for a second donation and a higher risk of a vasovagal reaction at least until the third donation. Prevention measures have to be implemented to improve donor retention and provide blood banks with an adequate blood supply. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven
2017-01-01
Motivation: Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in exhaled breath. From the raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention by human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method: We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting, and determine which algorithmic combinations lead to high AUC values in disease classification across the different medical application scenarios. Results: The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable unbiased, high-throughput use of the technology. PMID:28910313
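A sketch of the final classification step under cross-validation, assuming scikit-learn and a mock peak-intensity matrix standing in for the output of the peak detection and clustering stages; Random Forest is the classifier the study found best:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 50))      # mock peak-intensity matrix (samples x peaks)
y = rng.integers(0, 2, size=100)    # mock disease labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean())                   # unbiased AUC estimate across folds
```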
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist, based on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and, finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne point cloud data. It is based on the assumption that individual buildings can be modeled properly as a composition of a set of planar faces, and hence on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
Atmospheric flow over two-dimensional bluff surface obstructions
NASA Technical Reports Server (NTRS)
Bitte, J.; Frost, W.
1976-01-01
The phenomenon of atmospheric flow over a two-dimensional surface obstruction, such as a building (modeled as a rectangular block, a fence or a forward-facing step), is analyzed by three methods: (1) an inviscid free streamline approach; (2) a turbulent boundary layer approach using an eddy viscosity turbulence model and a horizontal pressure gradient determined by the inviscid model; and (3) an approach using the full Navier-Stokes equations with three turbulence models, i.e., an eddy viscosity model, a turbulence kinetic-energy model and a two-equation model with an additional transport equation for the turbulence length scale. A comparison of the performance of the different turbulence models is given, indicating that only the two-equation model adequately accounts for the convective character of turbulence. Turbulence flow property predictions obtained from the turbulence kinetic-energy model with a prescribed length scale are only insignificantly better than those obtained from the eddy viscosity model. A parametric study covers the effects of varying the characteristic parameters of the assumed logarithmic approach velocity profile. For the case of the forward-facing step, it is shown that in the downstream flow region an increase in the surface roughness gives rise to higher turbulence levels in the shear layer originating from the step corner.
The Rhetorical Cycle: Reading, Thinking, Speaking, Listening, Discussing, Writing.
ERIC Educational Resources Information Center
Keller, Rodney D.
The rhetorical cycle is a step-by-step approach that provides classroom experience before students actually write, thereby making the writing process less frustrating for them. This approach consists of six sequential steps: reading, thinking, speaking, listening, discussing, and finally writing. Readings serve not only as models of rhetorical…
Effects of Learning Support in Simulation-Based Physics Learning
ERIC Educational Resources Information Center
Chang, Kuo-En; Chen, Yu-Lung; Lin, He-Yan; Sung, Yao-Ting
2008-01-01
This paper describes the effects of learning support on simulation-based learning in three learning models: experiment prompting, a hypothesis menu, and step guidance. A simulation learning system was implemented based on these three models, and the differences between simulation-based learning and traditional laboratory learning were explored in…
Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2018-02-01
Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c for T_i and the dried product mass transfer resistance α_Rp for ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., a DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
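A sketch of a variance-based GSA of this kind, assuming the SALib package and a stand-in response function; the parameter names mirror the abstract, but the bounds and model are illustrative, not the validated mechanistic drying model:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["T_s", "P_c", "alpha_Rp"],
    "bounds": [[243.0, 283.0], [5.0, 30.0], [1e4, 1e5]],  # illustrative ranges
}

def mock_sublimation_rate(x):
    t_s, p_c, a_rp = x          # stand-in response, not the real model
    return 1e-3 * (t_s - 240.0) / a_rp**0.5 + 1e-5 * p_c

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(mock_sublimation_rate, 1, X)
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])       # first-order and total-order Sobol indices
```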
From fatalism to resilience: reducing disaster impacts through systematic investments.
Hill, Harvey; Wiener, John; Warner, Koko
2012-04-01
This paper describes a method for reducing the economic risks associated with predictable natural hazards by enhancing the resilience of national infrastructure systems. The three-step generalised framework is described along with examples. Step one establishes baseline economic growth without the disaster impact. Step two characterises economic growth as constrained by a disaster. Step three assesses the economy's resilience to the disaster event when it is buffered by alternative resiliency investments. The successful outcome of step three is a disaster-resistant core of infrastructure systems and social capacity better able to maintain the national economy and development after a disaster. The paper also considers ways to achieve this goal in data-limited environments, addressing that challenge by integrating physical and social data at different spatial scales into macroeconomic models. This supports the disaster risk reduction objectives of governments, donor agencies, and the United Nations International Strategy for Disaster Reduction. © 2012 The Author(s). Disasters © Overseas Development Institute, 2012.
Co-delivery of ibuprofen and gentamicin from nanoporous anodic titanium dioxide layers.
Pawlik, Anna; Jarosz, Magdalena; Syrek, Karolina; Sulka, Grzegorz D
2017-04-01
Although single-drug therapy may prove insufficient for treating bacterial infections or inflammation after orthopaedic surgery, complex therapy (using both an antibiotic and an anti-inflammatory drug) is thought to address the problem. Among drug delivery systems (DDSs) with prolonged drug release profiles, nanoporous anodic titanium dioxide (ATO) layers on Ti foil are very promising. In this research, ATO samples were synthesized via a three-step anodization process in an ethylene glycol-based electrolyte with fluoride ions. The third step lasted 2, 5 or 10 min in order to obtain different thicknesses of the nanoporous layers. Annealing the as-prepared amorphous layers at 400°C produced the anatase phase. Water-insoluble ibuprofen and water-soluble gentamicin were used as model drugs, and three different drug loading procedures were applied. The desorption-desorption-diffusion (DDD) model of drug release was fitted to the experimental data. The effects of the crystalline structure, the depth of the TiO2 nanopores and the loading procedure on the drug release profiles were examined. The duration of the drug release process can be easily altered by changing the drug loading sequence: water-soluble gentamicin is released over a long period of time if gentamicin is loaded into the ATO as the first drug. Additionally, deeper nanopores and the anatase phase suppress the initial burst release of drugs. These results confirm that factors such as the morphological and crystalline structure of the ATO layers, and the procedure of drug loading inside the nanopores, make it possible to tailor the drug release performance of nanoporous ATO layers. Copyright © 2017 Elsevier B.V. All rights reserved.
Validation of a multi-criteria evaluation model for animal welfare.
Martín, P; Czycholl, I; Buxadé, C; Krieter, J
2017-04-01
The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data on different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process: measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as the aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level; second, a sensitivity analysis of the model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, results similar to those obtained when applying the WQ protocol aggregation methods were obtained, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. Furthermore, this methodology could also be used as a framework to produce an overall assessment of welfare for other livestock species. Two main findings are obtained from the sensitivity analysis: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level; second, the MAUT model was not very sensitive to an improvement or worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
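The CI aggregation step can be sketched as a discrete Choquet integral over criterion scores; the capacity (fuzzy measure) below is illustrative, whereas the WQ capacities are fitted from decision-makers' preferences:

```python
def choquet(scores, capacity):
    """Discrete Choquet integral: sum over ascending-sorted scores of
    (x_(i) - x_(i-1)) * capacity({criteria with score >= x_(i)})."""
    items = sorted(scores, key=scores.get)   # criteria in ascending score order
    total, prev = 0.0, 0.0
    for i, c in enumerate(items):
        coalition = frozenset(items[i:])     # criteria still at or above x_(i)
        total += (scores[c] - prev) * capacity[coalition]
        prev = scores[c]
    return total

# Illustrative capacity over three welfare criteria (monotone, normalized).
cap = {
    frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.2,
    frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.5,
    frozenset({"b", "c"}): 0.6, frozenset({"a", "b", "c"}): 1.0,
}
print(choquet({"a": 0.7, "b": 0.5, "c": 0.9}, cap))  # 0.64
```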
1952-08-01
[Garbled scan of NACA TN 2762: figure 13 captions for Langley tank models 221E (a = 2°), 221G (a = 2°) and 221F; definitions of a drag coefficient based on the maximum cross-sectional area A of the hull (Drag/qA), a drag coefficient based on the surface area W of the hull (Drag/qW), and a lateral-force coefficient; hulls 221E, 221G, and 221F were drawn by the Langley Hydrodynamics Division by increasing the step depth of hull 221B of reference 1 (text truncated).]
Future aircraft networks and schedules
NASA Astrophysics Data System (ADS)
Shu, Yan
2011-07-01
Because of the importance of air transportation scheduling, the emergence of small aircraft, and the vision of future fuel-efficient aircraft, this thesis focuses on aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with traditional scheduled flight services, and proposes a three-step approach to the design of aircraft schedules and networks from scratch under this model. In the first step, a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model is constructed that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time. In the third step, a timetable model is developed that determines an exact departure time for each flight leg. Based on the models proposed in these three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving them. The instances of both the rough fleet assignment model and the timetable model are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving them. Based on these solution algorithms, the dissertation presents computational results for these large-scale instances. To validate the models and solution algorithms, this thesis also compares the daily flight schedules it designs with the schedules of existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.
NASA Technical Reports Server (NTRS)
Parkinson, J B; House, R O
1938-01-01
Tests were made in the NACA tank and in the NACA 7 by 10 foot wind tunnel on two models of transverse step floats and three models of pointed step floats considered to be suitable for use with single float seaplanes. The object of the program was the reduction of water resistance and spray of single float seaplanes without reducing the angle of dead rise believed to be necessary for the satisfactory absorption of the shock loads. The results indicated that all the models have less resistance and spray than the model of the Mark V float and that the pointed step floats are somewhat superior to the transverse step floats in these respects. Models 41-D, 61-A, and 73 were tested by the general method over a wide range of loads and speeds. The results are presented in the form of curves and charts for use in design calculations.
Subsampling for dataset optimisation
NASA Astrophysics Data System (ADS)
Ließ, Mareike
2017-04-01
Soil-landscapes have formed through the interaction of soil-forming factors and pedogenic processes. Modelling these landscapes in their pedodiversity and the underlying processes requires a representative, unbiased dataset, for model input as well as output data. However, the available datasets are often big, highly heterogeneous, and gathered for various purposes, not to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling so that it represents this n-dimensional data space well. A couple of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process; (2) different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units which then form the basis for the optimisation of an existing dataset through a sensible selection of samples (along the way, data units for which there is currently insufficient soil data available may be identified); and (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While a prerequisite for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
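A sketch of strategies (1) and (2), assuming scikit-learn: PCA stands in for the factor analysis, k-means structures the data space, and the dataset is subsampled by taking the sample nearest each cluster centre:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 12))            # heterogeneous big dataset

# (1) condense and de-correlate the data space
z = PCA(n_components=4).fit_transform(data)

# (2) structure the data space into units, then pick one representative each
km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(z)
idx = pairwise_distances_argmin(km.cluster_centers_, z)
subsample = data[idx]                         # 50 samples covering the space
```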
Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics.
Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone
2016-10-05
Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics.
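The stepwise idea can be sketched by integrating an Arrhenius diffusivity piecewise over isothermal segments, so that the characteristic diffusion length accumulates step by step; D0, Ea, and the thermal history below are illustrative, not calibrated values:

```python
import numpy as np

R = 8.314                      # J/(mol K)

def diffusivity(T, D0=1e-8, Ea=250e3):
    """Arrhenius diffusion coefficient (m^2/s); D0 and Ea are illustrative."""
    return D0 * np.exp(-Ea / (R * T))

def stepwise_diffusion_length(steps):
    """Accumulate D*t over isothermal (T, duration) steps and return the
    characteristic length sqrt(sum(D_i * t_i)) for a multi-stage history."""
    dt_sum = sum(diffusivity(T) * t for T, t in steps)
    return np.sqrt(dt_sum)

year = 3.15e7                  # seconds
history = [(1373.0, 50*year), (1473.0, 5*year)]   # storage, then reheating
print(stepwise_diffusion_length(history))          # diffusion length in metres
```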
Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics
Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone
2016-01-01
Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics. PMID:27703141
A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.
Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R
2018-01-01
There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
Disparities in the diagnostic process of Duchenne and Becker muscular dystrophy.
Holtzer, Caleb; Meaney, F John; Andrews, Jennifer; Ciafaloni, Emma; Fox, Deborah J; James, Katherine A; Lu, Zhenqiang; Miller, Lisa; Pandya, Shree; Ouyang, Lijing; Cunniff, Christopher
2011-11-01
To determine whether sociodemographic factors are associated with delays at specific steps in the diagnostic process of Duchenne and Becker muscular dystrophy. We examined abstracted medical records for 540 males from population-based surveillance sites in Arizona, Colorado, Georgia, Iowa, and western New York. We used linear regressions to model the association of three sociodemographic characteristics with age at initial medical evaluation, first creatine kinase measurement, and earliest DNA analysis while controlling for changes in the diagnostic process over time. The analytical dataset included 375 males with information on family history of Duchenne and Becker muscular dystrophy, neighborhood poverty levels, and race/ethnicity. Black and Hispanic race/ethnicity predicted older ages at initial evaluation, creatine kinase measurement, and DNA testing (P < 0.05). A positive family history of Duchenne and Becker muscular dystrophy predicted younger ages at initial evaluation, creatine kinase measurement and DNA testing (P < 0.001). Higher neighborhood poverty was associated with earlier ages of evaluation (P < 0.05). Racial and ethnic disparities in the diagnostic process for Duchenne and Becker muscular dystrophy are evident even after adjustment for family history of Duchenne and Becker muscular dystrophy and changes in the diagnostic process over time. Black and Hispanic children are initially evaluated at older ages than white children, and the gap widens at later steps in the diagnostic process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John McCord
2006-06-01
The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) initiated the Underground Test Area (UGTA) Project to assess and evaluate the effects of the underground nuclear weapons tests on groundwater beneath the Nevada Test Site (NTS) and vicinity. The framework for this evaluation is provided in Appendix VI, Revision No. 1 (December 7, 2000) of the Federal Facility Agreement and Consent Order (FFACO, 1996). Section 3.0 of Appendix VI, "Corrective Action Strategy," of the FFACO describes the process that will be used to complete corrective actions specifically for the UGTA Project. The objective of the UGTA corrective action strategy is to define contaminant boundaries for each UGTA corrective action unit (CAU) where groundwater may have become contaminated by the underground nuclear weapons tests. The contaminant boundaries are determined based on modeling of groundwater flow and contaminant transport. A summary of the FFACO corrective action process and the UGTA corrective action strategy is provided in Section 1.5. The FFACO (1996) corrective action process for the Yucca Flat/Climax Mine CAU 97 was initiated with the Corrective Action Investigation Plan (CAIP) (DOE/NV, 2000a). The CAIP included a review of existing data on the CAU and proposed a set of data collection activities to collect additional characterization data. These recommendations were based on a value of information analysis (VOIA) (IT, 1999), which evaluated the value of different possible data collection activities, with respect to reduction in uncertainty of the contaminant boundary, through simplified transport modeling. The Yucca Flat/Climax Mine CAIP identifies a three-step model development process to evaluate the impact of underground nuclear testing on groundwater and to determine a contaminant boundary (DOE/NV, 2000a). The three steps are as follows: (1) Data compilation and analysis that provides the necessary modeling data, completed in two parts: the first addressing the groundwater flow model, and the second the transport model. (2) Development of a groundwater flow model. (3) Development of a groundwater transport model. This report presents the results of the first part of the first step, documenting the data compilation, evaluation, and analysis for the groundwater flow model. The second part, documentation of the transport model data, will be the subject of a separate report. The purpose of this document is to present the compilation and evaluation of the available hydrologic data and information relevant to the development of the Yucca Flat/Climax Mine CAU groundwater flow model, which is a fundamental tool in the prediction of the extent of contaminant migration. Where appropriate, data and information documented elsewhere are summarized with reference to the complete documentation. The specific task objectives for hydrologic data documentation are as follows: (1) Identify and compile available hydrologic data and supporting information required to develop and validate the groundwater flow model for the Yucca Flat/Climax Mine CAU. (2) Assess the quality of the data and associated documentation, and assign qualifiers to denote levels of quality. (3) Analyze the data to derive expected values or spatial distributions and estimates of the associated uncertainty and variability.
Volk, Robert J; Shokar, Navkiran K; Leal, Viola B; Bulik, Robert J; Linder, Suzanne K; Mullen, Patricia Dolan; Wexler, Richard M; Shokar, Gurjeet S
2014-11-01
Although research suggests that patients prefer a shared decision making (SDM) experience when making healthcare decisions, clinicians do not routinely implement SDM in their practice, and training programs are needed. Using a novel case-based strategy, we developed and pilot-tested an online educational program to promote SDM by primary care clinicians. A three-phased approach was used: 1) development of a conceptual model of the SDM process; 2) development of an online teaching case utilizing the Design A Case (DAC) authoring template, a well-tested process used to create peer-reviewed web-based clinical cases across all levels of healthcare training; and 3) pilot testing of the case. Participants were clinician members affiliated with several primary care research networks across the United States who answered an invitation email. The case used prostate cancer screening as the clinical context and was delivered online. Post-intervention ratings of clinicians' general knowledge of SDM, knowledge of specific SDM steps, and confidence in and intention to perform SDM steps were also collected online. Seventy-nine clinicians initially volunteered to participate in the study, of whom 49 completed the case and provided evaluations. Forty-three clinicians (87.8%) reported that the case met all the learning objectives, and 47 (95.9%) indicated the case was relevant for other equipoise decisions. Thirty-one clinicians (63.3%) accessed supplementary information via links provided in the case. After viewing the case, knowledge of SDM was high (over 90% correctly identified the steps in an SDM process). Determining a patient's preferred role in making the decision (62.5% very confident) and exploring a patient's values about the decision (65.3% very confident) were the areas where clinician confidence was lowest. More than 70% of the clinicians intended to perform SDM in the future. A comprehensive model of the SDM process was used to design a case-based approach to teaching SDM skills to primary care clinicians, and the case was favorably rated in this pilot study. Training clinicians to help patients clarify their values and to assess patients' desire for involvement in decision making remains a significant challenge and should be a focus of future comparative studies.
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
Implementing the Indiana Model. Indiana Leadership Consortium: Equity through Change.
ERIC Educational Resources Information Center
Indiana Leadership Consortium.
This guide, which was developed as a part of a multi-year, statewide effort to institutionalize gender equity in various educational settings throughout Indiana, presents a step-by-step process model for achieving gender equity in the state's secondary- and postsecondary-level vocational programs through coalition building and implementation of a…
The Costs and Potential Benefits of Alternative Scholarly Publishing Models
ERIC Educational Resources Information Center
Houghton, John W.
2011-01-01
Introduction: This paper reports on a study undertaken for the UK Joint Information Systems Committee (JISC), which explored the economic implications of alternative scholarly publishing models. Rather than simply summarising the study's findings, this paper focuses on the approach and presents a step-by-step account of the research process,…
Accelerating Drug Development: Antiviral Therapies for Emerging Viruses as a Model.
Everts, Maaike; Cihlar, Tomas; Bostwick, J Robert; Whitley, Richard J
2017-01-06
Drug discovery and development is a lengthy and expensive process. Although no single, simple solution can significantly accelerate it, steps can be taken to avoid unnecessary delays. Using the development of antiviral therapies as a model, we describe options for acceleration that cover target selection, assay development and high-throughput screening, hit confirmation, lead identification and development, animal model evaluations, toxicity studies, regulatory issues, and the general drug discovery and development infrastructure. Together, these steps could result in accelerated timelines for bringing antiviral therapies to market so they can treat emerging infections and reduce human suffering.
NASA Astrophysics Data System (ADS)
Friedel, M. J.; Asch, T. H.; Oden, C.
2012-08-01
The remediation of land containing munitions and explosives of concern, otherwise known as unexploded ordnance, is an ongoing problem facing the U.S. Department of Defense and similar agencies worldwide that have used or are transferring training ranges or munitions disposal areas to civilian control. The expense associated with cleanup of land previously used for military training and war provides impetus for research towards enhanced discrimination of buried unexploded ordnance. Towards reducing that expense, a multiaxis electromagnetic induction data collection and software system, called ALLTEM, was designed and tested with support from the U.S. Department of Defense Environmental Security Technology Certification Program. ALLTEM is an on-time time-domain system that uses a continuous triangle-wave excitation to measure the target-step response rather than traditional impulse response. The system cycles through three orthogonal transmitting loops and records a total of 19 different transmitting and receiving loop combinations with a nominal spatial data sampling interval of 20 cm. Recorded data are pre-processed and then used in a hybrid discrimination scheme involving both data-driven and numerical classification techniques. The data-driven classification scheme is accomplished in three steps. First, field observations are used to train a type of unsupervised artificial neural network, a self-organizing map (SOM). Second, the SOM is used to simultaneously estimate target parameters (depth, azimuth, inclination, item type and weight) by iterative minimization of the topographic error vectors. Third, the target classification is accomplished by evaluating histograms of the estimated parameters. The numerical classification scheme is also accomplished in three steps. First, the Biot-Savart law is used to model the primary magnetic fields from the transmitter coils and the secondary magnetic fields generated by currents induced in the target materials in the ground. Second, the target response is modelled by three orthogonal dipoles from prolate, oblate and triaxial ellipsoids with one long axis and two shorter axes. Each target consists of all three dipoles. Third, unknown target parameters are determined by comparing modelled to measured target responses. By comparing the rms error among the self-organizing map and numerical classification results, we achieved greater than 95 per cent detection and correct classification of the munitions and explosives of concern at the direct fire and indirect fire test areas at the UXO Standardized Test Site at the Aberdeen Proving Ground, Maryland in 2010.
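As a rough illustration of the data-driven step described above, the following Python sketch trains a small self-organizing map from scratch on synthetic 19-channel anomaly signatures and then uses the best-matching unit as a lookup table for target parameters. The feature vectors, parameter names, and map size are invented stand-ins, not ALLTEM data or the authors' SOM configuration, and the simple BMU lookup replaces their iterative topographic-error minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each row stands in for a feature vector derived
# from EMI decay curves (19 loop combinations, as in ALLTEM); each sample
# carries illustrative target parameters (depth, azimuth, inclination).
n_train, n_feat = 500, 19
X = rng.normal(size=(n_train, n_feat))
params = rng.uniform(0, 1, size=(n_train, 3))

# --- Train a small self-organizing map (SOM) ---
rows, cols = 10, 10
W = rng.normal(size=(rows * cols, n_feat))      # codebook vectors
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)

def train_som(W, X, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        for x in X:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # map-grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighborhood kernel
            W += lr * h[:, None] * (x - W)
    return W

W = train_som(W, X)

# Attach mean target parameters to each node so the trained SOM acts as a
# lookup table for unseen anomalies.
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
node_params = np.full((rows * cols, 3), np.nan)
for node in np.unique(bmus):
    node_params[node] = params[bmus == node].mean(axis=0)

x_new = rng.normal(size=n_feat)                 # an unseen anomaly signature
bmu = np.argmin(((W - x_new) ** 2).sum(axis=1))
print("estimated (depth, azimuth, inclination):", node_params[bmu])
```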
NASA Astrophysics Data System (ADS)
Trottier, Olivier; Ganguly, Sujoy; Bowne-Anderson, Hugo; Liang, Xin; Howard, Jonathon
For the last 120 years, the development of neuronal shapes has been of great interest to the scientific community. Over the last 30 years, significant work has been done on the molecular processes responsible for dendritic development. In our ongoing research, we use the class IV sensory neurons of the Drosophila melanogaster larva as a model system to understand the growth of dendritic arbors. Our main goal is to elucidate the mechanisms that the neuron uses to determine the shape of its dendritic tree. We have observed the development of the class IV neuron's dendritic tree in the larval stage and have concluded that morphogenesis is defined by three distinct processes: (1) branch growth, (2) branching, and (3) branch retraction. As a first step towards understanding dendritic growth, we have implemented these three processes in a computational model. Our simulations are able to reproduce the branch length distribution, number of branches, and fractal dimension of the class IV neurons for a small range of parameters.
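A minimal stochastic simulation of the three processes named above might look like the following sketch. All probabilities, the growth speed, and the branching threshold are invented for illustration; they are not the class IV neuron parameters fitted in this work, and retraction is crudely modeled as a fixed shrink step.

```python
import random

random.seed(1)

# Illustrative per-branch event probabilities per time step -- not fitted
# class IV neuron rates.
P_BRANCH, P_RETRACT = 0.02, 0.05
N_STEPS, V_GROW, V_SHRINK = 500, 0.05, 0.5   # steps; um per step

branches = [0.5]                              # live branch (tip) lengths, um

for _ in range(N_STEPS):
    updated = []
    for length in branches:
        r = random.random()
        if r < P_BRANCH and length > 1.0:     # branching: spawn a new tip
            updated.extend([length, 0.1])
        elif r < P_BRANCH + P_RETRACT:        # retraction; drop if it vanishes
            if length - V_SHRINK > 0:
                updated.append(length - V_SHRINK)
        else:                                 # steady elongation
            updated.append(length + V_GROW)
    branches = updated
    if not branches or len(branches) > 5000:  # extinction or runaway guard
        break

mean_len = sum(branches) / len(branches) if branches else 0.0
print(f"tips: {len(branches)}, mean branch length: {mean_len:.2f} um")
```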
Effective Swimmer’s Action during the Grab Start Technique
Mourão, Luis; de Jesus, Karla; Roesler, Hélio; Machado, Leandro J.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo; Vaz, Mário A. P.
2015-01-01
The external forces applied in swimming starts have often been studied, but mostly with direct analyses and simple data-interpretation processes. This study aimed to develop a tool for vertical and horizontal force assessment based on the swimmers' propulsive and structural forces (passive forces due to dead weight) applied during the block phase. Four methodological pathways were followed: an experimental fall of a rigid body, the swimmers' inertia effect, the development of a mathematical model to describe the outcome of the rigid-body fall and its generalization to include the effects of inertia, and an experimental swimmers' starting protocol analysed with the developed mathematical tool. The first three methodological steps resulted in the description and computation of the passive force components. In the fourth step, six well-trained swimmers performed three 15 m maximal grab start trials, and three-dimensional (3D) kinetic data were obtained using a six-degrees-of-freedom force plate. The passive force contribution to the start performance obtained from the model was subtracted from the experimental force due to the swimmers, yielding the swimmers' active forces. As expected, the swimmers' vertical and horizontal active forces accounted for the maximum variability contribution of the experimental forces. It was found that the active force profiles for the vertical and horizontal components resembled one another. These findings should be considered in clarifying the variability of the swimmers' active forces and the respective geometrical profile as indicators to redefine steering strategies. PMID:25978370
Fabrication of lightweight ceramic mirrors by means of a chemical vapor deposition process
NASA Technical Reports Server (NTRS)
Goela, Jitendra S. (Inventor); Taylor, Raymond L. (Inventor)
1991-01-01
A process to fabricate lightweight ceramic mirrors, and in particular silicon/silicon carbide mirrors, involves three chemical vapor deposition steps: one to produce the mirror faceplate, the second to form the lightweight backstructure, which is deposited integral to the faceplate, and the third and final step, which results in the deposition of a layer of optical-grade material, for example silicon, onto the front surface of the faceplate. The mirror figure and finish are fabricated into this latter material.
NASA Astrophysics Data System (ADS)
Yao, Jianzhuang; Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo
2016-02-01
Extensive computational modeling and simulations have been carried out, in the present study, to uncover the fundamental reaction pathway for butyrylcholinesterase (BChE)-catalyzed hydrolysis of ghrelin, demonstrating that the acylation process of BChE-catalyzed hydrolysis of ghrelin follows an unprecedented single-step reaction pathway and that this single-step acylation process is rate-determining. The free energy barrier (18.8 kcal/mol) calculated for the rate-determining step is reasonably close to the experimentally derived free energy barrier (~19.4 kcal/mol), suggesting that the obtained mechanistic insights are reasonable. The single-step reaction pathway for the acylation is remarkably different from the well-known two-step acylation reaction pathway for numerous ester hydrolysis reactions catalyzed by a serine esterase. This is the first demonstration that a single-step reaction pathway is possible for an ester hydrolysis reaction catalyzed by a serine esterase; therefore, one can no longer simply assume that the acylation process must follow the well-known two-step reaction pathway.
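The comparison between the calculated and experimentally derived barriers can be made concrete with conventional transition-state theory, where the rate constant follows from the Eyring equation k = (kB·T/h)·exp(−ΔG‡/RT). The short sketch below (standard constants, T = 298.15 K) shows how sensitive the rate is to the 0.6 kcal/mol gap between the two barriers; it is a generic TST estimate, not the authors' simulation protocol.

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 1.987204e-3     # gas constant, kcal/(mol*K)
T  = 298.15          # K

def eyring_rate(dG_kcal, T=T):
    """TST rate constant: k = (kB*T/h) * exp(-dG/(R*T))."""
    return (kB * T / h) * math.exp(-dG_kcal / (R * T))

for dG in (18.8, 19.4):   # computed vs. experimentally derived barriers
    print(f"dG = {dG} kcal/mol  ->  k = {eyring_rate(dG):.3g} s^-1")
```

A 0.6 kcal/mol difference changes the predicted rate by roughly a factor of three, which is why barriers agreeing within ~1 kcal/mol are considered good support for a proposed mechanism.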
Verification of kinetic schemes of hydrogen ignition and combustion in air
NASA Astrophysics Data System (ADS)
Fedorov, A. V.; Fedorova, N. N.; Vankova, O. S.; Tropin, D. A.
2018-03-01
Three chemical kinetic models for hydrogen combustion in oxygen and three gas-dynamic models for reactive mixture flow behind the front of the initiating shock wave (SW) were analyzed. The calculated results were compared with experimental data on the dependence of the ignition delay on the temperature and on the dilution of the mixture with argon or nitrogen. Based on detailed kinetic mechanisms of nonequilibrium chemical transformations, a mathematical technique for describing the ignition and combustion of hydrogen in air was developed using the ANSYS Fluent code. The problem of ignition of a hydrogen jet fed coaxially into a supersonic flow was solved numerically. The calculations were carried out using the Favre-averaged Navier-Stokes equations for a multi-species gas, taking into account chemical reactions, combined with the k-ω SST turbulence model. The problem was solved in several steps. In the first step, the calculated results were verified against experimental data for the three kinetic schemes without considering the conicity of the flow. In the second step, parametric calculations were performed to determine the influence of the conicity of the flow on the mixing and ignition of hydrogen in air, using a kinetic scheme consisting of 38 reactions. Three conical supersonic nozzles for a Mach number M = 2 with different expansion angles β = 4°, 4.5°, and 5° were considered.
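To make the notion of an ignition delay concrete, the following toy model integrates a one-step global reaction with invented Arrhenius parameters in a constant-volume reactor and reads off the induction time as the moment the temperature rises by a set increment. It is not one of the three detailed kinetic schemes verified in the paper, only a sketch of how a delay is extracted from a temperature trace.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-step global reaction H2 + 0.5 O2 -> H2O with invented Arrhenius
# parameters -- only to illustrate extracting an ignition delay.
A, Ea, R = 1.0e9, 1.6e5, 8.314     # pre-exponential (1/s), J/mol, J/(mol*K)
q, cv = 1.2e8, 1.0e3               # heat release (J/kg fuel), cv (J/(kg*K))

def rhs(t, y):
    Y, T = y                       # fuel mass fraction, temperature
    w = A * Y * np.exp(-Ea / (R * T))   # global reaction rate
    return [-w, q * w / cv]

def ignition_delay(T0, dT=400.0):
    sol = solve_ivp(rhs, [0, 1.0], [0.028, T0], method="LSODA",
                    dense_output=True, rtol=1e-8, atol=1e-10)
    Ts = sol.sol(sol.t)[1]
    idx = np.argmax(Ts > T0 + dT)  # first time T rises by dT
    return sol.t[idx] if Ts[idx] > T0 + dT else np.inf

for T0 in (1000.0, 1100.0, 1200.0):
    print(f"T0 = {T0:.0f} K  ->  tau_ign ~ {ignition_delay(T0):.3e} s")
```

The strong shortening of the delay with initial temperature is the qualitative behavior against which detailed schemes such as the 38-reaction mechanism are verified.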
Creation of system of computer-aided design for technological objects
NASA Astrophysics Data System (ADS)
Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.
2018-05-01
Due to competition in the market for process equipment, its production should be flexible, adapting to various product configurations, raw materials, and productivity levels depending on current market needs. This is not possible without CAD (computer-aided design). The formation of a CAD system begins with planning. Synthesizing, analyzing, evaluating, and converting operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of an oriented graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of the CAD components for the task at hand. The object-oriented approach allows us to consider the CAD system as an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation; the second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows creating a single model, which is stored in the database of the subject area. Each of the integration stages is implemented as a separate functional block. The transformation of the CAD model into the internal-representation model is realized by the block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained on the basis of the feature method from the theory of image recognition. The configuration of integrated components is divided into three consecutive steps: configuring tasks, components, and interfaces. The configuration of the components is realized using the theory of "soft computing" with the Mamdani fuzzy inference algorithm.
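The Mamdani fuzzy inference used in the configuration step can be illustrated with a self-contained sketch. The rule base, membership functions, and the "suitability" output below are hypothetical; the sketch only demonstrates the min/max inference and centroid defuzzification that a Mamdani system performs.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical component-suitability rules (names invented):
#   R1: IF load is high AND speed is high THEN suitability is low
#   R2: IF load is low  OR  speed is low  THEN suitability is high
load, speed = 0.7, 0.4            # normalized crisp inputs
z = np.linspace(0, 1, 501)        # output universe of discourse

mu_load_hi  = tri(load,  0.5, 1.0, 1.5)
mu_load_lo  = tri(load, -0.5, 0.0, 0.5)
mu_speed_hi = tri(speed, 0.5, 1.0, 1.5)
mu_speed_lo = tri(speed, -0.5, 0.0, 0.5)

suit_lo = tri(z, -0.5, 0.0, 0.5)
suit_hi = tri(z,  0.5, 1.0, 1.5)

# Mamdani: AND -> min, OR -> max, implication -> min (clipping),
# aggregation -> max, defuzzification -> centroid.
w1 = min(mu_load_hi, mu_speed_hi)
w2 = max(mu_load_lo, mu_speed_lo)
agg = np.maximum(np.minimum(w1, suit_lo), np.minimum(w2, suit_hi))

centroid = np.trapz(agg * z, z) / np.trapz(agg, z)
print(f"component suitability: {centroid:.3f}")
```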
Zhao, Wenle; Pauls, Keith
2016-04-01
Centralized outcome adjudication has been used widely in multicenter clinical trials in order to prevent potential biases and to reduce variations in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies. In practice, the coordination of outcome adjudication procedures in many multicenter clinical trials remains a manual process with low efficiency and high risk of delay. Motivated by the demands of two large clinical trial networks, a generic outcome adjudication module has been developed by the networks' data management center within a homegrown clinical trial management system. In this article, the system design strategy and database structure are presented. A generic database model was created to transfer different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free-text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within the homegrown clinical trial management system to provide automated coordination of outcome adjudication. By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer one to two days. A total of 7336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In one multicenter trial, 1144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure, taking a median of 23.84 days from outcome event case report form submission to adjudication procedure completion. A generic outcome adjudication module integrated in the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality. © The Author(s) 2015.
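A minimal re-creation of the generic step definition described above might look like the following sketch: each step carries one activate condition, one lock condition, up to five categorical result items, and a free-text comment. The class and field names are invented, not the authors' database schema.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AdjudicationStep:
    """One step in a sequential adjudication procedure (hypothetical)."""
    name: str
    activate_when: Callable[[dict], bool]   # e.g. prior step complete
    lock_when: Callable[[dict], bool]       # freezes results once true
    categorical_items: list = field(default_factory=list)  # at most five
    results: dict = field(default_factory=dict)
    comment: str = ""

    def record(self, event: dict, **answers: str) -> None:
        if self.lock_when(event):
            raise RuntimeError(f"step '{self.name}' is locked")
        if not self.activate_when(event):
            raise RuntimeError(f"step '{self.name}' is not active")
        unknown = set(answers) - set(self.categorical_items)
        if unknown:
            raise ValueError(f"unexpected items: {unknown}")
        self.results.update(answers)

# A first step in a procedure like the three-step safety example:
event = {"crf_submitted": True, "final_sign_off": False}
step1 = AdjudicationStep(
    "central review",
    activate_when=lambda e: e["crf_submitted"],
    lock_when=lambda e: e["final_sign_off"],
    categorical_items=["event_confirmed", "severity"],
)
step1.record(event, event_confirmed="yes", severity="moderate")
print(step1.results)
```

Encoding conditions as data rather than code is what lets one module host procedures ranging from one to seven steps without reprogramming.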
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The next two steps are the most distinctive. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step that uses this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information for the responses, these models combine statistical information about the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter, and even subsets of parameters representing entire contributing analyses, can now be rank-ordered with respect to their contribution to not just one response but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element-based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach, as well as to a validation approach conducted through large-sample Monte Carlo simulations. The framework resulted in a considerable improvement in the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
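Of the two multivariate probability models proposed, the first-order second-moment idea is the easiest to sketch. The snippet below propagates input means and covariances through a vector-valued response via a finite-difference Jacobian, which is the textbook multi-response FOSM calculation; the response functions and input statistics are placeholders, not the gas turbine airfoil analyses.

```python
import numpy as np

# First-order second-moment (FOSM) propagation for a vector response g(x):
# mean and covariance of the responses follow from the mean and covariance
# of the inputs via the Jacobian evaluated at the mean point.

def g(x):
    """Two placeholder component responses of three input parameters."""
    return np.array([x[0] * x[1] - 0.5 * x[2],
                     np.sin(x[0]) + x[2] ** 2])

def jacobian(f, x0, eps=1e-6):
    """Forward finite-difference Jacobian of f at x0."""
    f0 = f(x0)
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):
        xp = x0.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - f0) / eps
    return J

mu_x = np.array([1.0, 2.0, 0.5])        # input parameter means
cov_x = np.diag([0.04, 0.09, 0.01])     # input parameter covariance

J = jacobian(g, mu_x)
mu_g = g(mu_x)                          # first-order response means
cov_g = J @ cov_x @ J.T                 # first-order response covariance

print("response means:", mu_g)
print("response covariance:\n", cov_g)
```

The off-diagonal terms of the response covariance are exactly the joint, multi-response information that the isolated single-response approach discards.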
An Emotional ANN (EANN) approach to modeling rainfall-runoff process
NASA Astrophysics Data System (ADS)
Nourani, Vahid
2017-01-01
This paper presents the first hydrological implementation of the Emotional Artificial Neural Network (EANN), a new generation of artificial-intelligence-based models, for daily rainfall-runoff (r-r) modeling of watersheds. Inspired by the neurophysiological structure of the brain, an EANN includes, in addition to conventional weights and biases, simulated emotional parameters aimed at improving the network learning process. An EANN trained by a modified version of the back-propagation (BP) algorithm was applied to single- and multi-step-ahead runoff forecasting for two watersheds with distinct climatic conditions. Also, to evaluate the ability of an EANN trained with a smaller training data set, three data-division strategies with different numbers of training samples were considered. The overall comparison of the obtained r-r modeling results indicates that the EANN could outperform the conventional feed-forward neural network (FFNN) model by up to 13% and 34% in terms of training and verification efficiency criteria, respectively. The superiority of the EANN over the classic ANN is due to its ability to recognize and distinguish dry (rainless) and wet (rainy) situations using the hormonal parameters of the artificial emotional system.
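For context, the conventional FFNN baseline that the EANN is compared against can be sketched in a few dozen lines. The snippet below trains a one-hidden-layer network by plain batch backpropagation to forecast one-step-ahead runoff on a synthetic rainfall-runoff series; the data generator and network sizes are invented, and the emotional (hormonal) parameters that distinguish the EANN are deliberately not included.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily series (mm): runoff responds to current and previous
# day's rain plus slow recession -- a stand-in for real watershed data.
n = 800
rain = rng.gamma(0.3, 8.0, size=n)
runoff = np.zeros(n)
for t in range(1, n):
    runoff[t] = 0.7 * runoff[t - 1] + 0.2 * rain[t] + 0.1 * rain[t - 1]

# Inputs: rain(t), rain(t-1), runoff(t-1); target: runoff(t)
X = np.column_stack([rain[1:-1], rain[:-2], runoff[1:-1]])
y = runoff[2:]
X = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# One hidden layer, tanh activation, batch gradient descent
W1 = rng.normal(0, 0.3, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.3, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - yn          # prediction error
    gW2 = H.T @ err[:, None] / len(yn)
    gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(yn)
    gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

nse = 1 - np.sum(err ** 2) / np.sum((yn - yn.mean()) ** 2)
print(f"training Nash-Sutcliffe efficiency: {nse:.3f}")
```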
Counter-current acid leaching process for copper azole treated wood waste.
Janin, Amélie; Riche, Pauline; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Morris, Paul
2012-09-01
This study explores the performance of a counter-current leaching process (CCLP) for copper extraction from copper azole treated wood waste for recycling of wood and copper. The leaching process uses three acid leaching steps with 0.1 M H2SO4 at 75 °C and 15% slurry density, followed by three rinses with water. Copper is recovered from the leachate using electrodeposition at 5 amperes (A) for 75 min. Ten counter-current remediation cycles were completed, achieving ≥94% copper extraction from the wood during the 10 cycles; 80-90% of the copper was recovered from the extract solution by electrodeposition. The counter-current leaching process reduced acid consumption by 86%, and the effluent discharge volume was 12 times lower compared with the same process without counter-current leaching. However, the reuse of leachates from one leaching step to another released dissolved organic carbon and caused its build-up in the early cycles.
Role of excited N2 in the production of nitric oxide
NASA Astrophysics Data System (ADS)
Campbell, L.; Cartwright, D. C.; Brunger, M. J.
2007-08-01
Excited N2 plays a role in a number of atmospheric processes, including auroral and dayglow emissions, chemical reactions, recombination of free electrons, and the production of nitric oxide. Electron impact excitation of N2 is followed by radiative decay through a series of excited states, contributing to auroral and dayglow emissions. These processes are intertwined with various chemical reactions and collisional quenching involving the excited and ground-state vibrational levels. Statistical equilibrium and time-step atmospheric models are used to predict N2 excited state densities and emissions (as a test against previous models and measurements) and to investigate the role of excited nitrogen in the production of nitric oxide. These calculations predict that inclusion of the reaction N2(A³Σu⁺) + O, to generate NO, produces an increase by a factor of up to three in the calculated NO density at some altitudes.
Air emissions of ammonia and methane from livestock operations: valuation and policy options.
Shih, Jhih-Shyang; Burtraw, Dallas; Palmer, Karen; Siikamäki, Juha
2008-09-01
The animal husbandry industry is a major emitter of ammonia (NH3), a precursor of fine particulate matter (PM2.5)--arguably the number-one environment-related public health threat facing the nation. The industry is also a major emitter of methane (CH4), an important greenhouse gas (GHG). We present an integrated process model of the engineering economics of technologies to reduce NH3 and CH4 emissions at dairy operations in California. Three policy options are explored: PM offset credits for NH3 control, GHG offset credits for CH4 control, and expanded net metering policies to provide revenue from the sale of electricity generated from captured CH4. Individually, these policies vary substantially in the economic incentives they provide for farm operators to reduce emissions. We report on initial steps to fully develop the integrated process model, which will provide guidance for policy-makers.
Control of DNA strand displacement kinetics using toehold exchange.
Zhang, David Yu; Winfree, Erik
2009-12-02
DNA is increasingly being used as the engineering material of choice for the construction of nanoscale circuits, structures, and motors. Many of these enzyme-free constructions function by DNA strand displacement reactions. The kinetics of strand displacement can be modulated by toeholds, short single-stranded segments of DNA that colocalize reactant DNA molecules. Recently, the toehold exchange process was introduced as a method for designing fast and reversible strand displacement reactions. Here, we characterize the kinetics of DNA toehold exchange and model it as a three-step process. This model is simple and quantitatively predicts the kinetics of 85 different strand displacement reactions from the DNA sequences. Furthermore, we use toehold exchange to construct a simple catalytic reaction. This work improves the understanding of the kinetics of nucleic acid reactions and will be useful in the rational design of dynamic DNA and RNA circuits and nanodevices.
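The three-step model (toehold binding, branch migration, incumbent release) maps directly onto a small ODE system. The sketch below integrates such a system with SciPy using illustrative rate constants, not the values fitted to the 85 reactions characterized in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-step toehold-exchange model (rate constants are illustrative):
#   X + S  <-> I1      toehold binding / fraying
#   I1     <-> I2      branch migration
#   I2     <-> Y + L   incumbent release via the exit toehold
kf, kr = 3.0e6, 10.0   # /M/s, /s : invader toehold hybridization
kb = 1.0e3             # /s       : branch migration (both directions)
kd, ka = 10.0, 3.0e6   # /s, /M/s : release and incumbent rebinding

def rhs(t, c):
    X, S, I1, I2, Y, L = c
    v1 = kf * X * S - kr * I1
    v2 = kb * I1 - kb * I2
    v3 = kd * I2 - ka * Y * L
    return [-v1, -v1, v1 - v2, v2 - v3, v3, v3]

c0 = [10e-9, 10e-9, 0, 0, 0, 0]    # 10 nM invader and substrate
sol = solve_ivp(rhs, [0, 2000], c0, method="LSODA",
                rtol=1e-8, atol=1e-15)
print(f"displaced output after {sol.t[-1]:.0f} s: "
      f"{sol.y[4, -1] * 1e9:.2f} nM")
```

Lengthening the invading toehold (raising kf relative to kr) or shortening the exit toehold (raising kd relative to ka) shifts both the speed and the equilibrium of the exchange, which is the design dial the paper characterizes.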
Optical observations of electrical activity in cloud discharges
NASA Astrophysics Data System (ADS)
Vayanganie, S. P. A.; Fernando, M.; Sonnadara, U.; Cooray, V.; Perera, C.
2018-07-01
The temporal variation of the luminosity of seven natural cloud-to-cloud lightning channels was studied, and the results are presented. The channels were recorded using a high-speed video camera at 5000 fps (frames per second) with a pixel resolution of 512 × 512 at three locations in Sri Lanka, in the tropics. The luminosity variation of each channel with time was obtained by analyzing the image sequences. The recorded video frames, together with the luminosity variation, were studied to understand the cloud discharge process. Image analysis techniques were also used to characterize the channels. Cloud flashes show more luminosity variability than ground flashes. Most of the time, a cloud flash starts with a leader that does not show a stepping process. The channel width and the standard deviation of the intensity variation across the channel were obtained for each cloud flash. The brightness variation across the channel follows a Gaussian distribution. The average duration of the cloud flashes that start with a non-stepped leader was 180.83 ms. The identified characteristics were matched against existing models to understand the process of cloud flashes. The observation that cloud discharges are not confined to a single process has been further confirmed by this study. The observations show that a cloud flash is a basic lightning discharge that transfers charge between two charge centers without using one specific mechanism.
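The Gaussian brightness profile reported across the channel suggests a simple worked example: fit a Gaussian to a cross-channel intensity slice and report the centre and width. The profile below is synthetic; real work would extract slices perpendicular to the channel axis from the 5000 fps frames.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    """Gaussian bump on a constant background."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + base

rng = np.random.default_rng(7)
x = np.arange(60.0)                              # pixel index across channel
true = gaussian(x, 180.0, 28.0, 3.5, 20.0)       # synthetic "channel"
profile = true + rng.normal(0, 5.0, x.size)      # add sensor noise

popt, _ = curve_fit(gaussian, x, profile, p0=[150, 30, 5, 10])
amp, mu, sigma, base = popt
fwhm = 2.3548 * sigma                            # 2*sqrt(2*ln 2)*sigma
print(f"channel centre: {mu:.1f} px, width (FWHM): {fwhm:.1f} px")
```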
Spatial Data Integration Using Ontology-Based Approach
NASA Astrophysics Data System (ADS)
Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.
2015-12-01
In today's world, the necessity of spatial data for various organizations is becoming so crucial that many of these organizations have begun to produce spatial data themselves. In some circumstances, the need to obtain integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One key challenge in such situations is the high degree of heterogeneity between the data of different organizations. To address this issue, we introduce an ontology-based method that provides sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the relative ontology is inserted into the database, and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, and legacy applications therefore continue to work.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
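The sampling procedure at the heart of this construction is easy to simulate. The sketch below draws N i.i.d. Pareto(α) reproduction weights, normalizes them by their sum, and draws a multinomial generation, showing how small α lets single families dominate (the multiple-merger regime); the size-biasing parameter β and the selection step are omitted for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)

def family_sizes(N, alpha):
    """One generation: N Pareto(alpha) weights, normalized, then
    N offspring drawn multinomially."""
    w = rng.pareto(alpha, size=N) + 1.0   # classical Pareto with x_m = 1
    p = w / w.sum()                       # normalized by the sum
    return rng.multinomial(N, p)

N = 10_000
for alpha in (0.8, 1.5, 2.5):             # the three regimes in the paper
    sizes = family_sizes(N, alpha)
    print(f"alpha = {alpha}: largest family fraction "
          f"{sizes.max() / N:.3f}")
```

Heavy tails (α < 1) typically let one family capture a macroscopic fraction of the generation, while α ≥ 2 keeps all families small, matching the Ξ-coalescent, Λ-coalescent, and Kingman regimes respectively.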
Darzi, Andrea; Abou-Jaoude, Elias A; Agarwal, Arnav; Lakis, Chantal; Wiercioch, Wojtek; Santesso, Nancy; Brax, Hneine; El-Jardali, Fadi; Schünemann, Holger J; Akl, Elie A
2017-06-01
Our objective was to identify and describe published frameworks for the adaptation of clinical, public health, and health services guidelines. We included reports describing methods of guideline adaptation in sufficient detail to allow reproducibility. We searched the Medline and EMBASE databases. We also searched personal files, as well as manuals and handbooks of organizations and professional societies that proposed methods of adaptation and adoption of guidelines. We followed standard systematic review methodology. Our search captured 12,021 citations, from which we identified eight proposed methods of guideline adaptation: ADAPTE, Adapted ADAPTE, the Alberta Ambassador Program adaptation phase, GRADE-ADOLOPMENT, MAGIC, RAPADAPTE, Royal College of Nursing (RCN), and Systematic Guideline Review (SGR). The ADAPTE framework consists of a 24-step process to adapt guidelines to a local context, taking into consideration the needs, priorities, legislation, policies, and resources. The Alexandria Center for Evidence-Based Clinical Practice Guidelines updated one of ADAPTE's tools, modified three tools, and added three new ones; in addition, they proposed optionally using three other tools. The Alberta Ambassador Program adaptation phase consists of 11 steps and focused on adapting good-quality guidelines for nonspecific low back pain to a local context. GRADE-ADOLOPMENT is an eight-step process based on the GRADE Working Group's Evidence to Decision frameworks, applied in 22 guidelines in the context of a national guideline development program. The MAGIC research program developed a five-step adaptation process, informed by ADAPTE and the GRADE approach, in the context of adapting thrombosis guidelines. The RAPADAPTE framework consists of 12 steps based on ADAPTE and using synthesized evidence databases, derived retrospectively from the experience of producing a high-quality guideline for the treatment of breast cancer with limited resources in Costa Rica. The RCN outlines a five-step strategy for adaptation of guidelines to the local context. The SGR method consists of nine steps and takes into consideration both methodological gaps and context-specific normative issues in source guidelines. Through searching personal files, we also identified two abandoned methods. In summary, we identified and described eight proposed frameworks for the adaptation of health-related guidelines. There is a need to evaluate these frameworks to assess the rigor, efficiency, and transparency of their proposed processes. Copyright © 2017 Elsevier Inc. All rights reserved.
Multistep Model of Cervical Cancer: Participation of miRNAs and Coding Genes
López, Angelica Judith Granados; López, Jesús Adrián
2014-01-01
Aberrant miRNA expression is well recognized as an important step in the development of cancer. Close to 70 microRNAs (miRNAs) have been implicated in cervical cancer to date; nevertheless, it is unknown whether aberrant miRNA expression causes the onset of cervical cancer. One of the best ways to address this issue is through a multistep model of carcinogenesis. In the progression of cervical cancer there are three well-established steps before cancer is reached, and we use them in the model proposed here. The first step of the model comprises the gene changes that occur as normal cells are transformed into immortal cells (CIN 1), the second comprises the changes by which immortal cells become tumorigenic (CIN 2), the third step includes the cell changes that increase tumorigenic capacity (CIN 3), and the final step covers the changes by which tumorigenic cells become carcinogenic. Altered miRNAs and their target genes are located in each of the four steps of the multistep model of carcinogenesis. Reported miRNA expression has shown discrepancies across different works; therefore, in this model we include only miRNAs with similar results in at least two studies. The present model is a useful insight for studying potential prognostic, diagnostic, and therapeutic miRNAs. PMID:25192291
Simulation of dynamic processes when machining transition surfaces of stepped shafts
NASA Astrophysics Data System (ADS)
Maksarov, V. V.; Krasnyy, V. A.; Viushin, R. V.
2018-03-01
The paper addresses the characteristics of stepped surfaces of parts categorized as "solids of revolution". It is noted that under transitional conditions, during the switch to end-surface machining, cutting proceeds with varying load intensity over the section of the cut layer, which leads to changes in the cutting force, the onset of vibrations, increased surface layer roughness, decreased dimensional precision, and increased wear of the tool's cutting edge. This work proposes a method that consists in developing CNC program output code that allows complex forms of stepped shafts to be processed with only one machine setup. The authors developed and justified a mathematical model of a technological system for mechanical processing that accounts for the resolution of tool movement during transition processes, in order to assess the dynamic stability of the system while manufacturing stepped surfaces of "solid of revolution" parts.
Integrated socio-environmental modelling: A test case in coastal Bangladesh
NASA Astrophysics Data System (ADS)
Lazar, Attila
2013-04-01
Delta regions are vulnerable: their populations and ecosystems face multiple threats in the coming decades through extremes of poverty, environmental and ecological stress, and land degradation. External and internal processes initiate these threats and result in, for example, water quality and health risk issues, declining agricultural productivity, and sediment starvation, all of which directly affect the local population. The ESPA-funded "Assessing Health, Livelihoods, Ecosystem Services and Poverty Alleviation In Populous Deltas" project (2012-16) aims to provide policy makers with the knowledge and tools to enable them to evaluate the effects of policy decisions on people's livelihoods. It considers coastal Bangladesh in the Ganges-Brahmaputra-Meghna Delta, one of the world's most dynamic and significant deltas. This is being done by a multidisciplinary and multinational team of policy analysts, social and natural scientists, and engineers using a participatory, holistic approach to formally evaluate ecosystem services and poverty in the context of the wide range of changes that are occurring. An integrated model with relevant feedbacks is being developed to explore options for management strategies and policy formulation for ecosystem services, livelihoods, and health in coastal Bangladesh. This requires continuous engagement with stakeholders through the following steps: (1) system characterisation, (2) research question definition, (3) data and model identification, (4) model validation, and (5) model application. This presentation focuses on the first three steps. Field-based social science and governance-related research are under way. The bio-physical models have been selected, and some are already set up for the study area. These allow a preliminary conceptualisation of the elements and linkages of the deltaic socio-environmental system and thus a preliminary structure for the integrated model. This presentation describes these steps through the coastal Bangladesh test case.
ERIC Educational Resources Information Center
Churchman, Kris
2002-01-01
Explains how students can be guided to model the invention process using potatoes. Details the steps and the materials used in the modeling, including the phases of the invention process. Presents this activity as preparation for the Invent America program. (DDR)
A Transportation Modeling Primer
DOT National Transportation Integrated Search
2006-06-01
This primer is intended to explain how the urban transportation modeling process works, the assumptions made, and the steps used to forecast travel demand for urban transportation planning. This is done in order to help to understand the process and its i...
Cognitive mapping tools: review and risk management needs.
Wood, Matthew D; Bostrom, Ann; Bridges, Todd; Linkov, Igor
2012-08-01
Risk managers are increasingly interested in incorporating stakeholder beliefs and other human factors into the planning process. Effective risk assessment and management requires understanding perceptions and beliefs of involved stakeholders, and how these beliefs give rise to actions that influence risk management decisions. Formal analyses of risk manager and stakeholder cognitions represent an important first step. Techniques for diagramming stakeholder mental models provide one tool for risk managers to better understand stakeholder beliefs and perceptions concerning risk, and to leverage this new understanding in developing risk management strategies. This article reviews three methodologies for assessing and diagramming stakeholder mental models--decision-analysis-based mental modeling, concept mapping, and semantic web analysis--and assesses them with regard to their ability to address risk manager needs. © 2012 Society for Risk Analysis.
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion of vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function back to vector data. With mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible, and the efficiency of the algorithm is acceptable. PMID:29351344
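A minimal version of the three steps can be written with the FFT by treating boundary points x + iy as a complex signal. In the sketch below, the "large scale" and "small scale" shapes are synthetic closed curves, and the intermediate function is a plain linear blend of Fourier coefficients, which is one simple choice for the combination step rather than the paper's exact scheme.

```python
import numpy as np

def boundary(npts, radius_fn):
    """Sample a closed curve r(t) as a complex signal x + iy."""
    t = np.linspace(0, 2 * np.pi, npts, endpoint=False)
    r = radius_fn(t)
    return r * np.cos(t) + 1j * r * np.sin(t)

npts = 256
detailed = boundary(npts, lambda t: 1.0 + 0.15 * np.sin(7 * t))  # large scale
coarse   = boundary(npts, lambda t: np.ones_like(t))             # small scale

F_det = np.fft.fft(detailed)   # step 1: vector data -> Fourier coefficients
F_co  = np.fft.fft(coarse)

def morph(w):
    """Step 2 and 3: blend coefficients, then invert back to vector data.
    w = 0 gives the generalized (coarse) shape, w = 1 the detailed one."""
    z = np.fft.ifft(w * F_det + (1 - w) * F_co)
    return np.column_stack([z.real, z.imag])

for w in (0.0, 0.5, 1.0):
    pts = morph(w)
    print(f"w = {w}: first vertex = ({pts[0, 0]:.3f}, {pts[0, 1]:.3f})")
```

Because the blend acts on spectral coefficients rather than on vertices, intermediate shapes stay smooth and closed at every w, which is the property that makes the Fourier route attractive for continuous scale transformation.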
The Architecture of Chemical Alternatives Assessment.
Geiser, Kenneth; Tickner, Joel; Edwards, Sally; Rossi, Mark
2015-12-01
Chemical alternatives assessment is a rapidly developing method used by businesses, governments, and nongovernmental organizations seeking to substitute chemicals of concern in production processes and products. Chemical alternatives assessment is defined as a process for identifying, comparing, and selecting safer alternatives to chemicals of concern (including those in materials, processes, or technologies) on the basis of their hazards, performance, and economic viability. The process is intended to provide guidance for ensuring that chemicals of concern are replaced with safer alternatives that are not likely to be later regretted. Conceptually, the assessment methods are developed from a set of three foundational pillars and five common principles. Based on a number of emerging alternatives assessment initiatives, in this commentary we outline a chemical alternatives assessment blueprint structured around three broad steps: Scope, Assessment, and Selection and Implementation. Specific tasks and tools are identified for each of these three steps. While it is recognized that ongoing practice will further refine and develop the method and tools, it is important that the structure of the assessment process remain flexible, adaptive, and focused on the substitution of chemicals of concern with safer alternatives. © 2015 Society for Risk Analysis.
[Purification of arsenic-binding proteins in hamster plasma after oral administration of arsenite].
Wang, Wenwen; Zhang, Min; Li, Chunhui; Qin, Yingjie; Hua, Naranmandura
2013-01-01
To purify the arsenic-binding proteins (As-BP) in hamster plasma after a single oral administration of arsenite (iAs(III)). Arsenite was given to hamsters in a single oral dose. Three types of HPLC columns (size exclusion, gel filtration, and anion exchange), combined with an inductively coupled argon plasma mass spectrometer (ICP-MS), were used to purify the As-BP in hamster plasma. SDS-PAGE was used to confirm the arsenic-binding proteins at each purification step. The three-step purification process successfully separated the As-BP from other (i.e., arsenic-unbound) proteins in hamster plasma. The molecular mass of the purified As-BP in plasma was approximately 40-50 kDa on SDS-PAGE. The three-step purification method is a simple and fast approach to purifying As-BP in plasma samples.
Illustrating Story Plans: Does a Mnemonic Strategy Including Art Media Render More Elaborate Text?
ERIC Educational Resources Information Center
Dunn, Michael W.
2012-01-01
Students who have difficulty with academics often benefit from learning mnemonic strategies which provide a step-by-step process to accomplish a task. Three fourth-grade students who struggled with writing learned the Ask, Reflect, Text (ART) strategy to help them produce more elaborate narrative story text. After initially asking the questions…
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. The results show that high values of the efficiency criteria did not necessarily indicate excellent performance on the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists for a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
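For readers wanting to reproduce the second step, the SALib package is one common implementation of Saltelli sampling and Sobol' analysis. The sketch below runs it on a stand-in function with three hypothetical DHSVM-like parameters; in the study, the model evaluated over the sample would be DHSVM itself, run in parallel over the sixteen retained parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Problem definition: names and bounds are illustrative stand-ins for
# DHSVM soil/vegetation parameters, not the study's calibrated ranges.
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-2], [0.3, 0.6], [0.1, 0.4]],
}

X = saltelli.sample(problem, 1024)      # Saltelli sampling design

def toy_model(x):
    """Placeholder for a DHSVM run returning one signature per sample."""
    return np.log10(x[:, 0]) + 5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

Y = toy_model(X)
Si = sobol.analyze(problem, Y)          # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:22s} S1 = {s1:6.3f}  ST = {st:6.3f}")
```

A large gap between the total index ST and the first-order index S1 for a parameter flags interaction effects, which is precisely the information a rough one-at-a-time screening cannot provide.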