Sample records for two-step procedure based

  1. Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.

    2015-01-01

    Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…

  2. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Many studies have been conducted to find a step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedures is discussed. To test the performance of each step size, we implement the steepest descent procedure in a C++ program, apply it to an unconstrained optimization test problem in two variables, and compare the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure for each problem case.
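
    The record above compares step-size rules for steepest descent. As a minimal illustration (in Python rather than the paper's C++, with a hypothetical two-variable test function and constants of our own choosing), the sketch below contrasts a fixed step with a backtracking (Armijo) line search:

    ```python
    import numpy as np

    def steepest_descent(f, grad, x0, step_rule="backtracking", alpha=0.1,
                         tol=1e-8, max_iter=10000):
        """Steepest descent with a fixed step or a backtracking (Armijo) search."""
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            t = alpha                      # fixed step size
            if step_rule == "backtracking":
                t = 1.0                    # halve until sufficient decrease holds
                while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
                    t *= 0.5
            x = x - t * g
        return x, k

    # Hypothetical two-variable test problem: f(x, y) = (x - 3)^2 + 10(y + 1)^2.
    f = lambda v: (v[0] - 3) ** 2 + 10 * (v[1] + 1) ** 2
    grad = lambda v: np.array([2 * (v[0] - 3), 20 * (v[1] + 1)])
    print(steepest_descent(f, grad, [0.0, 0.0]))
    ```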

  3. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices may depend on unmeasurable system states that must be estimated. For the latter case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) constraints, which can be solved via SOSTOOLS and a semidefinite programming solver. Illustrative examples show the validity and applicability of the proposed results.

  4. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 2: Two-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant-coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure's convergence rate decreases from +infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.

  5. Redo Laparoscopic Gastric Bypass: One-Step or Two-Step Procedure?

    PubMed

    Theunissen, Caroline M J; Guelinckx, Nele; Maring, John K; Langenhoff, Barbara S

    2016-11-01

    The adjustable gastric band (AGB) is a bariatric procedure that used to be widely performed. However, AGB failure (band-related complications or unsatisfactory weight loss resulting in revision surgery, i.e., redo operations) occurs frequently. Often this entails a conversion to a laparoscopic Roux-en-Y gastric bypass (LRYGB), which can be performed as a one-step or two-step (separate band removal) procedure. Data were collected from patients operated on between 2012 and 2014 in a single bariatric centre. We compared 107 redo LRYGB after AGB failure with 1020 primary LRYGB, and analyzed the one-step vs. two-step redo procedures. All redo procedures were performed by experienced bariatric surgeons. No difference in major complication rate was seen between redo and primary LRYGB (2.8 vs. 2.3 %, p = 0.73), and overall complication severity for redos was low (mainly Clavien-Dindo 1 or 2). Weight loss results were comparable for primary and redo procedures. The one-step and two-step redos were comparable regarding complication rates and readmissions. The median operating time was 136 min for the one-step redo LRYGB vs. 107.5 min for the two-step (p < 0.001), excluding the operating time of separate AGB removal (mean 61 min, range 36-110). Removal of a failed AGB and LRYGB in a one-step procedure is safe when performed by experienced bariatric surgeons. However, when erosion or perforation of the AGB occurs, we advise caution and would perform the redo LRYGB as a two-step procedure. Weight loss at 1 year after redo LRYGB equals that after primary LRYGB.

  6. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…
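
    As a rough sketch of the two-step idea described above, with ordinary PCA standing in for NLPCA and with simulated data and illustrative variable names (the truncated abstract does not specify the model), the steps might look like:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n, n_groups = 200, 20

    # Simulated survey: five ordinal items (1-5) from respondents nested in groups.
    df = pd.DataFrame(rng.integers(1, 6, size=(n, 5)),
                      columns=[f"item{i}" for i in range(1, 6)])
    df["group"] = rng.integers(0, n_groups, size=n)
    df["covariate"] = rng.normal(size=n)

    # Step 1: reduce the ordinal items to one latent satisfaction score
    # (ordinary PCA here; the paper uses nonlinear PCA with optimal scaling).
    items = df[[f"item{i}" for i in range(1, 6)]]
    df["satisfaction"] = PCA(n_components=1).fit_transform(items)[:, 0]

    # Step 2: multilevel (random-intercept) model of the score on covariates.
    mlm = smf.mixedlm("satisfaction ~ covariate", df, groups=df["group"]).fit()
    print(mlm.summary())
    ```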

  7. Read Two Impress: An Intervention for Disfluent Readers

    ERIC Educational Resources Information Center

    Young, Chase; Rasinski, Timothy; Mohr, Kathleen A. J.

    2016-01-01

    The authors describe a research-based method to increase students' reading fluency. The method is called Read Two Impress, which is derived from the Neurological Impress Method and the method of repeated readings. The authors provide step-by-step procedures to effectively implement the reading fluency intervention. Previous research indicates that…

  8. Efficiency and Safety of One-Step Procedure Combined Laparoscopic Cholecystectomy and Endoscopic Retrograde Cholangiopancreatography for Treatment of Cholecysto-Choledocholithiasis: A Randomized Controlled Trial.

    PubMed

    Liu, Zhiyi; Zhang, Luyao; Liu, Yanling; Gu, Yang; Sun, Tieliang

    2017-11-01

    We aimed to evaluate the efficiency and safety of a one-step procedure combining endoscopic retrograde cholangiopancreatography (ERCP) and laparoscopic cholecystectomy (LC) for treatment of patients with cholecysto-choledocholithiasis. A prospective randomized study was performed on 63 consecutive cholecysto-choledocholithiasis patients between 2008 and 2011. The efficiency and safety of the one-step procedure were assessed by comparison with the two-step procedure of LC followed by ERCP + endoscopic sphincterotomy (EST). Outcomes including intraoperative features and postoperative features (length of stay and postoperative complications) were evaluated. The one- or two-step procedure of LC with ERCP + EST was successfully performed in all patients, and common bile duct stones were completely removed. Statistical analyses showed that length of stay and pulmonary infection rate were significantly lower in the test group than in the control group (P < 0.05), whereas no statistical difference in other outcomes was found between the two groups (all P > 0.05). The one-step procedure of LC with ERCP + EST may therefore be a superior option for patients with cholecysto-choledocholithiasis with regard to reduced hospital stay and fewer pulmonary infections.

  9. Preparation and characterization of silica xerogels as carriers for drugs.

    PubMed

    Czarnobaj, K

    2008-11-01

    The aim of the present study was to utilize the sol-gel method to synthesize different forms of xerogel matrices for drugs and to investigate how the synthesis conditions and solubility of drugs influence the drug release profile and the structure of the matrices. Silica xerogels doped with drugs were prepared by the sol-gel method from a hydrolyzed tetraethoxysilane (TEOS) solution containing one of two model compounds: diclofenac diethylamine (DD), a water-soluble drug, or ibuprofen (IB), a water-insoluble drug. Two procedures were used for the synthesis of sol-gel derived materials: a one-step procedure (the sol-gel reaction was carried out under acidic or basic conditions) and a two-step procedure (first, hydrolysis of TEOS was carried out under acidic conditions, and then condensation of silanol groups was carried out under basic conditions) in order to obtain samples with altered microstructures. In vitro release studies revealed a similar release profile in two steps: an initial diffusion-controlled release followed by a slower release rate. In all the cases studied, the released amount of DD was higher and the release time was shorter compared with IB for the same type of matrices. The released amount of drugs from two-step prepared xerogels was always lower than that from one-step base-catalyzed xerogels. One-step acid-catalyzed xerogels proved unsuitable as carriers for the examined drugs.

  10. Performance of the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2016-01-01

    The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure will help us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend…

  11. Two-dimensional solid-phase extraction strategy for the selective enrichment of aminoglycosides in milk.

    PubMed

    Shen, Aijin; Wei, Jie; Yan, Jingyu; Jin, Gaowa; Ding, Junjie; Yang, Bingcheng; Guo, Zhimou; Zhang, Feifang; Liang, Xinmiao

    2017-03-01

    An orthogonal two-dimensional solid-phase extraction strategy was established for the selective enrichment of three aminoglycosides, spectinomycin, streptomycin, and dihydrostreptomycin, in milk. A reversed-phase liquid chromatography material (C18) and a weak cation-exchange material (TGA) were integrated in a single solid-phase extraction cartridge. The feasibility of the two-dimensional clean-up procedure, involving two-step adsorption, two-step rinsing, and two-step elution, was systematically investigated. Based on the orthogonality of the reversed-phase and weak cation-exchange procedures, the two-dimensional solid-phase extraction strategy could minimize the interference from the hydrophobic matrix encountered in traditional reversed-phase solid-phase extraction. In addition, high ionic strength in the extracts could be effectively removed before the second dimension of weak cation-exchange solid-phase extraction. Combined with liquid chromatography and tandem mass spectrometry, the optimized procedure was validated according to the European Union Commission directive 2002/657/EC. A good performance was achieved in terms of linearity, recovery, precision, decision limit, and detection capability in milk. Finally, the optimized two-dimensional clean-up procedure incorporated with liquid chromatography and tandem mass spectrometry was successfully applied to the rapid monitoring of aminoglycoside residues in milk. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Student Opinions about the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2014-01-01

    This study investigates how hospitality management students appreciate the role and application of the seven-step procedure in problem-based learning. A survey was developed containing sections about personal characteristics, recall of the seven steps, overall report marks, and 30 statements about the seven-step procedure. The survey was…

  13. Label-free offline versus online activity methods for nucleoside diphosphate kinase b using high performance liquid chromatography.

    PubMed

    Lima, Juliana Maria; Salmazo Vieira, Plínio; Cavalcante de Oliveira, Arthur Henrique; Cardoso, Carmen Lúcia

    2016-08-07

    Nucleoside diphosphate kinase from Leishmania spp. (LmNDKb) has recently been described as a potential drug target to treat leishmaniasis disease. Therefore, screening of LmNDKb ligands requires methodologies that mimic the conditions under which LmNDKb acts in biological systems. Here, we compare two label-free methodologies that could help screen LmNDKb ligands and measure NDKb activity: an offline LC-UV assay for soluble LmNDKb and an online two-dimensional LC-UV system based on LmNDKb immobilised on a silica capillary. The target enzyme was immobilised on the silica capillary via Schiff base formation (to give LmNDKb-ICER-Schiff) or affinity attachment (to give LmNDKb-ICER-His). Several aspects of the ICERs resulting from these procedures were compared, namely kinetic parameters, stability, and procedure steps. Both the LmNDKb immobilisation routes minimised the conformational changes and preserved the substrate binding sites. However, considering the number of steps involved in the immobilisation procedure, the cost of reagents, and the stability of the immobilised enzyme, immobilisation via Schiff base formation proved to be the optimal procedure.

  14. The Move to Field Based Teacher Education: A Practical Guide for Field Hands. Teacher Education Forum; Volume 4, Number 14.

    ERIC Educational Resources Information Center

    Rockwood, Stacy F.

    This paper outlines, in the form of a field guide, the procedures used at the University of Cincinnati for establishing a field-based elementary teacher education program. The first step involves a meeting with university faculty to discuss the implications of such a program. Step two involves meeting with the elementary school principal and selling…

  15. The Aristotle score: a complexity-adjusted method to evaluate surgical results.

    PubMed

    Lacour-Gayet, F; Clarke, D; Jacobs, J; Comas, J; Daebritz, S; Daenen, W; Gaynor, W; Hamilton, L; Jacobs, M; Maruszsewski, B; Pozzi, M; Spray, T; Stellin, G; Tchervenkov, C; Mavroudis, C

    2004-06-01

    Quality control is difficult to achieve in Congenital Heart Surgery (CHS) because of the diversity of the procedures. It is particularly needed, considering the potential adverse outcomes associated with complex cases. The aim of this project was to develop a new method based on the complexity of the procedures. The Aristotle project, involving a panel of expert surgeons, started in 1999 and included 50 pediatric surgeons from 23 countries, representing the EACTS, STS, ECHSA and CHSS. The complexity was based on the procedures as defined by the STS/EACTS International Nomenclature and was undertaken in two steps: the first step was establishing the Basic Score, which adjusts only for the complexity of the procedures. It is based on three factors: the potential for mortality, the potential for morbidity and the anticipated technical difficulty. A questionnaire was completed by the 50 centers. The second step was the development of the Comprehensive Aristotle Score, which further adjusts the complexity according to the specific patient characteristics. It includes two categories of complexity factors, the procedure-dependent and -independent factors. After considering the relationship between complexity and performance, the Aristotle Committee is proposing that: Performance = Complexity x Outcome. The Aristotle score allows precise scoring of the complexity of 145 CHS procedures. One interesting notion coming out of this study is that complexity is a constant value for a given patient, regardless of the center where the patient is operated on. The Aristotle complexity score was further applied to 26 centers reporting to the EACTS congenital database. A new display of centers is presented based on the comparison of hospital survival to complexity and to our proposed definition of performance. A complexity-adjusted method named the Aristotle Score, based on the complexity of the surgical procedures, has been developed by an international group of experts. The Aristotle score, electronically available, was introduced in the EACTS and STS databases. A validation process evaluating its predictive value is being developed.
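
    The proposed relationship Performance = Complexity x Outcome can be illustrated with a few lines of arithmetic; the center names, complexity scores, and survival rates below are entirely hypothetical:

    ```python
    # Illustrative arithmetic only: hypothetical centers with a mean Aristotle
    # basic complexity score (scale 1.5-15) and a hospital survival rate.
    centers = {"A": (7.2, 0.97), "B": (9.8, 0.94), "C": (6.1, 0.99)}

    for name, (complexity, survival) in centers.items():
        # Aristotle proposal: Performance = Complexity x Outcome.
        performance = complexity * survival
        print(f"Center {name}: performance = {performance:.2f}")
    ```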

  16. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Based on the basic optical defocus principle, an optical defocus fitting model is derived to approximate the potential-focus position. With this accurate model, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position using the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance, even under low-light conditions.

  17. Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1986-01-01

    A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is presently solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of numerical comparison with other established methods, which lack the present method's (1) insensitivity to grid cell size and aspect ratio and (2) ease of convergence rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
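
    The abstract does not give the iteration formulas, but the generic flavor of accelerating a one-step stationary iteration with a two-step scheme can be sketched as follows (weighted Jacobi on a 1-D Poisson problem plus a momentum-style second step; this illustrates the general idea, not the D'Yakanov-Gunn scheme itself):

    ```python
    import numpy as np

    n = 50
    # 1-D Poisson matrix (Dirichlet boundaries) and right-hand side.
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    omega = 2.0 / 3.0            # relaxation factor for weighted Jacobi
    D_inv = 1.0 / np.diag(A)

    def one_step(iters):
        """Plain one-step stationary iteration (weighted Jacobi)."""
        x = np.zeros(n)
        for _ in range(iters):
            x = x + omega * D_inv * (b - A @ x)
        return np.linalg.norm(b - A @ x)

    def two_step(iters, beta=0.9):
        """Two-step acceleration: reuse the previous iterate (momentum term)."""
        x = x_prev = np.zeros(n)
        for _ in range(iters):
            x_new = x + omega * D_inv * (b - A @ x) + beta * (x - x_prev)
            x_prev, x = x, x_new
        return np.linalg.norm(b - A @ x)

    print("one-step residual after 500 iterations:", one_step(500))
    print("two-step residual after 500 iterations:", two_step(500))
    ```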

  18. A new one-step procedure for pulmonary valve implantation of the melody valve: Simultaneous prestenting and valve implantation.

    PubMed

    Boudjemline, Younes

    2018-01-01

    To describe a new modification, the one-step procedure, that allows interventionists to pre-stent and implant a Melody valve simultaneously. Percutaneous pulmonary valve implantation (PPVI) is the standard of care for managing patients with a dysfunctional right ventricular outflow tract, and the approach is standardized. Patients undergoing PPVI using the one-step procedure were identified in our database. Procedural data and radiation exposure were compared to those in a matched group of patients who underwent PPVI using the conventional two-step procedure. Between January 2016 and January 2017, PPVI was performed in 27 patients (median age/range, 19.1/10-55 years) using the one-step procedure involving manual crimping of one to three bare metal stents over the Melody valve. The stent and Melody valve were delivered successfully using the Ensemble delivery system. No complications occurred. All patients had excellent hemodynamic results (median/range post-PPVI right ventricular to pulmonary artery gradient, 9/0-20 mmHg). Valve function was excellent. Median procedural and fluoroscopic times were 56 and 10.2 min, respectively, which significantly differed from those of the two-step procedure group. Similarly, the dose area product (DAP) and radiation time were statistically lower in the one-step group than in the two-step group (P < 0.001 for all variables). After a median follow-up of 8 months (range, 3-14.7), no patient underwent reintervention, and no device dysfunction was observed. The one-step procedure is a safe modification that allows interventionists to pre-stent and implant the Melody valve simultaneously. It significantly reduces procedural and fluoroscopic times as well as radiation exposure. © 2017 Wiley Periodicals, Inc.

  19. Precise non-steady-state characterization of solid active materials with no preliminary mechanistic assumptions

    DOE PAGES

    Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...

    2017-04-25

    This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of the following two major steps: 1) three "Laplace reactivities" are first determined based on the moments of the exit flow pulse response data; 2) depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as a function of the reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.
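
    Step 1 of the procedure rests on moments of the exit-flow pulse. A minimal sketch of that moment computation follows; the pulse shape, the inert-pulse reference moment, and the conversion formula are illustrative stand-ins, since the mapping from moments to rate constants depends on the chosen kinetic model:

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    # Simulated exit-flow pulse F(t) (arbitrary units) from a TAP experiment.
    t = np.linspace(1e-4, 5.0, 2000)            # time, s
    F = t * np.exp(-3.0 * t)                    # stand-in for the measured pulse

    # Step 1: moments of the pulse response, M_n = integral of t^n F(t) dt.
    M0 = trapezoid(F, t)                        # zeroth moment (amount exiting)
    M1 = trapezoid(t * F, t)                    # first moment
    mean_residence = M1 / M0                    # mean residence time

    # A conversion-style quantity relative to a hypothetical inert-pulse moment;
    # in the real method, reactivities follow from such moment ratios.
    M0_inert = 0.12
    conversion = 1.0 - M0 / M0_inert
    print(f"M0 = {M0:.4f}, residence = {mean_residence:.3f} s, X = {conversion:.2f}")
    ```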

  20. Architecture design of a generic centralized adjudication module integrated in a web-based clinical trial management system

    PubMed Central

    Zhao, Wenle; Pauls, Keith

    2015-01-01

    Background: Centralized outcome adjudication has been used widely in multi-center clinical trials in order to prevent potential biases and to reduce variations in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies. In practice, the coordination of outcome adjudication procedures in many multicenter clinical trials remains a manual process with low efficiency and high risk of delay. Motivated by the demands from two large clinical trial networks, a generic outcome adjudication module has been developed by the network's data management center within a homegrown clinical trial management system. In this paper, the system design strategy and database structure are presented. Methods: A generic database model was created to transfer different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within a homegrown clinical trial management system to provide automated coordination of outcome adjudication. Results: By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer one or two days. A total of 7,336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In one multicenter trial, 1,144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure, with a median of 23.84 days from outcome event case report form submission to adjudication procedure completion. Conclusions: A generic outcome adjudication module integrated in the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality. PMID:26464429
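
    A possible shape for the generic step model described in the Methods (one activate condition, one lock condition, one to five categorical items, one comment field) is sketched below; all class and field names are our own illustrative choices, not the authors' schema:

    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AdjudicationStep:
        """One step in a generic adjudication sequence (names are illustrative)."""
        step_number: int
        activate_condition: str            # when the step becomes active
        lock_condition: str                # when the step becomes read-only
        categorical_items: List[str]       # 1-5 coded result fields
        comment: Optional[str] = None      # free-text general comments

    @dataclass
    class AdjudicationProcedure:
        name: str
        steps: List[AdjudicationStep] = field(default_factory=list)

    # A hypothetical three-step safety-outcome procedure:
    proc = AdjudicationProcedure("safety_outcome", [
        AdjudicationStep(1, "event_form_submitted", "reviewer1_signed",
                         ["event_confirmed", "severity"]),
        AdjudicationStep(2, "step1_locked", "reviewer2_signed",
                         ["event_confirmed", "severity"]),
        AdjudicationStep(3, "reviewers_disagree", "chair_signed",
                         ["final_classification"], "tie-break by committee chair"),
    ])
    print(len(proc.steps), "steps configured")
    ```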

  1. Computer-Based Feedback in Linear Algebra: Effects on Transfer Performance and Motivation

    ERIC Educational Resources Information Center

    Corbalan, Gemma; Paas, Fred; Cuypers, Hans

    2010-01-01

    Two studies investigated the effects on students' perceptions (Study 1) and learning and motivation (Study 2) of different levels of feedback in mathematical problems. In these problems, an error made in one step of the problem-solving procedure will carry over to the following steps and consequently to the final solution. Providing immediate…

  2. STIR: Redox-Switchable Olefin Polymerization Catalysis: Electronically Tunable Ligands for Controlled Polymer Synthesis

    DTIC Science & Technology

    2013-03-28

    positions leading us to utilize a two-step procedure in which the amines were treated with methylchloroformate before being fully reduced with lithium ...was carried out using lithium aluminum hydride before undergoing a similar two-step methylation as described above to yield bisferrocenyl ligand 16...of Ni-based complex 30. CVs were run in DCM with tetrabutylammonium hexafluorophosphate electrolyte and referenced to a ferrocene standard. In

  3. Biologic considerations regarding the one and two step procedures in the management of patients with invasive carcinoma of the breast.

    PubMed

    Fisher, E R; Sass, R; Fisher, B

    1985-09-01

    Investigation of the biologic significance of delay between biopsy and mastectomy was performed upon women with invasive carcinoma of the breast in protocol four of the NSABP. Since the period of delay was two weeks or less in approximately 75 per cent of cases, no comment concerning the possible effects of longer periods can be made. Life table analyses failed to reveal any difference in ten-year survival rates between patients undergoing radical mastectomy managed by the one- and two-step procedures. Similarly, no difference in adjusted ten-year survival rate was observed between women managed by the two-step procedure who did or did not have residual tumor identified in the mastectomy specimen after the first step, or biopsy. Importantly, clinical and pathologic stages, tumor sizes, and histologic grades were similar in women managed by the one- and two-step procedures, minimizing selection bias. The material also allowed for study of the possible causative role of breast biopsy in the development of sinus histiocytosis in regional axillary lymph nodes. No difference in degree or type of this nodal reaction could be discerned in the lymph nodes of the mastectomy specimens obtained from patients who had undergone the one- and two-step procedures. This finding indicates that nodal sinus histiocytosis is indeed related to the neoplastic process, albeit in an undefined manner, rather than to the trauma of biopsy per se, as has been suggested. These results do not invalidate the use of the one-step procedure in the management of patients with carcinoma of the breast. Indeed, it is highly likely that it will be commonly used now that breast-conserving operations appear to represent a viable alternative modality for the primary surgical treatment of carcinoma of the breast. Yet it is apparent that the one-step procedure will be performed for technical and practical rather than biologic reasons.

  4. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order-of-magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions (a minimal non-dominated filter is sketched below). In the last step, a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps, the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
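
    The evaluation step searches for Pareto optimal solutions. A minimal non-dominated filter, with hypothetical design scores (the paper uses a parallel multiobjective genetic algorithm, which this does not reproduce), can be written as:

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated rows of `points` (all objectives minimized)."""
        pts = np.asarray(points, dtype=float)
        keep = np.ones(len(pts), dtype=bool)
        for i, p in enumerate(pts):
            # q dominates p if q <= p in every objective and q < p in at least one.
            dominated_by = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
            dominated_by[i] = False
            if dominated_by.any():
                keep[i] = False
        return pts[keep]

    # Hypothetical designs scored on (environmental impact, -profit), both minimized.
    designs = [(3.1, -12.0), (2.4, -9.5), (4.0, -13.5), (2.0, -8.0), (4.2, -11.0)]
    print(pareto_front(designs))
    ```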

  5. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs the ability to perform logical operations in order to adjust to inputs received from either users or real-time data from plant status databases; without this ability the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions that make up the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step determine the type of functionality that the system will generate for that step. The CBPS provides the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the underlying data structure for such a CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper describes the results and insights gained from the research activities conducted to date.
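
    A toy example of the XML-based step representation described above might look like the following; the element and attribute names are invented for illustration and are not the INL schema:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical markup: element and attribute names are illustrative only.
    step = ET.Element("step", number="3.1", type="decision")
    ET.SubElement(step, "instruction").text = (
        "Verify that backup air compressor discharge pressure is at least 90 psig.")
    ET.SubElement(step, "reference", document="P&ID-1234")
    decision = ET.SubElement(step, "decision", prompt="Pressure within limits?")
    ET.SubElement(decision, "option", value="yes", goto="3.2")
    ET.SubElement(decision, "option", value="no", goto="contingency-A")

    # The CBPS would interpret attributes like type= and goto= to drive the flow.
    print(ET.tostring(step, encoding="unicode"))
    ```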

  6. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) guides the operator down the path of relevant steps based on the current conditions. This feature reduces the operator's workload and inherently reduces the risk of incorrectly marking a step as not applicable, as well as the risk of incorrectly performing a step that should have been marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step toward gaining the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014, the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator followed along with the computer-based procedure. After each functional test, the operators were asked a series of questions designed to provide feedback on the feasibility of using a CBP system in the plant and the general user experience of the CBP system. This paper describes the field evaluation and its results in detail. For example, the results show that the context-driven job aids and the incorporated human performance tools are much liked by the auxiliary operators. The paper also describes and presents initial findings from a second field evaluation conducted at a second nuclear utility, for which a preventive maintenance work order for the HVAC system was used. In addition, there is a description of the method and objective of two field evaluations planned for late 2014 or early 2015.

  7. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
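
    The averaging methods compared in the calibration can be sketched directly; the conductivity values and coarse fraction below are illustrative, not the San Joaquin Valley parameters:

    ```python
    def equivalent_k(coarse_fraction, k_coarse, k_fine, method):
        """Equivalent hydraulic conductivity of a coarse/fine sediment mixture."""
        f = float(coarse_fraction)
        if method == "arithmetic":      # layering parallel to flow (horizontal)
            return f * k_coarse + (1 - f) * k_fine
        if method == "harmonic":        # layering perpendicular to flow (vertical)
            return 1.0 / (f / k_coarse + (1 - f) / k_fine)
        if method == "geometric":
            return k_coarse ** f * k_fine ** (1 - f)
        raise ValueError(f"unknown method: {method}")

    # Illustrative values: 40% coarse sediment, Kcoarse/Kfine contrast of 1000.
    for method in ("arithmetic", "geometric", "harmonic"):
        k = equivalent_k(0.4, 10.0, 0.01, method)
        print(f"{method:10s} K = {k:.4f} m/d")
    ```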

  8. FDI based on Artificial Neural Network for Low-Voltage-Ride-Through in DFIG-based Wind Turbine.

    PubMed

    Adouni, Amel; Chariag, Dhia; Diallo, Demba; Ben Hamed, Mouna; Sbita, Lassaâd

    2016-09-01

    Modern electrical grid codes require wind turbines to operate continuously even in the presence of severe grid faults such as those covered by Low Voltage Ride Through (LVRT) requirements. Hence, a new LVRT Fault Detection and Identification (FDI) procedure has been developed to take the appropriate decision in order to develop a suitable control strategy. To obtain better decisions and enhanced FDI during grid faults, the proposed procedure is based on the analysis of voltage indicators using a new Artificial Neural Network (ANN) architecture. Two features are extracted: the amplitude and the phase angle. The procedure is divided into two steps: the first is fault indicator generation and the second is indicator analysis for fault diagnosis. The first step is composed of six ANNs, which describe the three phases of the grid (three amplitudes and three phase angles). The second step is composed of a single ANN, which analyzes the indicators and generates a decision signal that describes the operating mode (healthy or faulty). The decision signal also identifies the fault type, allowing the four fault types to be distinguished. The diagnosis procedure is tested in simulation and on an experimental prototype. The obtained results confirm its efficiency, rapidity, robustness, and immunity to noise and unknown inputs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
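
    A highly simplified sketch of the two-stage architecture follows: stage 1 (six ANNs in the paper) is replaced here by a one-bin DFT that extracts per-phase amplitude and phase, and stage 2 is a single small neural network that maps the six indicators to a healthy/faulty decision. The signal model, fault type, and network settings are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    fs, f0, n = 1000, 50, 200                 # sampling rate, grid frequency, samples
    t = np.arange(n) / fs

    def indicators(amps, phases):
        """Stage 1 (simplified): per-phase amplitude/phase from a one-bin DFT.
        The paper uses six ANNs here; a DFT keeps the sketch short."""
        out = []
        for a, ph in zip(amps, phases):
            v = a * np.sin(2 * np.pi * f0 * t + ph) + 0.01 * rng.normal(size=n)
            bin_ = v @ np.exp(-2j * np.pi * f0 * t) * 2 / n
            out += [abs(bin_), np.angle(bin_)]
        return out

    # Synthetic training set: healthy vs. a single-phase voltage sag ("fault 1").
    X, y = [], []
    for _ in range(300):
        healthy = rng.random() < 0.5
        amps = np.ones(3) if healthy else np.array([0.3, 1.0, 1.0])
        X.append(indicators(amps, np.array([0, -2 * np.pi / 3, 2 * np.pi / 3])))
        y.append(0 if healthy else 1)

    # Stage 2: a single ANN maps the six indicators to a healthy/faulty decision.
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```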

  9. Architecture design of a generic centralized adjudication module integrated in a web-based clinical trial management system.

    PubMed

    Zhao, Wenle; Pauls, Keith

    2016-04-01

    Centralized outcome adjudication has been used widely in multicenter clinical trials in order to prevent potential biases and to reduce variations in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies. In practice, the coordination of outcome adjudication procedures in many multicenter clinical trials remains a manual process with low efficiency and high risk of delay. Motivated by the demands from two large clinical trial networks, a generic outcome adjudication module has been developed by the network's data management center within a homegrown clinical trial management system. In this article, the system design strategy and database structure are presented. A generic database model was created to transfer different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within a homegrown clinical trial management system to provide automated coordination of outcome adjudication. By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer 1 or 2 days. A total of 7336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In one multicenter trial, 1144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure, with a median of 23.84 days from outcome event case report form submission to adjudication procedure completion. A generic outcome adjudication module integrated in the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality. © The Author(s) 2015.

  10. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
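
    The two-step multiple-testing procedure (a voxelwise threshold followed by a minimal cluster size) can be sketched in a few lines; the threshold, cluster size, and toy statistic map below are illustrative choices, not the paper's settings:

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    # Toy z-statistic map: noise plus one genuine 6x6 activation blob.
    zmap = rng.normal(size=(64, 64))
    zmap[20:26, 30:36] += 4.0

    # Step 1: voxelwise threshold (uncorrected, e.g. z > 2.3).
    supra = zmap > 2.3

    # Step 2: keep only clusters of at least k contiguous voxels.
    labels, n_clusters = ndimage.label(supra)
    sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))
    k_min = 10
    surviving = np.isin(labels, np.flatnonzero(sizes >= k_min) + 1)
    print(f"{n_clusters} clusters, {surviving.sum()} voxels survive k >= {k_min}")
    ```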

  11. Helping Students Adapt to Computer-Based Encrypted Examinations

    ERIC Educational Resources Information Center

    Baker-Eveleth, Lori; Eveleth, Daniel M.; O'Neill, Michele; Stone, Robert W.

    2006-01-01

    The College of Business and Economics at the University of Idaho conducted a pilot study that used commercially available encryption software called Securexam to deliver computer-based examinations. A multi-step implementation procedure was developed, implemented, and then evaluated on the basis of what students viewed as valuable. Two key aspects…

  12. Two-step liquid phase microextraction combined with capillary electrophoresis: a new approach to simultaneous determination of basic and zwitterionic compounds.

    PubMed

    Nojavan, Saeed; Moharami, Arezoo; Fakhari, Ali Reza

    2012-08-01

    In this work, a two-step hollow-fiber-based liquid-phase microextraction procedure was evaluated for extraction of the zwitterionic cetirizine (CTZ) and the basic hydroxyzine (HZ) from human plasma. In the first step of extraction, the pH of the sample was adjusted to 5.0 in order to promote liquid-phase microextraction of the zwitterionic CTZ. In the second step, the pH of the sample was increased to 11.0 for extraction of the basic HZ. In this procedure, the extraction times for the first and second steps were 30 and 20 min, respectively. Owing to the high ratio between the volumes of the donor and acceptor phases, CTZ and HZ were enriched by factors of 280 and 355, respectively. The linearity of the analytical method was verified for both compounds in the range of 10-500 ng mL(-1) (R(2) > 0.999). The limit of quantification (S/N = 10) was 10 ng mL(-1) for both CTZ and HZ, while the limit of detection (S/N = 3) was 3 ng mL(-1) for both compounds. Intraday and interday relative standard deviations (RSDs, n = 6) were in the range of 6.5-16.2%. This procedure enabled CTZ and HZ to be analyzed simultaneously by capillary electrophoresis. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
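
    A bare-bones sketch of the two-step estimation follows, with treatment randomized for simplicity (the paper additionally handles confounded treatment via propensity score weights) and with simulated data; the final comment marks exactly the issue the paper addresses, namely that naive standard errors ignore step-1 uncertainty:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2000
    x = rng.normal(size=n)                          # pretreatment covariate
    z = rng.binomial(1, 0.5, size=n)                # randomized treatment
    m = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * z + x))))  # binary mediator
    y = 1.0 * z + 2.0 * m + x + rng.normal(size=n)  # outcome; direct effect = 1.0

    # Step 1: model the mediator probability given treatment and covariates.
    design = sm.add_constant(np.column_stack([z, x]))
    fit = sm.Logit(m, design).fit(disp=0)
    p_actual = fit.predict(design)                  # P(M = 1 | Z, X)
    p_ctrl = fit.predict(sm.add_constant(np.column_stack([np.zeros(n), x])))

    # Step 2: ratio-of-mediator-probability weights shift the treated units'
    # mediator distribution to the control arm's, isolating the direct effect.
    w = np.where(m == 1, p_ctrl / p_actual, (1 - p_ctrl) / (1 - p_actual))
    direct = np.average(y[z == 1], weights=w[z == 1]) - y[z == 0].mean()
    print(f"weighted direct-effect estimate: {direct:.2f}")

    # Naive SEs here ignore the sampling uncertainty in the step-1 weights;
    # the paper obtains valid SEs by stacking the score functions of both steps.
    ```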

  14. Displacement-dispersive liquid-liquid microextraction based on solidification of floating organic drop of trace amounts of palladium in water and road dust samples prior to graphite furnace atomic absorption spectrometry determination.

    PubMed

    Ghanbarian, Maryam; Afzali, Daryoush; Mostafavi, Ali; Fathirad, Fariba

    2013-01-01

    A new displacement-dispersive liquid-liquid microextraction method based on the solidification of floating organic drop was developed for separation and preconcentration of Pd(ll) in road dust and aqueous samples. This method involves two steps of dispersive liquid-liquid microextraction based on solidification. In Step 1, Cu ions react with diethyldithiocarbamate (DDTC) to form Cu-DDTC complex, which is extracted by dispersive liquid-liquid microextraction based on a solidification procedure using 1-undecanol (extraction solvent) and ethanol (dispersive solvent). In Step 2, the extracted complex is first dispersed using ethanol in a sample solution containing Pd ions, then a dispersive liquid-liquid microextraction based on a solidification procedure is performed creating an organic drop. In this step, Pd(ll) replaces Cu(ll) from the pre-extracted Cu-DDTC complex and goes into the extraction solvent phase. Finally, the Pd(ll)-containing drop is introduced into a graphite furnace using a microsyringe, and Pd(ll) is determined using atomic absorption spectrometry. Several factors that influence the extraction efficiency of Pd and its subsequent determination, such as extraction and dispersive solvent type and volume, pH of sample solution, centrifugation time, and concentration of DDTC, are optimized.

  15. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012); doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014); doi:10.1117/1.JBO.19.7.077002] studies to accurately extract the optical properties and top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure that addresses the two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimates. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction in computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. The strategies used in the proposed procedure could benefit both the modeling and experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
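
    The two-step structure (lookup-table initial estimate, then iterative refinement) can be sketched with a stand-in forward model; the model, grid, and noise level are invented for illustration and are much simpler than a two-layered tissue model:

    ```python
    import numpy as np
    from itertools import product
    from scipy.optimize import least_squares

    wavelengths = np.linspace(450, 650, 30)

    def model(params, wl):
        """Stand-in forward model: two 'optical properties' -> reflectance."""
        a, b = params
        return np.exp(-a * wl / 500.0) / (1.0 + b * (wl / 500.0) ** 2)

    # Step 1: coarse lookup table over the parameter grid; the nearest spectrum
    # gives the initial guess without any iterative model evaluations.
    grid = list(product(np.linspace(0.1, 2.0, 20), np.linspace(0.1, 2.0, 20)))
    table = np.array([model(p, wavelengths) for p in grid])

    truth = (0.8, 1.3)
    noise = 1e-3 * np.random.default_rng(4).normal(size=wavelengths.size)
    measured = model(truth, wavelengths) + noise
    x0 = grid[int(np.argmin(((table - measured) ** 2).sum(axis=1)))]

    # Step 2: local iterative refinement starting from the table estimate.
    fit = least_squares(lambda p: model(p, wavelengths) - measured, x0)
    print("initial:", np.round(x0, 2), "refined:", np.round(fit.x, 3))
    ```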

  16. Reverse-Time Imaging Based on Full-Waveform Inverted Velocity Model for Nondestructive Testing of Heterogeneous Engineered Structures

    NASA Astrophysics Data System (ADS)

    Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.

    2017-12-01

    Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has often been degraded non-uniformly by environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures, but building a wave velocity model that captures small, high-contrast defects is difficult because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of the large-scale background variations of the engineered structure. To reduce computational cost and improve detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering the background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, computational power, and the imaging workflow itself, so that the procedure can be efficiently applied to geometrically complex 3D solids and waveguides. Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications at regional scales as well.

  17. Direct Sensor Orientation of a Land-Based Mobile Mapping System

    PubMed Central

    Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua

    2011-01-01

    A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. The major drawback of such systems is the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with a relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the mounting parameters estimated using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using the single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
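
    The role of the mounting parameters in direct geo-referencing can be sketched with the standard lever-arm/boresight composition; the numbers below are hypothetical, and the formulation is the generic textbook one rather than the authors' estimation procedure:

    ```python
    import numpy as np

    def ground_point(r_gnss, R_body, lever_arm, R_boresight, ray_cam, scale):
        """Direct geo-referencing: map a camera-frame ray to mapping-frame
        coordinates using the GPS/INS pose plus the system mounting parameters
        (lever arm and boresight)."""
        return r_gnss + R_body @ (lever_arm + R_boresight @ (scale * ray_cam))

    def rot_z(deg):
        """Rotation about the z-axis by `deg` degrees."""
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # Hypothetical values: GPS/INS position and heading, calibrated mounting.
    p = ground_point(r_gnss=np.array([100.0, 200.0, 50.0]),
                     R_body=rot_z(30.0),                 # body-to-mapping rotation
                     lever_arm=np.array([0.5, 0.0, -0.3]),
                     R_boresight=rot_z(0.2),             # small misalignment
                     ray_cam=np.array([0.1, 0.2, 1.0]),  # image ray, camera frame
                     scale=12.0)
    print(p)
    ```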

  18. Simulation center training as a means to improve resident performance in percutaneous noncontinuous CT-guided fluoroscopic procedures with dose reduction.

    PubMed

    Mendiratta-Lala, Mishal; Williams, Todd R; Mendiratta, Vivek; Ahmed, Hafeez; Bonnett, John W

    2015-04-01

    The purpose of this study was to evaluate the effectiveness of multifaceted simulation-based resident training for CT-guided fluoroscopic procedures by measuring procedural and technical skills, radiation dose, and procedure times before and after simulation training. A prospective analysis included 40 radiology residents and eight staff radiologists. Residents took an online pretest to assess baseline procedural knowledge. Second- through fourth-year residents' baseline technical skills with a procedural phantom were evaluated. First- through third-year residents then underwent formal didactic and simulation-based procedural and technical training with one of two interventional radiologists, followed by 1 month of supervised phantom-based practice. Thereafter, residents underwent final written and practical examinations. The practical examination included essential items from a 20-point checklist, including site and side marking, consent, time-out, and sterile technique, along with a technical skills portion assessing pedal steps, radiation dose, needle redirects, and procedure time. The results indicated statistically significant improvement in procedural and technical skills after simulation training. For residents, the median number of pedal steps decreased by three (p=0.001), median dose decreased by 15.4 mGy (p<0.001), median procedure time decreased by 4.0 minutes (p<0.001), median number of needle redirects decreased by 1.0 (p=0.005), and median number of 20-point checklist items successfully completed increased by three (p<0.001). The results suggest that procedural skills can be acquired and improved by simulation-based training of residents, regardless of experience. CT simulation training decreases procedural time, decreases radiation dose, and improves resident efficiency and confidence, which may transfer to clinical practice with improved patient care and safety.

  19. NASA Spinoff Article: Automated Procedures To Improve Safety on Oil Rigs

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha

    2013-01-01

    On May 11th, 2013, two astronauts emerged from the interior of the International Space Station (ISS) and worked their way toward the far end of the spacecraft. Over the next 5.5 hours, the two replaced an ammonia pump that had developed a significant leak a few days before. On the ISS, ammonia serves the vital role of cooling components; in this case, one of the station's eight solar arrays. Throughout the extravehicular activity (EVA), the astronauts stayed in constant contact with mission control: every movement, every action strictly followed a carefully planned set of procedures to maximize crew safety and the chances of success. Though the leak had come as a surprise, NASA was prepared to handle it swiftly thanks in part to the thousands of procedures that have been written to cover every aspect of the ISS's operations. The ISS is not unique in this regard: every NASA mission requires well-written procedures, or detailed lists of step-by-step instructions, that cover how to operate equipment in any scenario, from normal operations to the challenges created by malfunctioning hardware or software. Astronauts and mission control train and drill extensively in procedures to ensure they know what the proper procedures are and when they should be used. These procedures used to be exclusively written on paper, but over the past decade, NASA has transitioned to digital formats. Electronic-based documentation simplifies storage and use, allowing astronauts and flight controllers to find instructions more quickly and display them through a variety of media. Electronic procedures are also a crucial step toward automation: once instructions are digital, procedure display software can be designed to assist in authoring, reviewing, and even executing them.

  20. Vitrification of zona-free rabbit expanded or hatching blastocysts: a possible model for human blastocysts.

    PubMed

    Cervera, R P; Garcia-Ximénez, F

    2003-10-01

    The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws, as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 micro m; II: diameter 200-299 micro m; III: diameter ≥300 micro m). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre-equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity of vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.

  1. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important aspects of accident mitigation in nuclear power plants is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. Operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to carry out their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Such a feature would also increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The predicted action time for each step is acquired based on similar past accident cases and support vector regression. The derived time is then processed and displayed on a CBP user interface.
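
    A minimal sketch of the step-time prediction idea, assuming scikit-learn's SVR as the regressor; the feature names, training values and time limit below are invented placeholders, not data from the study.

```python
# Hypothetical sketch: predicting the execution time of an EOP step with
# support vector regression. Features and targets are invented for
# illustration; a real system would use logged drill/accident data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [leak rate, SG pressure, active alarms] for a past SGTR case;
# target: seconds the operator needed for this procedure step.
X_train = np.array([[12.0, 6.2, 3], [8.5, 6.8, 2], [15.1, 5.9, 5], [10.3, 6.5, 4]])
y_train = np.array([95.0, 70.0, 130.0, 88.0])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=5.0))
model.fit(X_train, y_train)

predicted_step_time = model.predict(np.array([[11.0, 6.3, 3]]))[0]
time_limit = 120.0  # allowable time for the step (assumed)
print(f"predicted {predicted_step_time:.0f}s of {time_limit:.0f}s allowed")
```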

  2. MO-DE-207A-10: One-Step CT Reconstruction for Metal Artifact Reduction by a Modification of Penalized Weighted Least-Squares (PWLS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J

    Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high atomic number objects. Most of the techniques devised to reduce this artifact utilize a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, which conducts an additional reconstruction with the modified projection data from the initial reconstruction. This procedure does not perform consistently well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, while not manipulating the projection. PWLS for CT reconstruction has been investigated using a pre-defined weight, based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Result: Visual inspection showed that the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques. Numerically, the new approach lowered the normalized root-mean-square error by about 30 and 60% for the two cases, respectively, compared to the two-step method. Conclusion: A new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
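
    To make the idea concrete, here is a hedged numpy sketch of a PWLS objective with a smoothed TV penalty, minimized by plain gradient descent. The element-wise weight w_i = exp(-y_i) (a Poisson-motivated proxy: variance grows as photon counts fall) and all sizes and step sizes are assumptions for illustration, not the paper's exact definitions; the system matrix is a random stand-in, not a real CT geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                        # image is n x n
A = rng.random((400, n * n))  # random stand-in for the CT system matrix
x_true = np.zeros((n, n)); x_true[5:11, 5:11] = 1.0
y = A @ x_true.ravel()        # noiseless "projections"
w = np.exp(-y / y.max())      # assumed Poisson-motivated weight: low trust
                              # where attenuation (e.g. metal) is high

def tv_grad(img, eps=1e-3):
    """Approximate gradient of the eps-smoothed isotropic TV seminorm."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=0, prepend=(gx / mag)[:1, :])
    div_y = np.diff(gy / mag, axis=1, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)

x, beta, step = np.zeros(n * n), 0.1, 1e-5
for _ in range(200):          # plain gradient descent on the PWLS+TV objective
    grad = 2 * A.T @ (w * (A @ x - y)) + beta * tv_grad(x.reshape(n, n)).ravel()
    x -= step * grad
print("mean reconstruction error:", float(np.abs(x - x_true.ravel()).mean()))
```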

  3. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    PubMed Central

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-01-01

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods. PMID:27669250
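
    The alternating structure described above can be sketched in a few lines. This is not the authors' ODL-CDG algorithm: ISTA is used as one common choice for the sparse coding step, the self-coherence penalty enters as a gradient term proportional to D(DᵀD − I), and all sizes and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 32, 48, 200                  # signal dim, atoms, training signals
Y = rng.standard_normal((n, m))        # stand-in sensor readings
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms

def sparse_code(D, Y, lam=0.2, iters=50):
    """Sparse coding step via ISTA (a common choice, not the paper's)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the data term
    for _ in range(iters):
        X -= (D.T @ (D @ X - Y)) / L
        X = np.sign(X) * np.maximum(np.abs(X) - lam / L, 0.0)
    return X

for _ in range(10):                    # alternate the two steps
    X = sparse_code(D, Y)
    G = D.T @ D - np.eye(k)            # self-coherence: off-diagonal Gram mass
    grad = (D @ X - Y) @ X.T + 0.1 * D @ G   # fit gradient + coherence penalty
    D -= 1e-3 * grad
    D /= np.linalg.norm(D, axis=0)     # re-normalize atoms
print("mean |off-diag coherence|:", float(np.abs(D.T @ D - np.eye(k)).mean()))
```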

  4. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    PubMed

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It's theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  5. Preparation of hydrophobic organic aerogels

    DOEpatents

    Baumann, Theodore F.; Satcher, Jr., Joe H.; Gash, Alexander E.

    2007-11-06

    Synthetic methods for the preparation of hydrophobic organic aerogels. One method involves the sol-gel polymerization of 1,3-dimethoxybenzene or 1,3,5-trimethoxybenzene with formaldehyde in non-aqueous solvents. Using a procedure analogous to the preparation of resorcinol-formaldehyde (RF) aerogels, this approach generates wet gels that can be dried using supercritical solvent extraction to generate the new organic aerogels, or air dried to produce a xerogel. Other methods involve the sol-gel polymerization of 1,3,5-trihydroxybenzene (phloroglucinol) or 1,3-dihydroxybenzene (resorcinol) and various aldehydes in non-aqueous solvents. These methods use a procedure analogous to the one-step base and two-step base/acid catalyzed polycondensation of phloroglucinol and formaldehyde, but the base catalyst used is triethylamine. These methods can be applied to a variety of other sol-gel precursors and solvent systems. These hydrophobic organic aerogels have numerous potential applications in the fields of material absorbers and water-proof insulation.

  6. Preparation of hydrophobic organic aerogels

    DOEpatents

    Baumann, Theodore F.; Satcher, Jr., Joe H.; Gash, Alexander E.

    2004-10-19

    Synthetic methods for the preparation of hydrophobic organic aerogels. One method involves the sol-gel polymerization of 1,3-dimethoxybenzene or 1,3,5-trimethoxybenzene with formaldehyde in non-aqueous solvents. Using a procedure analogous to the preparation of resorcinol-formaldehyde (RF) aerogels, this approach generates wet gels that can be dried using supercritical solvent extraction to generate the new organic aerogels, or air dried to produce a xerogel. Other methods involve the sol-gel polymerization of 1,3,5-trihydroxybenzene (phloroglucinol) or 1,3-dihydroxybenzene (resorcinol) and various aldehydes in non-aqueous solvents. These methods use a procedure analogous to the one-step base and two-step base/acid catalyzed polycondensation of phloroglucinol and formaldehyde, but the base catalyst used is triethylamine. These methods can be applied to a variety of other sol-gel precursors and solvent systems. These hydrophobic organic aerogels have numerous potential applications in the fields of material absorbers and water-proof insulation.

  7. Synthesis of soybean oil-based polymeric surfactants in supercritical carbon dioxide and investigation of their surface properties

    USDA-ARS?s Scientific Manuscript database

    This paper reports the preparation of polymeric surfactants (HPSO) via a two-step synthetic procedure: polymerization of soybean oil (PSO) in supercritical carbon dioxide, followed by hydrolysis of PSO with a base. HPSO was characterized and identified by using a combination of FTIR, 1H NMR, 13C...

  8. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Z; Yu, G; Qin, S

    Purpose: This study investigated how the quality of the adapted plan is affected by inter-fractional anatomy deformation when using one-step and two-step optimization for an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly to produce IMRT plans by one-step and two-step algorithms respectively, and the prescribed dose was set as 60 Gy on the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation, including a target superior shift of 0.5 cm, 0.3 cm contraction, 0.3 cm expansion and 45-degree rotation. Based on these four anatomy deformations, adapted plans, regenerated plans and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms respectively to optimize the original plans, and regenerated plans were manually created by experienced physicists. Non-adapted plans were produced by recalculating the dose distribution based on the corresponding original plans. The deviations among these three plans were statistically analyzed by paired T-test. Results: In the PTV superior-shift case, adapted plans had significantly better PTV coverage using the two-step algorithm compared with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target-contraction case, with almost the same PTV coverage, the total lung received a lower dose using the one-step algorithm than the two-step algorithm (p=0.0143, 0.0126 for V20, Dmean respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: In geometry deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency and accuracy when the target was displaced. We want to thank Dr. Lei Xing and Dr. Yong Yang in the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and China Postdoctoral Science Foundation (2015T80739, 2014M551949).

  9. Evaluating the efficiency of a zakat institution over a period of time using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin

    2017-08-01

    It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past, so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process, as it involves the presence of multiple inputs or/and outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from 2007 to 2015, treating each year as a decision making unit. Two inputs (i.e. number of staff and number of branches) and two outputs (i.e. total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement were discussed based on the results.
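
    For readers unfamiliar with the envelopment form, the following sketch solves the input-oriented dual CCR model with scipy's linear-programming routine. The two-input/two-output layout mirrors the abstract, but the numbers are invented placeholders, not Pusat Zakat Sabah's data, and the slack-based second model is omitted.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[30, 32, 35], [8, 9, 10]], float)   # inputs  x years (DMUs)
Y = np.array([[5.0, 6.1, 7.4], [4.2, 5.0, 6.8]])  # outputs x years

def ccr_efficiency(j0):
    """Input-oriented dual CCR: min theta s.t. X@lam <= theta*x0, Y@lam >= y0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lambda]
    A_in = np.c_[-X[:, j0], X]                    # X@lam - theta*x0 <= 0
    A_out = np.c_[np.zeros(s), -Y]                # -Y@lam <= -y0
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta = 1.0 means the year lies on the efficient frontier

for year, j in zip([2013, 2014, 2015], range(X.shape[1])):
    print(year, round(ccr_efficiency(j), 3))
```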

  10. Modeling Woven Polymer Matrix Composites with MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)

    2000-01-01

    NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.

  11. A New Two-Step Approach for Hands-On Teaching of Gene Technology: Effects on Students' Activities During Experimentation in an Outreach Gene Technology Lab

    NASA Astrophysics Data System (ADS)

    Scharfenberg, Franz-Josef; Bogner, Franz X.

    2011-08-01

    Emphasis on improving higher level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach using a quasi-experimental design with the conventional one-step mode. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with well-fitting intra- and inter-observer scores with respect to reliability). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled as 'all-rounders' and as 'passive students', and two clusters specific to each approach: 'observers' as well as 'high-experimenters' were identified only within the one-step approach whereas under the two-step conditions 'managers' and 'scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.

  12. Direct methylation procedure for converting fatty amides to fatty acid methyl esters in feed and digesta samples.

    PubMed

    Jenkins, T C; Thies, E J; Mosley, E E

    2001-05-01

    Two direct methylation procedures often used for the analysis of total fatty acids in biological samples were evaluated for their application to samples containing fatty amides. Methylation of 5 mg of oleamide (cis-9-octadecenamide) in a one-step (methanolic HCl for 2 h at 70 degrees C) or a two-step (sodium methoxide for 10 min at 50 degrees C followed by methanolic HCl for 10 min at 80 degrees C) procedure gave 59 and 16% conversions of oleamide to oleic acid, respectively. Oleic acid recovery from oleamide was increased to 100% when the incubation in methanolic HCl was lengthened to 16 h and increased to 103% when the incubation in methoxide was modified to 24 h at 100 degrees C. However, conversion of oleamide to oleic acid in an animal feed sample was incomplete for the modified (24 h) two-step procedure but complete for the modified (16 h) one-step procedure. Unsaturated fatty amides in feed and digesta samples can be converted to fatty acid methyl esters by incubation in methanolic HCl if the time of exposure to the acid catalyst is extended from 2 to 16 h.

  13. Microfluidic step-emulsification in a cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Chakraborty, Indrajit; Leshansky, Alexander M.

    2016-11-01

    The model microfluidic device for high-throughput droplet generation in a confined cylindrical geometry is investigated numerically. The device comprises a core-annular pressure-driven flow of two immiscible viscous liquids through a cylindrical capillary connected co-axially to a tube of larger diameter through a sudden expansion, mimicking the microfluidic step-emulsifier (1). To study this problem, numerical simulations of the axisymmetric Navier-Stokes equations have been carried out using an interface capturing procedure based on coupled level set and volume-of-fluid (CLSVOF) methods. The accuracy of the numerical method was validated against the predictions of the linear stability analysis of core-annular two-phase flow in a cylindrical capillary. Three distinct flow regimes can be identified: the dripping (D) instability near the entrance to the capillary, and the step- (S) and balloon- (B) emulsification at the step-like expansion. Based on the simulation results we present a phase diagram quantifying the transitions between the various regimes in the plane of the capillary number and the flow-rate ratio. MICROFLUSA EU H2020 project.

  14. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When fitting a variogram over a large range, an optimal fit cannot always be obtained automatically; an interactive human-computer procedure can instead be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between estimated and measured values for the different theoretical models were computed, and the corresponding graphs are shown. The results indicate that the fit based on the two-step spherical model was the best, and that the one-step spherical model performed better than the linear function model.
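
    A compact sketch of ordinary Kriging with a one-step spherical variogram, the class of model discussed above. The sample coordinates, counts and variogram parameters (nugget c0, sill c, range a) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def spherical(h, c0=0.1, c=1.0, a=30.0):
    """One-step spherical variogram model gamma(h)."""
    h = np.asarray(h, float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, c0 + c, np.where(h == 0, 0.0, g))

pts = np.array([[0, 0], [10, 0], [0, 12], [15, 18]], float)  # sample sites
z = np.array([4.0, 6.0, 5.0, 9.0])                           # insect counts
x0 = np.array([6.0, 6.0])                                    # estimation point

n = len(z)
H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
# Ordinary Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
A = np.ones((n + 1, n + 1)); A[:n, :n] = spherical(H); A[n, n] = 0.0
b = np.r_[spherical(np.linalg.norm(pts - x0, axis=1)), 1.0]
w = np.linalg.solve(A, b)[:n]                                # Kriging weights
print("BLUE estimate at x0:", float(w @ z))
```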

  15. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in the whole brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
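
    The two-step structure can be illustrated schematically: a greedy sparse approximation on a coarse leadfield flags active regions, then a finer solve is restricted to those regions. The random leadfields stand in for a real head model, the number of active regions is assumed, and the authors' stochastic estimator with energy/directional constraints is replaced here by ordinary least squares.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(2)
n_elec, n_coarse, fine_per_region = 32, 20, 8
L_coarse = rng.standard_normal((n_elec, n_coarse))      # coarse leadfield
v = L_coarse[:, [3, 11]] @ np.array([1.0, -0.7])        # measured potentials

# Step 1: greedy sparse approximation (orthogonal matching pursuit).
residual, active = v.copy(), []
for _ in range(2):                                      # assume 2 active regions
    j = int(np.argmax(np.abs(L_coarse.T @ residual)))
    active.append(j)
    sol, *_ = lstsq(L_coarse[:, active], v, rcond=None)
    residual = v - L_coarse[:, active] @ sol
print("retained regions:", sorted(active))

# Step 2: fine discretization only inside the retained regions.
L_fine = rng.standard_normal((n_elec, len(active) * fine_per_region))
s_fine, *_ = lstsq(L_fine, v, rcond=None)               # detailed activity
```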

  16. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
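
    As a toy illustration of step one, a derivative-based segmentation with selective post-processing, the sketch below applies a Laplacian-of-Gaussian filter, thresholds it, labels components and discards small ones. The authors' exact derivative operator is not specified here; sigma, threshold and size cutoff are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(3)
img = rng.normal(0.0, 0.05, (64, 64))          # background noise
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(16, 20), (40, 44), (50, 12)]:  # three blob-like "nuclei"
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

log = -ndi.gaussian_laplace(img, sigma=3.0)    # bright blobs -> positive peaks
mask = log > 0.02                              # derivative-based foreground
labels, n = ndi.label(mask)
sizes = ndi.sum(np.ones_like(img), labels, index=list(range(1, n + 1)))
kept = sum(1 for s in sizes if s >= 20)        # selective post-processing
print(f"{kept} nuclei segmented out of {n} raw components")
```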

  17. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  18. A rapid, one step molecular identification of Trichoderma citrinoviride and Trichoderma reesei.

    PubMed

    Saroj, Dina B; Dengeti, Shrinivas N; Aher, Supriya; Gupta, Anil K

    2015-06-01

    Trichoderma species are widely used as production hosts for industrial enzymes. Identification of Trichoderma species requires a complex molecular biology based identification involving amplification and sequencing of multiple genes. Industrial laboratories are required to run identification tests repeatedly in cell banking procedures and also to prove the absence of the production host in the product. Such demands can be fulfilled by a brief method which enables confirmation of strain identity. This communication describes a one-step identification method for two common Trichoderma species, T. citrinoviride and T. reesei, based on identification of a polymorphic region in the nucleotide sequence of translation elongation factor 1 alpha. A unique forward primer and a common reverse primer resulted in 153 and 139 bp amplicons for T. citrinoviride and T. reesei, respectively. Further simplification was introduced by using mycelium as the template for PCR amplification. The method described in this communication allows rapid, one-step identification of the two Trichoderma species.

  19. Detailed computational procedure for design of cascade blades with prescribed velocity distributions in compressible potential flows

    NASA Technical Reports Server (NTRS)

    Costello, George R; Cummings, Robert L; Sinnette, John T , Jr

    1952-01-01

    A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.

  20. AKAPS Act in a Two-Step Mechanism of Memory Acquisition

    PubMed Central

    Scheunemann, Lisa; Skroblin, Philipp; Hundsrucker, Christian; Klussmann, Enno; Efetova, Marina

    2013-01-01

    Defining the molecular and neuronal basis of associative memories rests upon behavioral preparations that yield high performance due to selection of salient stimuli, strong reinforcement, and repeated conditioning trials. One of those preparations is the Drosophila aversive olfactory conditioning procedure, where animals initiate multiple memory components after experiencing a single-cycle training procedure. Here, we explored the analysis of acquisition dynamics as a means to define memory components and revealed strong correlations between particular chronologies of shock impact and number experienced during the associative training situation and subsequent performance of conditioned avoidance. Analyzing acquisition dynamics in Drosophila memory mutants revealed that rutabaga (rut)-dependent cAMP signals couple in a divergent fashion to support different memory components. In the case of anesthesia-sensitive memory (ASM) we identified a characteristic two-step mechanism that links rut-AC1 to A-kinase anchoring protein (AKAP)-sequestered protein kinase A at the level of Kenyon cells, a recognized center of olfactory learning within the fly brain. We propose that integration of rut-derived cAMP signals at the level of AKAPs might serve as a counting register that accounts for the two-step mechanism of ASM acquisition. PMID:24174675

  1. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
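
    A minimal sketch of why the pressure projection method needs no subiterations: one Poisson solve per physical time step (here via FFT on a periodic grid) returns a divergence-free velocity field. This is a schematic Chorin-style step under assumed periodic boundary conditions, not the turbopump solver described in the paper.

```python
import numpy as np

N, dt = 64, 0.01
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)      # wavenumbers on a unit box
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2; K2[0, 0] = 1.0                # avoid dividing the mean mode

rng = np.random.default_rng(4)
u_star = rng.standard_normal((N, N))              # provisional velocity after
v_star = rng.standard_normal((N, N))              # an advection/diffusion step

div_hat = 1j * KX * np.fft.fft2(u_star) + 1j * KY * np.fft.fft2(v_star)
p_hat = div_hat / (-K2 * dt)                      # solve  lap(p) = div(u*)/dt
u = u_star - dt * np.real(np.fft.ifft2(1j * KX * p_hat))
v = v_star - dt * np.real(np.fft.ifft2(1j * KY * p_hat))

div = np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v))
print("max |div u| after projection:", float(np.abs(div).max()))  # ~1e-13
```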

  2. The Predictive Validity of Using Admissions Testing and Multiple Mini-Interviews in Undergraduate University Admissions

    ERIC Educational Resources Information Center

    Makransky, Guido; Havmose, Philip; Vang, Maria Louison; Andersen, Tonny Elmose; Nielsen, Tine

    2017-01-01

    The aim of this study was to evaluate the predictive validity of a two-step admissions procedure that included a cognitive ability test followed by multiple mini-interviews (MMIs) used to assess non-cognitive skills, compared to grade-based admissions relative to subsequent drop-out rates and academic achievement after one and two years of study.…

  3. Analytic methods for design of wave cycles for wave rotor core engines

    NASA Technical Reports Server (NTRS)

    Resler, Edwin L., Jr.; Mocsari, Jeffrey C.; Nalim, M. R.

    1993-01-01

    A procedure to design a preliminary wave rotor cycle for any application is presented. To complete a cycle with heat addition there are two separate but related design steps that must be followed. The 'wave' boundary conditions determine the allowable amount of heat added in any case, and the ensuing wave pattern requires certain pressure discharge conditions to allow the process to be made cyclic. This procedure, when applied, gives a first estimate of the cycle performance and the necessary information for the next step in the design process, namely the application of a characteristic-based or other appropriate detailed one-dimensional wave calculation that locates the proper porting around the periphery of the wave rotor. Four examples of the design procedure are given to demonstrate its utility and generality. These examples also illustrate the large gains in performance that could be realized with the use of wave rotor enhanced propulsion cycles.

  4. How to Create Videos for Extension Education: An Innovative Five-Step Procedure

    ERIC Educational Resources Information Center

    Dev, Dipti A.; Blitch, Kimberly A.; Hatton-Bowers, Holly; Ramsay, Samantha; Garcia, Aileen S.

    2018-01-01

    Although the benefits of using video as a learning tool in Extension programs are well known, less is understood about effective methods for creating videos. We present a five-step procedure for developing educational videos that focus on evidence-based practices, and we provide practical examples from our use of the five steps in creating a video…

  5. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.

  6. The importance of a two-step impression procedure for complete denture fabrication: a systematic review of the literature.

    PubMed

    Regis, R R; Alves, C C S; Rocha, S S M; Negreiros, W A; Freitas-Pontes, K M

    2016-10-01

    The literature has questioned the real need for some clinical and laboratory procedures considered essential for achieving better results in complete denture fabrication. The aim of this study was to review the current literature concerning the relevance of a two-step impression procedure for achieving better clinical results in fabricating conventional complete dentures. Through an electronic search strategy of the PubMed/MEDLINE database, randomised controlled clinical trials comparing complete denture fabrication in adults with one-step versus two-step impression procedures were identified. The selections were made by three independent reviewers. Among the 540 titles initially identified, four studies (seven published papers) reporting on 257 patients, evaluating aspects such as oral health-related quality of life, patient satisfaction with dentures in use, masticatory performance and chewing ability, denture quality, and direct and indirect costs, were considered eligible. The quality of the included studies was assessed according to the Cochrane guidelines. The clinical studies considered for this review suggest that a two-step impression procedure may not be mandatory for the success of conventional complete denture fabrication regarding a variety of clinical aspects of denture quality and patients' perceptions of the treatment. © 2016 John Wiley & Sons Ltd.

  7. Techniques for land use change detection using Landsat imagery

    NASA Technical Reports Server (NTRS)

    Angelici, G. L.; Bryant, N. A.; Friedman, S. Z.

    1977-01-01

    A variety of procedures were developed for the delineation of areas of land use change using Landsat Multispectral Scanner data and the generation of statistics revealing the nature of the changes involved (i.e., number of acres changed from rural to urban). Techniques of the Image Based Information System were utilized in all stages of the procedure, from logging the Landsat data and registering two frames of imagery, to extracting the changed areas and printing tabulations of land use change in acres. Two alternative methods of delineating land use change are presented while enumerating the steps of the entire process. The Houston, Texas urban area, and the Orlando, Florida urban area, are used as illustrative examples of various procedures.

  8. High speed inviscid compressible flow by the finite element method

    NASA Technical Reports Server (NTRS)

    Zienkiewicz, O. C.; Loehner, R.; Morgan, K.

    1984-01-01

    The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.

  9. Hybrid approach to structure modeling of the histamine H3 receptor: Multi-level assessment as a tool for model verification.

    PubMed

    Jończyk, Jakub; Malawska, Barbara; Bajda, Marek

    2017-01-01

    The crucial role of G-protein coupled receptors and the significant achievements associated with a better understanding of the spatial structure of known receptors in this family encouraged us to undertake a study on the histamine H3 receptor, whose crystal structure is still unresolved. The latest literature data and availability of different software enabled us to build homology models of higher accuracy than previously published ones. The new models are expected to be closer to crystal structures; and therefore, they are much more helpful in the design of potential ligands. In this article, we describe the generation of homology models with the use of diverse tools and a hybrid assessment. Our study incorporates a hybrid assessment connecting knowledge-based scoring algorithms with a two-step ligand-based docking procedure. Knowledge-based scoring employs probability theory for global energy minimum determination based on information about native amino acid conformation from a dataset of experimentally determined protein structures. For a two-step docking procedure two programs were applied: GOLD was used in the first step and Glide in the second. Hybrid approaches offer advantages by combining various theoretical methods in one modeling algorithm. The biggest advantage of hybrid methods is their intrinsic ability to self-update and self-refine when additional structural data are acquired. Moreover, the diversity of computational methods and structural data used in hybrid approaches for structure prediction limit inaccuracies resulting from theoretical approximations or fuzziness of experimental data. The results of docking to the new H3 receptor model allowed us to analyze ligand-receptor interactions for reference compounds.

  10. A five-step procedure for the clinical use of the MPD in neuropsychological assessment of children.

    PubMed

    Wallbrown, F H; Fuller, G B

    1984-01-01

    Described a five-step procedure that can be used to detect organicity on the basis of children's performance on the Minnesota Percepto Diagnostic Test (MPD). The first step consists of examining the T score for rotations to determine whether it is below the cut-off score, which has been established empirically as an indicator of organicity. The second step consists of matching the examinee's configuration of error scores, separation of circle-diamond (SpCD), distortion of circle-diamond (DCD), and distortion of dots (DD), with empirically derived tables. The third step consists of considering the T score for rotations and error configuration jointly. The fourth step consists of using empirically established discriminant equations, and the fifth step involves using data from limits testing and other data sources. The clinical and empirical bases for the five-step procedure also are discussed.

  11. Biodiesel production from waste frying oils and its quality control.

    PubMed

    Sabudak, T; Yildiz, M

    2010-05-01

    The use of biodiesel as fuel from alternative sources has increased considerably over recent years, affording numerous environmental benefits. Biodiesel, an alternative fuel for diesel engines, is produced from renewable sources such as vegetable oils or animal fats. However, the high costs involved in marketing biodiesel constitute a major obstacle. In this regard, therefore, the use of waste frying oils (WFO) should produce a marked reduction in the cost of biodiesel due to the ready availability of WFO at a relatively low price. In the present study waste frying oils collected from several McDonald's restaurants in Istanbul were used to produce biodiesel. Biodiesel from WFO was prepared by means of three different transesterification processes: a one-step base-catalyzed, a two-step base-catalyzed, and a two-step acid-catalyzed transesterification followed by base transesterification. No detailed previous studies providing information on a two-step acid-catalyzed transesterification followed by a base (CH(3)ONa) transesterification are present in the literature. Each reaction was allowed to take place with and without tetrahydrofuran added as a co-solvent. Following production, three different purification procedures (washing with distilled water, dry washing with magnesol, and using an ion-exchange resin) were applied to purify the biodiesel and the best outcome was determined. The biodiesel obtained was verified for compliance with the European Standard 14214 (EN 14214), which also corresponds to the Turkish Biodiesel Standards. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  12. Flu Diagnosis System Using Jaccard Index and Rough Set Approaches

    NASA Astrophysics Data System (ADS)

    Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip

    2018-04-01

    The Jaccard index and rough set approaches have been frequently implemented in decision support systems across various application domains. Both approaches are appropriate for categorical data analysis. This paper presents applications of set operations for flu diagnosis systems based on two different approaches, namely the Jaccard index and rough sets. These two approaches are established using the set operations concept, namely intersection and subset. The step-by-step procedure of each approach in diagnosing flu is demonstrated. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, the rough set is used to build decision support rules. Moreover, the decision support rules are established using redundant data analysis and elimination of unclassified elements. A number of data sets are considered to walk through the step-by-step procedure of each approach. The results show that rough sets can be used to support the Jaccard approach in establishing decision support rules. Additionally, the Jaccard index is the better approach for investigating the worst condition of patients, while patients definitely or possibly with or without flu can be determined using the rough set approach. The rules may improve the performance of medical diagnosis systems, making preliminary flu diagnosis easier for inexperienced doctors and patients.
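
    The Jaccard part of the system reduces to set operations, as a short sketch shows; the symptom names and the decision threshold below are invented for illustration, and the rough-set rule construction is not reproduced here.

```python
# Jaccard similarity between a patient's symptom set and a flu profile,
# built from the intersection/union set operations the paper uses.
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|; 0 if both sets are empty."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

flu_profile = {"fever", "cough", "headache", "muscle ache", "fatigue"}
patient = {"fever", "cough", "sore throat", "fatigue"}

similarity = jaccard(patient, flu_profile)
dissimilarity = 1.0 - similarity
print(f"similarity {similarity:.2f}, dissimilarity {dissimilarity:.2f}")
# Rough decision rule (the 0.5 threshold is an assumption, not the paper's):
print("likely flu" if similarity >= 0.5 else "refer to rough-set rules")
```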

  13. Procedural key steps in laparoscopic colorectal surgery, consensus through Delphi methodology.

    PubMed

    Dijkstra, Frederieke A; Bosker, Robbert J I; Veeger, Nicolaas J G M; van Det, Marc J; Pierie, Jean Pierre E N

    2015-09-01

    While several procedural training curricula in laparoscopic colorectal surgery have been validated and published, none have focused on dividing surgical procedures into well-identified segments, which can be trained and assessed separately. This enables the surgeon and resident to focus on a specific segment, or combination of segments, of a procedure. Furthermore, it will provide a consistent and uniform method of training for residents rotating through different teaching hospitals. The goal of this study was to determine consensus on the key steps of laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy among experts in our University Medical Center and affiliated hospitals. This will form the basis for the INVEST video-assisted side-by-side training curriculum. The Delphi method was used for determining consensus on key steps of both procedures. A list of 31 steps for laparoscopic right hemicolectomy and 37 steps for laparoscopic sigmoid colectomy was compiled from textbooks and national and international guidelines. In an online questionnaire, 22 experts in 12 hospitals within our teaching region were invited to rate all steps on a Likert scale on importance for the procedure. Consensus was reached in two rounds. Sixteen experts agreed to participate. Of these 16 experts, 14 (88%) completed the questionnaire for both procedures. Of the 14 who completed the first round, 13 (93%) completed the second round. Cronbach's alpha was 0.79 for the right hemicolectomy and 0.91 for the sigmoid colectomy, showing high internal consistency between the experts. For the right hemicolectomy, 25 key steps were established; for the sigmoid colectomy, 24 key steps were established. Expert consensus on the key steps for laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy was reached. These key steps will form the basis for a video-assisted teaching curriculum.
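
    For reference, the internal-consistency statistic quoted above is straightforward to compute from an expert-by-step rating matrix; the Likert responses below are a made-up stand-in, not the study's data.

```python
import numpy as np

ratings = np.array([  # rows: experts, columns: procedure steps (Likert 1-5)
    [5, 5, 4, 5, 4],
    [4, 4, 3, 4, 3],
    [5, 4, 4, 5, 4],
    [3, 3, 2, 3, 2],
])

k = ratings.shape[1]
item_vars = ratings.var(axis=0, ddof=1).sum()   # sum of per-step variances
total_var = ratings.sum(axis=1).var(ddof=1)     # variance of expert totals
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")        # high values = consistency
```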

  14. An illustration of new methods in machine condition monitoring, Part I: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-05-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
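
    The following Euler-Maruyama sketch shows the textbook bistable stochastic-resonance setup that underlies the discussion: a weak sub-threshold periodic signal becomes visible as noise-assisted hopping between wells. The parameters are generic assumptions, not the optimised resonator of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 1e-3, 100.0
t = np.arange(0.0, T, dt)
a, b = 1.0, 1.0                      # double well: U(x) = -a x^2/2 + b x^4/4
A, f = 0.25, 0.05                    # weak sub-threshold periodic forcing
sigma = 0.6                          # noise intensity (the tuning knob)

x = np.empty_like(t); x[0] = -1.0
drive = A * np.sin(2 * np.pi * f * t)
noise = np.sqrt(dt) * rng.standard_normal(len(t))
for i in range(len(t) - 1):          # Euler-Maruyama integration
    x[i + 1] = x[i] + (a * x[i] - b * x[i] ** 3 + drive[i]) * dt + sigma * noise[i]

# Inter-well hopping synchronised with the forcing indicates resonance.
hops = int(np.sum(np.abs(np.diff(np.sign(x))) == 2))
print(f"{hops} well-to-well transitions over {T:.0f}s at sigma={sigma}")
```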

  15. 48 CFR 6.102 - Use of competitive procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ACQUISITION PLANNING COMPETITION REQUIREMENTS Full and Open Competition 6.102 Use of competitive procedures. The competitive procedures available for use in fulfilling the requirement for full and open... procedures (e.g., two-step sealed bidding). (d) Other competitive procedures. (1) Selection of sources for...

  16. Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.

    2011-09-01

    Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the needs of covering ad-hoc demand and performing a close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms are constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure of obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling and geo-referencing - are robust, straight-forward, and nearly fully-automatic. The two last steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models 6ndash; represent the main contribution of this work and will therefore be covered with more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, as well as instantiation of building and tree models. The last step is subdivided into quasi- intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data-set and outline ideas for future work.

  17. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in amore » computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test in cycle length over a time-invariant, target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spetral shift.« less

  18. A Two-Stage Procedure Toward the Efficient Implementation of PANS and Other Hybrid Turbulence Models

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Girimaji, Sharath S.

    2004-01-01

    The main objective of this article is to introduce and show the implementation of a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for Partially-Averaged Navier-Stokes (PANS) and other hybrid models. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The first step is to solve the unsteady or steady Reynolds-Averaged Navier-Stokes (URANS/RANS) equations. From this preprocessing step, the turbulence length-scale field is obtained. This is then used to compute the characteristic length-scale ratio between the turbulence scale and the grid spacing. Based on this ratio, we can assess the finest scale resolution that a given grid for a given flow can support. Along with other additional criteria, we are able to analytically identify the appropriate hybrid solver resolution for different regions of the flow. This procedure removes the grid dependency issue that affects the results produced by different hybrid procedures in solving unsteady flows. The formulation, implementation methodology, and validation example are presented. We implemented this capability in a production Computational Fluid Dynamics (CFD) code, PAB3D, for the simulation of unsteady flows.
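
    A sketch of the preprocessing stage: turbulence length scales from a RANS field are compared with the grid spacing to assign a per-cell resolution parameter. The mapping f_k = min(1, C*(Delta/Lambda)^(2/3)) with C = 3 follows one published PANS estimate; the constant and the random stand-in fields are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells = 1000
k = rng.uniform(1e-3, 5e-2, n_cells)       # turbulent kinetic energy [m^2/s^2]
eps = rng.uniform(1e-3, 1e-1, n_cells)     # dissipation rate [m^2/s^3]
cell_vol = rng.uniform(1e-9, 1e-6, n_cells)

Lambda = k ** 1.5 / eps                    # turbulence length-scale field
Delta = cell_vol ** (1.0 / 3.0)            # characteristic grid spacing
f_k = np.minimum(1.0, 3.0 * (Delta / Lambda) ** (2.0 / 3.0))

# f_k = 1 -> fall back to RANS; small f_k -> grid supports LES-like content.
print(f"f_k range on this grid: {f_k.min():.2f} .. {f_k.max():.2f}")
```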

  19. One-pot, two-step desymmetrization of symmetrical benzils catalyzed by the methylsulfinyl (dimsyl) anion.

    PubMed

    Ragno, Daniele; Bortolini, Olga; Giovannini, Pier Paolo; Massi, Alessandro; Pacifico, Salvatore; Zaghi, Anna

    2014-08-14

    An operationally simple one-pot, two-step procedure for the desymmetrization of benzils is herein described. This consists in the chemoselective cross-benzoin reaction of symmetrical benzils with aromatic aldehydes catalyzed by the methyl sulfinyl (dimsyl) anion, followed by microwave-assisted oxidation of the resulting benzoylated benzoins with nitrate, avoiding the costly isolation procedure. Both electron-withdrawing and electron-donating substituents may be accommodated on the aromatic rings of the final unsymmetrical benzil.

  20. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE-RNN/RE-RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE-RNN-RNN and RE-RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE-RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE-RNN using the regularized RNN on short simulated time series. When the networks derived by RE-RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
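
    A highly simplified stand-in for the edge-rank step: here a linear dynamical surrogate replaces the RNN/RMLP trained by PSO, and putative edges are ranked by the magnitude of the fitted weights. Only the two-step structure (rank edges, then keep the top-ranked ones) mirrors the paper; the data and model are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
genes, T, dt = 5, 40, 0.5
W_true = np.zeros((genes, genes))
W_true[0, 2] = 0.8            # gene 2 activates gene 0
W_true[3, 1] = -0.6           # gene 1 represses gene 3
X = np.empty((T, genes)); X[0] = rng.random(genes)
for t in range(T - 1):        # synthetic expression time series with decay
    X[t + 1] = X[t] + dt * (X[t] @ W_true.T - 0.3 * X[t]) \
               + 0.01 * rng.standard_normal(genes)

# Step 1: fit the surrogate dx/dt ~ X @ B and rank edges by |B[i, j]|,
# where B[i, j] is the influence of regulator i on target j.
dXdt = (X[1:] - X[:-1]) / dt
B, *_ = np.linalg.lstsq(X[:-1], dXdt, rcond=None)
idx = np.argsort(-np.abs(B), axis=None)
edges = [np.unravel_index(i, B.shape) for i in idx]

# Step 2 (schematic): keep only the top-ranked edges for the network.
print("top-ranked (regulator, target) edges:", edges[:2])
# Self-decay appears as diagonal 'edges'; a real pipeline would handle those.
```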

  1. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual for patients to have only one anatomical image, due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information about the tumor location and extent. We therefore propose that ALI be run iteratively, so that the output of Step B is used as a patient-specific prior in Step A. We tested this procedure on real T1-weighted images from 18 patients and validated the results against two independent observers' manual tracings. The automated procedure identified the tumors successfully, with excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535
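
    The iterative coupling of the two steps can be written as a short loop. This is a schematic sketch only: `segment` and `detect_outliers` are placeholders for the segmentation-normalization and outlier-detection steps (SPM-based in the paper), and the fixed iteration count is an assumption.

    ```python
    def iterative_lesion_identification(image, segment, detect_outliers, n_iter=3):
        """Run ALI iteratively: step A's segmentation uses the tumor prior
        produced by step B's outlier detection on the previous pass."""
        prior = None                                          # first pass: no prior
        for _ in range(n_iter):
            tissue_maps = segment(image, extra_prior=prior)   # step A
            lesion_mask = detect_outliers(tissue_maps)        # step B
            prior = lesion_mask                               # feed back into A
        return lesion_mask
    ```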

  2. Electronic Procedures for Medical Operations

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Electronic procedures are replacing text-based documents for recording the steps in performing medical operations aboard the International Space Station. S&K Aerospace, LLC, has developed a content-based electronic system, based on the Extensible Markup Language (XML) standard, that separates text from formatting standards and tags items contained in procedures so they can be recognized by other electronic systems. For example, to change a standard format, electronic procedures are changed in a single batch process, and the entire body of procedures will have the new format. Procedures can be quickly searched to determine which are affected by software and hardware changes. Similarly, procedures are easily shared with other electronic systems. The system also enables real-time data capture and automatic bookmarking of current procedure steps. In Phase II of the project, S&K Aerospace developed a Procedure Representation Language (PRL) and tools to support the creation and maintenance of electronic procedures for medical operations. The goal is to develop these tools in such a way that new advances can be inserted easily, leading to an eventual medical decision support system.
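
    The separation of content from formatting that XML tagging provides can be illustrated with a tiny example. The element names below are invented for illustration and are not the actual PRL schema.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical procedure fragment: step text is tagged as content,
    # so formatting can be applied (or changed in batch) independently.
    doc = ET.fromstring("""
    <procedure id="MED-017" title="Saline flush">
      <step n="1">Verify syringe volume.</step>
      <step n="2">Attach syringe to port.</step>
    </procedure>
    """)

    # Because steps carry tags, other systems can search them or bookmark
    # the current step during execution.
    for step in doc.iter("step"):
        print(step.get("n"), step.text)
    ```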

  3. A method for generating reliable atomistic models of amorphous polymers based on a random search of energy minima

    NASA Astrophysics Data System (ADS)

    Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos

    2005-07-01

    A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. Next, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, which are based on simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid), and polyglycolic acid.

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johanna H Oxstrand; Katya L Le Blanc

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design computer-based procedures to do this. The underlying philosophy of the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use to help identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between coworkers, the use of multiple procedures at once, etc., were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures, as well as which features should not be incorporated. The model also provides a means to identify what new features, not present in paper-based procedures, need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.

  5. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model, and the two-step mapping mechanisms. The function-to-form mapping is divided into two steps: the first-step mapping, from function to behavior, and the second-step mapping, from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively, but quite difficult to automate fully, so manual, semi-automatic, automatic, and interactive modification of the mapping model are studied. A function-to-form mapping process for a mechanical hand is illustrated to verify the design methodology.

  6. Two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy.

    PubMed

    Liu, Yu; Li, Ji-Jia; Zu, Peng; Liu, Hong-Xu; Yu, Zhan-Wu; Ren, Yi

    2017-12-07

    To introduce a two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy and assess its clinical application. One hundred and twenty-two patients with middle or lower esophageal cancer who underwent laparoscopic-thoracoscopic Ivor-Lewis esophagectomy at Liaoning Cancer Hospital and Institute from March 2014 to March 2016 were included in this study, and divided into two groups based on the procedure used for creating a gastric tube. One group used the two-step method for creating a gastric tube, and the other group used the conventional method. The two groups were compared regarding the operating time, surgical complications, and number of stapler cartridges used. The mean operating time was significantly shorter in the two-step method group than in the conventional method group [238 (179-293) min vs 272 (189-347) min, P < 0.01]. No postoperative death occurred in either group. There was no significant difference in the rate of complications [14 (21.9%) vs 13 (22.4%), P = 0.55], whereas the mean number of stapler cartridges used was lower in the two-step method group [5 (4-6) vs 5.2 (5-6), P = 0.007]. The two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy has the advantages of simple operation, minimal damage to the tubular stomach, and reduced use of stapler cartridges.

  7. Composting. Sludge Treatment and Disposal Course #166. Instructor's Guide [and] Student Workbook.

    ERIC Educational Resources Information Center

    Arasmith, E. E.

    Composting is a lesson developed for a sludge treatment and disposal course. The lesson discusses the basic theory of composting and the basic operation, in a step-by-step sequence, of the two typical composting procedures: windrow and forced air static pile. The lesson then covers basic monitoring and operational procedures. The instructor's…

  8. A Facile Two-Step Method to Implement N√iSWAP and N√SWAP Gates in a Circuit QED

    NASA Astrophysics Data System (ADS)

    Said, T.; Chouikh, A.; Bennai, M.

    2018-05-01

    We propose a way of implementing two-step N√iSWAP and N√SWAP gates based on the qubit-qubit interaction of N superconducting qubits, by coupling them to a resonator driven by a strong microwave field. The operation times do not increase with the growth of the qubit number. Due to the virtual excitations of the resonator, the scheme is insensitive to the decay of the resonator. Numerical analysis shows that the scheme can be implemented with high fidelity. Moreover, we propose a detailed procedure and analyze the experimental feasibility; our proposal can thus be realized within the range of current circuit QED techniques.

  9. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    PubMed

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the powers of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
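
    The flavor of a maximum-type statistic with a bootstrap critical value can be sketched in a few lines. This is a generic multiplier-bootstrap illustration of the idea, not the HDtest implementation.

    ```python
    import numpy as np

    def max_type_test(x, n_boot=2000, seed=0):
        """One-sample test of H0: mean = 0 for an (n x p) data matrix x,
        using the maximum standardized mean and multiplier-bootstrap
        critical values; no structural assumption on the covariance."""
        rng = np.random.default_rng(seed)
        n, _ = x.shape
        xbar, sd = x.mean(axis=0), x.std(axis=0, ddof=1)
        t_obs = np.max(np.abs(np.sqrt(n) * xbar / sd))
        xc = (x - xbar) / sd                       # centered, standardized
        t_boot = np.empty(n_boot)
        for b in range(n_boot):
            w = rng.standard_normal(n)             # Gaussian multipliers
            t_boot[b] = np.max(np.abs(xc.T @ w)) / np.sqrt(n)
        return t_obs, np.mean(t_boot >= t_obs)     # statistic, p-value
    ```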

  10. Extraction of hyaluronic acid (HA) from rooster comb and characterization using flow field-flow fractionation (FlFFF) coupled with multiangle light scattering (MALS).

    PubMed

    Kang, Dong Young; Kim, Won-Suk; Heo, In Sook; Park, Young Hun; Lee, Seungho

    2010-11-01

    Hyaluronic acid (HA) was extracted on a relatively large scale from rooster comb using a method similar to that reported previously. The extraction method was modified to simplify it and to reduce time and cost in order to accommodate a large-scale extraction. Five hundred grams of frozen rooster combs yielded about 500 mg of dried HA. Extracted HA was characterized using asymmetrical flow field-flow fractionation (AsFlFFF) coupled online to a multiangle light scattering detector and a refractive index detector to determine the molecular size, molecular weight (MW) distribution, and molecular conformation of HA. For characterization of HA, AsFlFFF was operated by a simplified two-step procedure, instead of the conventional three-step procedure, in which the first two steps (sample loading and focusing) were combined into one to avoid the adsorption of viscous HA onto the channel membrane. The simplified two-step AsFlFFF yielded reasonably good separations of HA molecules based on their MWs. The weight-average MW (M_w) and the average root-mean-square (RMS) radius of HA extracted from rooster comb were 1.20×10^6 and 94.7 nm, respectively. When the sample solution was filtered through a 0.45 μm disposable syringe filter, these values were reduced to 3.8×10^5 and 50.1 nm, respectively. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. 3D printing of versatile reactionware for chemical synthesis.

    PubMed

    Kitson, Philip J; Glatzel, Stefan; Chen, Wei; Lin, Chang-Gen; Song, Yu-Fei; Cronin, Leroy

    2016-05-01

    In recent decades, 3D printing (also known as additive manufacturing) techniques have moved beyond their traditional applications in the fields of industrial manufacturing and prototyping to increasingly find roles in scientific research contexts, such as synthetic chemistry. We present a general approach for the production of bespoke chemical reactors, termed reactionware, using two different approaches to extrusion-based 3D printing. This protocol describes the printing of an inert polypropylene (PP) architecture with the concurrent printing of soft material catalyst composites, using two different 3D printer setups. The steps of the PROCEDURE describe the design and preparation of a 3D digital model of the desired reactionware device and the preparation of this model for use with fused deposition modeling (FDM) type 3D printers. The protocol then further describes the preparation of composite catalyst-silicone materials for incorporation into the 3D-printed device and the steps required to fabricate a reactionware device. This combined approach allows versatility in the design and use of reactionware based on the specific needs of the experimental user. To illustrate this, we present a detailed procedure for the production of one such reactionware device that will result in the production of a sealed reactor capable of effecting a multistep organic synthesis. Depending on the design time of the 3D model, and including time for curing and drying of materials, this procedure can be completed in ∼3 d.

  12. Solution of the Average-Passage Equations for the Incompressible Flow through Multiple-Blade-Row Turbomachinery

    DTIC Science & Technology

    1994-02-01

    An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate solution of the average-passage equations for incompressible flow through multiple-blade-row turbomachinery. (The remainder of this record is table-of-contents residue; the recoverable section titles are: Cell-Centered Finite-Volume Discretization in Space; Artificial Dissipation; Time Integration; Convergence.)
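
    For reference, an explicit Runge-Kutta update for a semi-discrete finite-volume system du/dt = -R(u) can be sketched as below; this is the classical four-stage scheme, offered only as a generic illustration, since the report's actual stage coefficients and dissipation treatment are not reproduced here.

    ```python
    def rk4_step(u, residual, dt):
        """Advance the cell-averaged state u by one step of classical RK4;
        `residual` returns the flux balance R(u) for every cell."""
        k1 = -residual(u)
        k2 = -residual(u + 0.5 * dt * k1)
        k3 = -residual(u + 0.5 * dt * k2)
        k4 = -residual(u + dt * k3)
        return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    ```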

  13. Negative pressure irrigation and endoscopic necrosectomy through man-made sinus tract in infected necrotizing pancreatitis: a technical report.

    PubMed

    Tong, Zhihui; Ke, Lu; Li, Baiqiang; Li, Gang; Zhou, Jing; Shen, Xiao; Li, Weiqin; Li, Ning; Li, Jieshou

    2016-11-10

    In recent years, a step-up approach based on minimally invasive techniques has been recommended by the latest guidelines as the initial invasive treatment for infected pancreatic necrosis (IPN). In this study, we aimed to describe a novel step-up approach for treating IPN consisting of four steps, with negative pressure irrigation (NPI) and endoscopic necrosectomy (ED) as a bridge between percutaneous catheter drainage (PCD) and open necrosectomy. A retrospective review of a prospectively collected internal database of patients with a diagnosis of IPN between Jan 2012 and Dec 2012 at a single institution was performed. All patients underwent the same drainage strategy comprising four steps: PCD, NPI, ED, and open necrosectomy. The demographic characteristics and clinical outcomes of the study patients were analyzed. A total of 71 consecutive patients (48 males and 23 females) were included in the analysis. No significant procedure-related complication was observed, and the overall mortality was 21.1% (15 of 71 patients). Seven different strategies, such as PCD + NPI, PCD + NPI + ED, and PCD + open necrosectomy, were applied in the study patients, and half of them received PCD alone. In general, each patient underwent a median of 2 drainage procedures, and the median total drainage duration was 11 days (interquartile range, 6-21 days). This four-step approach is effective in treating IPN and adds no extra risk to patients when compared with other recent step-up strategies. The two novel techniques (NPI and ED) could offer distinct clinical benefits without posing unanticipated risks inherent to the procedures.

  14. How quantizable matter gravitates: A practitioner's guide

    NASA Astrophysics Data System (ADS)

    Schuller, Frederic P.; Witte, Christof

    2014-05-01

    We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter theory, before any of the two theories can be meaningfully compared to experimental data.

  15. One-Step Extraction and Hydrolysis of Flavonoid Glycosides in Rape Bee Pollen Based on Soxhlet-Assisted Matrix Solid Phase Dispersion.

    PubMed

    Tu, Xijuan; Ma, Shuangqin; Gao, Zhaosheng; Wang, Jing; Huang, Shaokang; Chen, Wenbin

    2017-11-01

    Flavonoids are frequently found as glycosylated derivatives in plant materials. To determine the contents of flavonoid aglycones in these matrices, procedures for the extraction and hydrolysis of flavonoid glycosides are required. Current sample preparation methods are both labour and time consuming. The objective was to develop a modified matrix solid phase dispersion (MSPD) procedure as an alternative methodology for the one-step extraction and hydrolysis of flavonoid glycosides. HPLC-DAD was applied to demonstrate the one-step extraction and hydrolysis of flavonoids in rape bee pollen. The obtained contents of flavonoid aglycones (quercetin, kaempferol, isorhamnetin) were used for the optimisation and validation of the method. The extraction and hydrolysis were accomplished in one step. The procedure completes in 2 h with silica gel as dispersant, a 1:2 ratio of sample to dispersant, and 60% aqueous ethanol with 0.3 M hydrochloric acid as the extraction solution. The relative standard deviations (RSDs) of repeatability were less than 5%, and the recoveries at two fortified levels were between 88.3 and 104.8%. The proposed methodology is simple and highly efficient, with good repeatability and recovery. Compared with currently available methods, the present work has the advantages of using less time and labour, higher extraction efficiency, and less consumption of the acid catalyst. This method may find applications in the one-step extraction and hydrolysis of bioactive compounds from plant materials. Copyright © 2017 John Wiley & Sons, Ltd.

  16. [Application of virtual instrumentation technique in toxicological studies].

    PubMed

    Moczko, Jerzy A

    2005-01-01

    Research investigations frequently require a direct connection of measuring equipment to a computer. The virtual instrumentation technique considerably facilitates the programming of sophisticated acquisition-and-analysis procedures. In the standard approach, these two steps are performed sequentially with separate software tools: the acquired data are transferred, using the export/import procedures of one particular program, to another program that executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987, National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem to be solved and omit all syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA-RISC workstations. This flexibility warrants that programs prepared for one particular platform are also appropriate for another.

  17. Specific test and evaluation plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hays, W.H.

    1998-03-20

    The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made to the 241-AX-B Valve Pit by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.

  18. Student Perceptions of Self-Disclosure in the Classroom Based on Perceived Status Differentials.

    ERIC Educational Resources Information Center

    Klinger-Vartabedian, Laurel; O'Flaherty, Kathleen

    A study examined the extent to which perceived teacher status differentials influenced students' perceptions of teacher self-disclosure. This study involved a two-step procedure. Initially 30 students in a basic speech section were asked to recall examples of teacher self-disclosure that had occurred in classes in which they were enrolled. Then 13…

  19. Development and Validation of the Meaning of Work Inventory among French Workers

    ERIC Educational Resources Information Center

    Arnoux-Nicolas, Caroline; Sovet, Laurent; Lhotellier, Lin; Bernaud, Jean-Luc

    2017-01-01

    The purpose of this study was to validate a psychometric instrument among French workers for assessing the meaning of work. Following an empirical framework, a two-step procedure consisted of exploring and then validating the scale among distinctive samples. The consequent Meaning of Work Inventory is a 15-item scale based on a four-factor model,…

  20. Practices & Procedures of Mason Tending I & II. Instructor Manual. Trainee Manual.

    ERIC Educational Resources Information Center

    Laborers-AGC Education and Training Fund, Pomfret Center, CT.

    This packet consists of the instructor and trainee manuals for two courses: practices and procedures of mason tending I and II. The instructor manual for mason tending I contains a schedule for a 40-hour, 5-day course and instructor outline. The outline provides a step-by-step description of the instructor's activities and includes answer sheets…

  1. Rapid Two-Step Procedure for Large-Scale Purification of Pediocin-Like Bacteriocins and Other Cationic Antimicrobial Peptides from Complex Culture Medium

    PubMed Central

    Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar

    2002-01-01

    A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column and the bacteriocins were detected as major optical density peaks upon elution with propanol. More than 80% of the activity that was initially in the culture supernatant was recovered in both purification steps, and the final bacteriocin preparation was more than 90% pure as judged by analytical reverse-phase chromatography and capillary electrophoresis. PMID:11823243

  2. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1993-01-01

    Large volumes of paper are launched with each Space Shuttle Mission that contain step-by-step instructions for various activities that are to be performed by the crew during the mission. These instructions include normal operational procedures and malfunction or contingency procedures and are collectively known as the Flight Data File (FDF). An example of nominal procedures would be those used in the deployment of a satellite from the Space Shuttle; a malfunction procedure would describe actions to be taken if a specific problem developed during the deployment. A new FDF and associated system is being created for Space Station Freedom. The system will be called the Space Station Flight Data File (SFDF). NASA has determined that the SFDF will be computer-based rather than paper-based. Various aspects of the SFDF are discussed.

  3. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed in 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for the personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP was greater with the multi-step procedure than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.

  4. A transition from using multi‐step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

    Introduction: The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods: The total number of ECP procedures performed in 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for the personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. Results: The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP was greater with the multi-step procedure than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions: For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561

  5. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using a least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
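
    The two-step structure maps naturally onto standard numerical tools, as in the sketch below. This is a schematic stand-in, not the in-house object-oriented tool mentioned above: `basis` holds shape basis functions evaluated at surface points, `target` is the desired shape, and `shape_error` is a placeholder objective for the tuning step.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_then_tune(basis, target, shape_error):
        """Step 1: a least-squares surface fit supplies starting design
        variables.  Step 2: a numerical optimizer fine-tunes the jig shape."""
        coeffs0, *_ = np.linalg.lstsq(basis, target, rcond=None)   # step 1
        result = minimize(shape_error, coeffs0, method="BFGS")     # step 2
        return result.x
    ```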

  6. A novel fed-batch based strategy for enhancing cell-density and recombinant cyprosin B production in bioreactors.

    PubMed

    Sampaio, P N; Pais, M S; Fonseca, L P

    2014-12-01

    Nowadays, the dairy industry is continuously looking for new and more efficient clotting enzymes to create innovative products. Cyprosin B is a plant aspartic protease with clotting activity that was previously cloned in the Saccharomyces cerevisiae BJ1991 strain. The production of recombinant cyprosin B in batch and fed-batch cultures was compared using glucose and galactose as carbon sources. The strategy for fed-batch cultivation involved two steps: in the first, batch phase, the culture medium contained 1% (w/v) glucose and 0.5% (w/v) galactose, while in the feed step the culture medium contained 5% (w/v) galactose, with the aim of minimizing repression of the GAL7 promoter. With the fed-batch culture, in comparison to batch growth, an increase in biomass (6.6-fold), protein concentration (59%), and cyprosin B activity (91%) was achieved. The recombinant cyprosin B was purified by a single hydrophobic chromatography step, with a specific activity of 6 × 10^4 U·mg^-1, corresponding to a purification degree of 12.5-fold and a recovery yield of 16.4%. The SDS-PAGE analysis showed that the recovery procedure is suitable for obtaining purified recombinant cyprosin B. The results show that recombinant cyprosin B production can be improved using two distinct steps during the fed-batch, indicating that this strategy, associated with a simplified purification procedure, could be applied to large-scale production and constitutes a new and efficient alternative to the animal and fungal enzymes widely used in cheese making.

  7. Project W-314 specific test and evaluation plan for AZ tank farm upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hays, W.H.

    1998-08-12

    The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made by the addition of the SN-631 transfer line from the AZ-01A pit to the AZ-02A pit by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Tests and Inspections (FTIs), installation tests and inspections, Construction Tests and Inspections (CTIs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). The STEP will be utilized in conjunction with the TEP for verification and validation.

  8. One-Step and Two-Step Facility Acquisition for Military Construction: Project Selection and Implementation Procedures

    DTIC Science & Technology

    1990-08-01

    Scope: This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design ... (the remainder of this record is fragmentary) ... discussions, if conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ... carefully examines experience and past performance in the design of similar projects and building types. Quality of ...

  9. Linear retrieval and global measurements of wind speed from the Seasat SMMR

    NASA Technical Reports Server (NTRS)

    Pandey, P. C.

    1983-01-01

    Retrievals of wind speed (WS) from the Seasat Scanning Multichannel Microwave Radiometer (SMMR) were performed using a two-step statistical technique. Nine subsets of two to five SMMR channels were examined for wind speed retrieval. These subsets were derived by applying a leaps-and-bounds procedure, based on the coefficient-of-determination selection criterion, to a statistical data base of brightness temperatures and geophysical parameters. Analysis of Monsoon Experiment and ocean station PAPA data showed a strong correlation between sea surface temperature and water vapor. This relation was used in generating the statistical data base. Global maps of WS were produced for one- and three-month periods.
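
    The channel-subset selection step can be emulated with a brute-force search scored by the coefficient of determination; the sketch below is a small stand-in for the leaps-and-bounds procedure (which prunes the same search without full enumeration).

    ```python
    import numpy as np
    from itertools import combinations

    def best_channel_subset(tb, ws, max_channels=5):
        """Score every subset of 2..max_channels brightness-temperature
        channels by the R^2 of a linear wind-speed regression."""
        best_subset, best_r2 = None, -np.inf
        for k in range(2, max_channels + 1):
            for subset in combinations(range(tb.shape[1]), k):
                X = np.column_stack([np.ones(len(ws)), tb[:, subset]])
                coef, *_ = np.linalg.lstsq(X, ws, rcond=None)
                r2 = 1.0 - np.var(ws - X @ coef) / np.var(ws)
                if r2 > best_r2:
                    best_subset, best_r2 = subset, r2
        return best_subset, best_r2
    ```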

  10. Crafty Corner.

    ERIC Educational Resources Information Center

    Naturescope, 1986

    1986-01-01

    Presents step-by-step procedures for two arts and crafts lessons that focus on mammals. Directions are offered for making mammal-shaped dough magnets and also for creating mammal note cards. Examples of each are illustrated. (ML)

  11. Improved Feature Matching for Mobile Devices with IMU.

    PubMed

    Masiero, Andrea; Vettore, Antonio

    2016-08-05

    Thanks to the recent diffusion of low-cost high-resolution digital cameras and the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step for successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
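
    The two-step idea, prefiltering matches with the IMU rotation and then running a robust estimator on the survivors, can be sketched with standard OpenCV calls. This is an illustration under assumed conventions (R_imu maps camera-1 ray directions into the camera-2 frame; translation-induced parallax is absorbed by the angular tolerance), not the authors' implementation.

    ```python
    import numpy as np
    import cv2

    def essential_two_step(pts1, pts2, K, R_imu, angle_tol=0.1):
        """Step 1: discard matches whose direction change is inconsistent
        with the INS rotation.  Step 2: RANSAC on the reduced match set."""
        r1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K, None).reshape(-1, 2)
        r2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K, None).reshape(-1, 2)
        d1 = np.c_[r1, np.ones(len(r1))] @ R_imu.T    # predicted directions
        d2 = np.c_[r2, np.ones(len(r2))]              # observed directions
        cos = np.sum(d1 * d2, axis=1) / (
            np.linalg.norm(d1, axis=1) * np.linalg.norm(d2, axis=1))
        keep = np.arccos(np.clip(cos, -1.0, 1.0)) < angle_tol
        E, inliers = cv2.findEssentialMat(pts1[keep], pts2[keep], K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        return E, keep, inliers
    ```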

  12. Evaluation of a procedure for reducing vehicle-tree accidents.

    DOT National Transportation Integrated Search

    1987-01-01

    A procedure for reducing vehicle-tree accidents was evaluated. The procedure, developed by the Michigan Department of Transportation, consists of five steps: (1) preparing a base map and plotting roadway information, (2) assigning priorities for fiel...

  13. The Effects of Varying Levels of Treatment Integrity on Child Compliance during Treatment with a Three-Step Prompting Procedure

    ERIC Educational Resources Information Center

    Wilder, David A.; Atwell, Julie; Wine, Byron

    2006-01-01

    The effects of three levels of treatment integrity (100%, 50%, and 0%) on child compliance were evaluated in the context of the implementation of a three-step prompting procedure. Two typically developing preschool children participated in the study. After baseline data on compliance to one of three common demands were collected, a therapist…

  14. Detection and Characterization of Viral Species/Subspecies Using Isothermal Recombinase Polymerase Amplification (RPA) Assays.

    PubMed

    Glais, Laurent; Jacquot, Emmanuel

    2015-01-01

    Numerous molecular-based detection protocols include an amplification step for the targeted nucleic acids. This step is essential for achieving the expected sensitive detection of pathogens in diagnostic procedures. Amplification of nucleic acid sequences is generally performed, in the presence of appropriate primers, using thermocyclers. However, the time required to amplify molecular targets and the cost of thermocycler machines can impair the use of these methods in routine diagnostics. The recombinase polymerase amplification (RPA) technique allows rapid (short-term incubation of sample and primers in an enzymatic mixture) and simple (isothermal) amplification of molecular targets. The RPA protocol requires only basic molecular steps such as extraction procedures and agarose gel electrophoresis. Thus, RPA can be considered an attractive alternative to standard molecular-based diagnostic tools. In this paper, the complete procedures for setting up an RPA assay, applied to the detection of an RNA virus (Potato virus Y, Potyvirus) and a DNA virus (Wheat dwarf virus, Mastrevirus), are described. The proposed procedure allows the development of species- or subspecies-specific detection assays.

  15. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody-based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 and IgG-3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective at eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to directly confirm that there was no cross contamination with virus between column runs. To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process: a solvent/detergent step based on 1% Triton X-100, which rapidly inactivated a range of enveloped viruses (>6 log inactivation within 1 min of a 60 min treatment time), and a virus filtration step, which was confirmed to be effective for viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.

  16. An Efficient User Interface Design for Nursing Information System Based on Integrated Patient Order Information.

    PubMed

    Chu, Chia-Hui; Kuo, Ming-Chuan; Weng, Shu-Hui; Lee, Ting-Ting

    2016-01-01

    A user-friendly interface can enhance the efficiency of data entry, which is crucial for building a complete database. In this study, two user interfaces (a traditional pull-down menu vs. check boxes) are proposed and evaluated, based on medical records with fever medication orders, by measuring the time for data entry, the steps for each data entry record, and the completeness rate of each medical record. The results revealed that the time for data entry was reduced from 22.8 sec/record to 3.2 sec/record. The data entry procedure was also reduced from 9 steps in the traditional interface to 3 steps in the new one. In addition, the completeness of medical records increased from 20.2% to 98%. All these results indicate that the new user interface provides a more user-friendly and efficient approach to data entry than the traditional interface.

  17. Two-Step Semi-Microscale Preparation of a Cinnamate Ester Sunscreen Analog

    ERIC Educational Resources Information Center

    Stabile, Ryan G.; Dicks, Andrew P.

    2004-01-01

    A student procedure focusing on multistep sunscreen synthesis and spectroscopic analysis is reported. Given the current high-profile nature of skin cancer and media attention toward sunscreens, a two-step synthetic pathway toward an analog of a commercially available UV light blocker is designed.

  18. A novel two-step method for screening shade tolerant mutant plants via dwarfism

    USDA-ARS?s Scientific Manuscript database

    When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here, we describe a two-step procedure for isolating shade tolerant mutants of ...

  19. Review: A Position Paper on Selenium in Ecotoxicology: A Procedure for Deriving Site-Specific Water Quality Criteria

    Treesearch

    A. Dennis Lemly

    1997-01-01

    This paper describes a method for deriving site-specific water quality criteria for selenium using a two-step process: (1) gather information on selenium residues and biological effects at the site and in down-gradient systems and (2) examine criteria based on the degree of bioaccumulation, the relationship between measured residues and threshold concentrations for...

  20. Save the last dance for me: unwanted serial position effects in jury evaluations.

    PubMed

    Bruine de Bruin, Wändi

    2005-03-01

    Whenever competing options are considered in sequence, their evaluations may be affected by order of appearance. Such serial position effects would threaten the fairness of competitions using jury evaluations. Randomization cannot reduce potential order effects, but it does give candidates an equal chance of being assigned to preferred serial positions. Whether, or what, serial position effects emerge may depend on the cognitive demands of the judgment task. In end-of-sequence procedures, final scores are not given until all candidates have performed, possibly burdening judges' memory. If judges' evaluations are based on how well they remember performances, serial position effects may resemble those found with free recall. Candidates may also be evaluated step-by-step, immediately after each performance. This procedure should not burden memory, though it may produce different serial position effects. Yet, this paper reports similar serial position effects with end-of-sequence and step-by-step procedures used for the Eurovision Song Contest: Ratings increased with serial position. The linear order effect was replicated in the step-by-step judgments of World and European Figure Skating Contests. It is proposed that, independent of the evaluation procedure, judges' initial impressions of sequentially appearing candidates may be formed step-by-step, yielding serial position effects.

  1. Strategy for alignment of electron beam trajectory in LEReC cooling section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Blaskiewicz, M.; Fedotov, A.

    2016-09-23

    We considered the steps required to align the electron beam trajectory through the LEReC cooling section. We devised a detailed procedure for the beam-based alignment of the cooling section solenoids and showed that it is critical to have individual control of each CS solenoid current. Finally, we modeled the alignment procedure and showed that, with a two-BPM fit, the solenoid shift can be measured with 40 μm accuracy and the solenoid inclination with 30 μrad accuracy. These accuracies are well within the tolerances of the cooling section solenoid alignment.

  2. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
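
    A toy version of the two-step logic under a log-linear model with a random intercept is sketched below. It is illustrative only: the paper's conditional quadratic inference functions handle general time-varying random effects without the distributional and design assumptions made here (column 0 of the design is an intercept and column 1 the dose covariate).

    ```python
    import numpy as np

    def two_step_dose(train_X, train_y, new_X, new_y_early, target):
        """Step 1: estimate fixed effects beta on the training sample.
        Step 2: estimate a new patient's random intercept from early
        observations, then solve for the dose covariate hitting `target`."""
        beta, *_ = np.linalg.lstsq(train_X, np.log(train_y), rcond=None)
        b_new = np.mean(np.log(new_y_early) - new_X @ beta)   # step 2
        # log(target) = beta[0] + beta[1] * dose + b_new  =>  solve for dose
        return (np.log(target) - beta[0] - b_new) / beta[1]
    ```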

  3. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.

  4. Floquet-Magnus expansion for general N-coupled spins systems in magic-angle spinning nuclear magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Mananga, Eugene Stephane; Charpentier, Thibault

    2015-04-01

    In this paper we present a theoretical perturbative approach for describing the NMR spectrum of strongly dipolar-coupled spin systems under fast magic-angle spinning. Our treatment is based on two approaches: the Floquet approach and the Floquet-Magnus expansion. The Floquet approach is well known in the NMR community as a perturbative approach for obtaining analytical approximations. Numerical procedures are based on step-by-step numerical integration of the corresponding differential equations. The Floquet-Magnus expansion is a perturbative approach within Floquet theory. Furthermore, we address the "γ-encoding" effect using the Floquet-Magnus expansion approach. We show that the average over the γ angle can be performed for any Hamiltonian with γ symmetry.
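
    For orientation, the leading terms of the effective Hamiltonian produced by a Magnus-type expansion over one modulation period T take the standard forms below (with ħ = 1); these are the generic average-Hamiltonian expressions, quoted for context rather than the paper's specific Floquet-Magnus results, which also carry boundary-condition corrections.

    ```latex
    \bar{H}^{(1)} = \frac{1}{T}\int_0^T H(t)\,dt, \qquad
    \bar{H}^{(2)} = \frac{1}{2iT}\int_0^T dt_2 \int_0^{t_2} dt_1\, \left[ H(t_2),\, H(t_1) \right]
    ```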

  5. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generators' rated power, several other practicalities, such as the limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  6. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  7. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  8. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  9. GMP-based CD133+ cells isolation maintains progenitor angiogenic properties and enhances standardization in cardiovascular cell therapy

    PubMed Central

    Gaipa, Giuseppe; Tilenni, Manuela; Straino, Stefania; Burba, Ilaria; Zaccagnini, Germana; Belotti, Daniela; Biagi, Ettore; Valentini, Marco; Perseghin, Paolo; Parma, Matteo; Campli, Cristiana Di; Biondi, Andrea; Capogrossi, Maurizio C; Pompilio, Giulio; Pesce, Maurizio

    2010-01-01

    Abstract The aim of the present study was to develop and validate a good manufacturing practice (GMP) compliant procedure for the preparation of bone marrow (BM) derived CD133+ cells for cardiovascular repair. Starting from available laboratory protocols to purify CD133+ cells from human cord blood, we implemented these procedures in a GMP facility and applied quality control conditions defining purity, microbiological safety and vitality of CD133+ cells. Validation of CD133+ cells isolation and release process were performed according to a two-step experimental program comprising release quality checking (step 1) as well as ‘proofs of principle’ of their phenotypic integrity and biological function (step 2). This testing program was accomplished using in vitro culture assays and in vivo testing in an immunosuppressed mouse model of hindlimb ischemia. These criteria and procedures were successfully applied to GMP production of CD133+ cells from the BM for an ongoing clinical trial of autologous stem cells administration into patients with ischemic cardiomyopathy. Our results show that GMP implementation of currently available protocols for CD133+ cells selection is feasible and reproducible, and enables the production of cells having a full biological potential according to the most recent quality requirements by European Regulatory Agencies. PMID:19627397

  10. Risk analysis procedure for post-wildfire natural hazards in British Columbia

    NASA Astrophysics Data System (ADS)

    Jordan, Peter

    2010-05-01

    Following a severe wildfire season in 2003, and several subsequent damaging debris flow and flood events, the British Columbia Forest Service developed a procedure for analysing risks to public safety and infrastructure from such events. At the same time, the Forest Service undertook a research program to determine the extent of post-wildfire hazards, and examine the hydrologic and geomorphic processes contributing to the hazards. The risk analysis procedure follows the Canadian Standards Association decision-making framework for risk management (which in turn is based on international standards). This has several steps: identification of risk, risk analysis and estimation, evaluation of risk tolerability, developing control or mitigation strategies, and acting on these strategies. The Forest Service procedure deals only with the first two steps. The results are passed on to authorities such as the Provincial Emergency Program and local government, who are responsible for evaluating risks, warning residents, and applying mitigation strategies if appropriate. The objective of the procedure is to identify and analyse risks to public safety and infrastructure. The procedure is loosely based on the BAER (burned area emergency response) program in the USA, with some important differences. Our procedure focuses on identifying risks and warning affected parties, not on mitigation activities such as broadcast erosion control measures. Partly this is due to limited staff and financial resources. Also, our procedure is not multi-agency, but is limited to wildfires on provincial forest land; in British Columbia about 95% of forest land is in the publicly-owned provincial forest. Each fire season, wildfires are screened by size and proximity to values at risk such as populated areas. For selected fires, when the fire is largely contained, the procedure begins with an aerial reconnaissance of the fire, and photography with a hand-held camera, which can be used to make a preliminary map of vegetation burn severity if desired. The next steps include mapping catchment boundaries, field traverses to collect data on soil burn severity and water repellency, identification of unstable hillslopes and channels, and inspection of values at risk from hazards such as debris flows or flooding. BARC (burned area reflectance classification) maps based on satellite imagery are prepared for some fires, although these are typically not available for several weeks. Our objective is to make a preliminary risk analysis report available about two weeks after the fire is contained. If high risks to public safety or infrastructure are identified, the risk analysis reports may make recommendations for mitigation measures to be considered; however, acting on these recommendations is the responsibility of local land managers, local government, or landowners. Mitigation measures for some fires have included engineering treatments to reduce the hydrologic impact of logging roads, protective structures such as dykes or berms, and straw mulching to reduce runoff and erosion on severely burned areas. The Terrace Mountain Fire, which burned 9000 hectares in the Okanagan Valley in 2009, is used as an example of the application of the procedure.

  11. Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.

    PubMed

    Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad

    2010-10-01

    Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples including skin lesions, tissues from internal organs, blood and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 μg/ml. The extracted CMLV DNA was proven suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared with spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  12. Integrated HPTLC-based Methodology for the Tracing of Bioactive Compounds in Herbal Extracts Employing Multivariate Chemometrics. A Case Study on Morus alba.

    PubMed

    Chaita, Eliza; Gikas, Evagelos; Aligiannis, Nektarios

    2017-03-01

    In drug discovery, bioassay-guided isolation is a well-established procedure and still the basic approach for the discovery of natural products with desired biological properties. However, in these procedures, the most laborious and time-consuming step is the isolation of the bioactive constituents. A prior identification of the compounds that contribute to the demonstrated activity of the fractions would enable the selection of proper chromatographic techniques and lead to targeted isolation. Objective - To develop an integrated HPTLC-based methodology for the rapid tracing of bioactive compounds during bioassay-guided processes, using multivariate statistics. Materials and Methods - The methanol extract of Morus alba was fractionated employing CPC. Subsequently, fractions were assayed for tyrosinase inhibition and analyzed with HPTLC. The PLS-R algorithm was performed in order to correlate the analytical data with the biological response of the fractions and identify the compounds with the highest contribution. Two methodologies were developed for the generation of the dataset: one based on manual peak picking and the second based on chromatogram binning. Results and Discussion - Both methodologies afforded comparable results and were able to trace the bioactive constituents (e.g. oxyresveratrol, trans-dihydromorin, 2,4,3'-trihydroxydihydrostilbene). The suggested compounds were compared in terms of Rf values and UV spectra with compounds isolated from M. alba using a typical bioassay-guided process. Conclusion - Chemometric tools supported the development of a novel HPTLC-based methodology for the tracing of tyrosinase inhibitors in M. alba extract. All steps of the experimental procedure implemented techniques that afford essential key elements for application in high-throughput screening procedures for drug discovery purposes. Copyright © 2017 John Wiley & Sons, Ltd.
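
    To make the chromatogram-binning variant concrete, the sketch below regresses binned HPTLC intensities on the measured bioactivity of the fractions with PLS and ranks the bins by the magnitude of their regression coefficients. It is a minimal illustration on synthetic data; the array shapes, the number of latent components and the activity values are assumptions, not the study's settings.

```python
# Hypothetical PLS-R tracing sketch; shapes and data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_fractions, n_bins = 20, 300           # densitograms split into Rf bins
X = rng.random((n_fractions, n_bins))   # binned HPTLC intensities
y = rng.random(n_fractions)             # tyrosinase inhibition per fraction

pls = PLSRegression(n_components=3).fit(X, y)

# Bins with the largest |coefficient| contribute most to the modeled
# activity; bands at those Rf positions are candidate bioactive compounds.
top_bins = np.argsort(np.abs(pls.coef_.ravel()))[::-1][:10]
print("Rf bins most associated with inhibition:", top_bins)
```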

  13. Modeling of body tissues for Monte Carlo simulation of radiotherapy treatments planned with conventional x-ray CT systems

    NASA Astrophysics Data System (ADS)

    Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru

    2016-07-01

    In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that performs the conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm-3 and 1.1 g cm-3, which occupy 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
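
    A two-step conversion of this kind reduces to two piecewise-linear lookups, as in the hedged sketch below; the polyline node values are illustrative placeholders, not the calibrated relations from the paper. The appeal of the split is that only the first polyline is scanner-specific, while the density-to-property step can be shared across CT systems.

```python
# Two-step CT conversion sketch: CT number -> mass density -> property.
# All node values below are illustrative, not the paper's calibration.
import numpy as np

# Step 1: stoichiometric calibration polyline (HU -> g/cm^3)
hu_nodes  = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 2000.0])
rho_nodes = np.array([0.0012, 0.93, 1.00, 1.07, 1.60, 2.20])

# Step 2: density polyline (g/cm^3 -> stopping-power ratio, for example)
rho_pts = np.array([0.0012, 0.26, 0.99, 1.05, 1.60, 2.20])
spr_pts = np.array([0.0010, 0.26, 0.99, 1.04, 1.50, 2.00])

def ct_to_property(hu):
    rho = np.interp(hu, hu_nodes, rho_nodes)   # first conversion
    spr = np.interp(rho, rho_pts, spr_pts)     # second conversion
    return rho, spr

print(ct_to_property(np.array([-800.0, 40.0, 1200.0])))
```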

  14. Characterizing the Experimental Procedure in Science Laboratories: A preliminary step towards students' experimental design

    NASA Astrophysics Data System (ADS)

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-04-01

    Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that if students designed all or part of an entire experiment, as part of an inquiry-based approach, certain difficulties would be overcome. This requires that a procedure be written for the experimental design. The aim of this paper is to describe the characteristics of a procedure in science laboratories, in an educational context. As a starting point, this paper proposes a model in the form of a hierarchical task diagram that gives the general structure of any procedure. This model allows both the analysis of existing procedures and the design of a new inquiry-based approach. The obtained characteristics are further organized into criteria that can help both teachers and students assess a procedure during and after its writing. These results are obtained through two different sets of data. First, the characteristics of procedures are established by analysing laboratory manuals. This allows the organization and type of information in procedures to be defined. This analysis reveals that students are seldom asked to write a full procedure, but sometimes have to specify tasks within a procedure. Secondly, iterative interviews are undertaken with teachers. This leads to a list of criteria for evaluating a procedure.

  15. Two-step purification method of vitellogenin from three teleost fish species: rainbow trout (Oncorhynchus mykiss), gudgeon (Gobio gobio) and chub (Leuciscus cephalus).

    PubMed

    Brion, F; Rogerieux, F; Noury, P; Migeon, B; Flammarion, P; Thybaud, E; Porcher, J M

    2000-01-14

    A two-step purification protocol was developed to purify rainbow trout (Oncorhynchus mykiss) vitellogenin (Vtg) and was successfully applied to the Vtg of chub (Leuciscus cephalus) and gudgeon (Gobio gobio). Capture and intermediate purification were performed by anion-exchange chromatography on a Resource Q column, and a polishing step was performed by gel permeation chromatography on a Superdex 200 column. This rapid two-step purification procedure gave a pure solution of Vtg, as assessed by silver-stained electrophoresis and immunochemical characterisation.

  16. Adaptation of instructional materials: a commentary on the research on adaptations of Who Polluted the Potomac

    NASA Astrophysics Data System (ADS)

    Ercikan, Kadriye; Alper, Naim

    2009-03-01

    This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they may apply to the adaptation of instructional materials. The authors provide a good theoretical analysis of what took place in two translation instances and make an important contribution by taking the first step in providing a systematic discussion of the adaptation of instructional materials. Our discussion proposes procedures for adapting instructional materials and for examining the equivalence of source and target versions of adapted instructional materials. We highlight that many of the procedures and criteria used in examining the comparability of educational tests are missing in this emerging area of research.

  17. Two-step purification of His-tagged Nef protein in native condition using heparin and immobilized metal ion affinity chromatographies.

    PubMed

    Finzi, Andrés; Cloutier, Jonathan; Cohen, Eric A

    2003-07-01

    The Nef protein encoded by human immunodeficiency virus type 1 (HIV-1) has been shown to be an important factor in the progression of viral growth and pathogenesis, both in vitro and in vivo. The lack of a simple procedure to purify Nef in its native conformation has limited molecular studies on Nef function. A two-step procedure that includes heparin and immobilized metal ion affinity chromatographies (IMACs) was developed to purify His-tagged Nef (His(6)-Nef) expressed in bacteria under native conditions. During the elaboration of this purification procedure, we identified two contaminating bacterial proteins, SlyD and GCHI, that migrate closely on SDS-PAGE and co-eluted with His(6)-Nef in IMAC under denaturing conditions, and we developed purification steps to eliminate these contaminants under native conditions. Overall, this study describes a protocol that allows rapid purification of His(6)-Nef protein expressed in bacteria under native conditions and that removes metal affinity resin-binding bacterial proteins that can contaminate recombinant His-tagged protein preparations.

  18. A two-step hierarchical hypothesis set testing framework, with applications to gene expression data on ordered categories

    PubMed Central

    2014-01-01

    Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries at the hypothesis-set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
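
    As a concrete illustration of the two-step idea (not the paper's exact procedure), the sketch below screens hypothesis sets using Simes-combined p-values with Benjamini-Hochberg, then tests within the selected sets at a correspondingly reduced level; the combination rule and levels are assumptions chosen for illustration.

```python
# Generic two-step hierarchical testing sketch; details differ from the paper.
import numpy as np

def bh_reject(p, q):
    """Benjamini-Hochberg: boolean mask of rejections at level q."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, bool)
    mask[order[:k]] = True
    return mask

def two_step(set_pvals, q=0.05):
    # Step 1: one Simes p-value per hypothesis set, screened with BH
    simes = np.array([np.min(np.sort(p) * len(p) / np.arange(1, len(p) + 1))
                      for p in set_pvals])
    selected = bh_reject(simes, q)
    R, m = selected.sum(), len(set_pvals)
    # Step 2: within selected sets only, at a level scaled by R/m
    within = [bh_reject(p, q * R / m) if sel else np.zeros(len(p), bool)
              for sel, p in zip(selected, set_pvals)]
    return selected, within

rng = np.random.default_rng(8)
sets = [rng.uniform(size=10) for _ in range(30)]
sets[0][:4] = 1e-5                     # one clearly non-null set
sel, within = two_step(sets)
print(sel[0], within[0])
```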

  19. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.
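
    The first, linearized step has a closed-form Gaussian posterior, which the sketch below illustrates; the operator G, the prior moments and the noise level are placeholders rather than the field-data setup, and the petrophysical step with the Gaussian-mixture prior is only indicated in the comments.

```python
# Sketch of the Bayesian linearized (step 1) update; all values are toy.
import numpy as np

rng = np.random.default_rng(1)
n_m, n_d = 3, 5                   # elastic parameters, AVA data samples
G = rng.random((n_d, n_m))        # linearized AVA forward operator
m_prior = np.zeros(n_m)
C_m = np.eye(n_m)                 # prior covariance of elastic properties
C_d = 0.1 * np.eye(n_d)           # noise covariance of the seismic data
d_obs = G @ rng.standard_normal(n_m) + 0.1 * rng.standard_normal(n_d)

# Posterior mean and covariance of the elastic properties given the data
K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
m_post = m_prior + K @ (d_obs - G @ m_prior)
C_post = C_m - K @ G @ C_m
print(m_post)
# (m_post, C_post) then feed the petrophysical step, where a Gaussian-mixture
# prior encodes the facies-dependent rock-physics behavior per litho-fluid class.
```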

  20. Sealing properties of one-step root-filling fibre post-obturators vs. two-step delayed fibre post-placement.

    PubMed

    Monticelli, Francesca; Osorio, Raquel; Toledano, Manuel; Ferrari, Marco; Pashley, David H; Tay, Franklin R

    2010-07-01

    The sealing properties of a one-step obturation post-placement technique consisting of Resilon-capped fibre post-obturators were compared with a two-step technique based on an initial Resilon root filling followed by 24-h-delayed fibre post-placement. Thirty root segments were shaped to size 40, 0.04 taper and filled with: (1) InnoEndo obturators; (2) Resilon/24-h-delayed FibreKor post-cementation. Obturator, root filling and post-cementation procedures were performed using InnoEndo bonding agent/dual-cured root canal sealer. The fluid flow rate through the filled roots was evaluated at 10 psi using a computerised fluid filtration model before root resection and after 3 and 9 mm apical resections. Fluid flow data were analysed using two-way repeated measures ANOVA and the Tukey test to examine the effects of root-filling post-placement techniques and root resection lengths on fluid leakage from the filled canals (alpha=0.05). A significantly greater amount of fluid leakage was observed with the one-step technique than with the two-step technique. No difference in fluid leakage was observed among intact canals and canals resected at different lengths for either material. The seal of root canals achieved with the one-step obturator is less effective than separate Resilon root fillings followed by a 24-h delay prior to fibre post-placement. Incomplete setting of the sealer and restricted relief of polymerisation shrinkage stresses may be responsible for the inferior seal of the one-step root-filling/post-restoration technique. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. An optimized two-step derivatization method for analyzing diethylene glycol ozonation products using gas chromatography and mass spectrometry.

    PubMed

    Yu, Ran; Duan, Lei; Jiang, Jingkun; Hao, Jiming

    2017-03-01

    The ozonation of hydroxyl compounds (e.g., sugars and alcohols) gives a broad range of products such as alcohols, aldehydes, ketones, and carboxylic acids. This study developed and optimized a two-step derivatization procedure for analyzing polar products, aldehydes and carboxylic acids, from the ozonation of diethylene glycol (DEG) in a non-aqueous environment using gas chromatography-mass spectrometry. Experiments based on a central composite design with response surface methodology were carried out to evaluate the effects of the derivatization variables and their interactions on the analysis. The most desirable derivatization conditions were as follows: oximation was performed at room temperature overnight with an o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine-to-analyte molar ratio of 6, a silylation reaction temperature of 70°C, a reaction duration of 70 min, and an N,O-bis(trimethylsilyl)trifluoroacetamide volume of 12.5 μL. The applicability of this optimized procedure was verified by analyzing DEG ozonation products in an ultrafine condensation particle counter simulation system. Copyright © 2016. Published by Elsevier B.V.
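
    For readers unfamiliar with central composite designs, the sketch below generates the coded design matrix for three derivatization factors and maps it to physical units; the factor ranges are illustrative assumptions, not the study's settings.

```python
# Rotatable central composite design (CCD) sketch for three factors.
import itertools
import numpy as np

k = 3                                    # e.g., molar ratio, temperature, duration
alpha = (2 ** k) ** 0.25                 # rotatable axial distance (~1.682)

factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
axial = np.vstack([v * alpha * np.eye(k)[i]
                   for i in range(k) for v in (-1, 1)])
center = np.zeros((4, k))                # replicated center points
design = np.vstack([factorial, axial, center])   # coded CCD matrix

# Map coded levels to physical units (illustrative ranges only)
lo = np.array([2.0, 40.0, 30.0])         # ratio, deg C, min
hi = np.array([8.0, 90.0, 110.0])
mid, half = (lo + hi) / 2, (hi - lo) / 2
runs = mid + design * half
print(runs.shape)                        # (8 + 6 + 4, 3) experimental runs
```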

  2. Precision-engineering the Pseudomonas aeruginosa genome with two-step allelic exchange

    PubMed Central

    Hmelo, Laura R.; Borlee, Bradley R.; Almblad, Henrik; Love, Michelle E.; Randall, Trevor E.; Tseng, Boo Shan; Lin, Chuyang; Irie, Yasuhiko; Storek, Kelly M.; Yang, Jaeun Jane; Siehnel, Richard J.; Howell, P. Lynne; Singh, Pradeep K.; Tolker-Nielsen, Tim; Parsek, Matthew R.; Schweizer, Herbert P.; Harrison, Joe J.

    2016-01-01

    Allelic exchange is an efficient method of bacterial genome engineering. This protocol describes the use of this technique to make gene knockouts and knockins, as well as single nucleotide insertions, deletions and substitutions in Pseudomonas aeruginosa. Unlike other approaches to allelic exchange, this protocol does not require heterologous recombinases to insert or excise selective markers from the target chromosome. Rather, positive and negative selection are enabled solely by suicide vector-encoded functions and host cell proteins. Here, mutant alleles, which are flanked by regions of homology to the recipient chromosome, are synthesized in vitro and then cloned into allelic exchange vectors using standard procedures. These suicide vectors are then introduced into recipient cells by conjugation. Homologous recombination then results in antibiotic resistant single-crossover mutants in which the plasmid has integrated site-specifically into the chromosome. Subsequently, unmarked double-crossover mutants are isolated directly using sucrose-mediated counter-selection. This two-step process yields seamless mutations that are precise to a single base pair of DNA. The entire procedure requires ~2 weeks. PMID:26492139

  3. Comparisons of node-based and element-based approaches of assigning bone material properties onto subject-specific finite element models.

    PubMed

    Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F

    2015-08-01

    Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating the data file of the image intensity of a bone in a MATLAB program and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent of the type of elements. A number of FE meshes were tested and both gave accurate solutions; comparatively, the node-based approach involves less programming effort. The node-based approach is also independent of the type of analysis; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure for bone material properties. It is the simplest and most powerful approach and is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
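
    The node-based idea can be sketched in a few lines (here in Python rather than the paper's MATLAB/ABAQUS pipeline): sample the CT intensity at each node, convert it to density and then to a modulus, and let integration-point values follow from shape-function interpolation. The HU-to-density and density-to-modulus constants below are illustrative placeholders, not the paper's calibration.

```python
# Node-based material assignment sketch; calibration constants are placeholders.
import numpy as np

def hu_to_modulus(hu):
    rho = 0.0010 * hu + 1.0                    # HU -> apparent density (g/cm^3)
    return 6950.0 * np.clip(rho, 0.01, None) ** 1.49   # E in MPa (power law)

def nodal_moduli(nodes, ct, spacing):
    # nodes: (n, 3) coordinates; ct: 3-D intensity grid with isotropic spacing
    idx = np.round(nodes / spacing).astype(int)        # nearest-voxel lookup
    hu = ct[idx[:, 0], idx[:, 1], idx[:, 2]]
    return hu_to_modulus(hu)

rng = np.random.default_rng(2)
ct = rng.integers(-200, 1600, (64, 64, 64)).astype(float)
nodes = rng.uniform(0, 63, (100, 3))
print(nodal_moduli(nodes, ct, spacing=1.0).round(0)[:5])

# Inside a user subroutine, E at an integration point is then interpolated
# from the element's nodal E values with the same shape functions used for
# the geometry, which is what makes the approach element-type independent.
```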

  4. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
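
    The core idea of fitting with both data types can be illustrated by stacking height and gradient observations of a quadric patch into one overdetermined least-squares system; the surface form, coefficients and noise levels below are assumptions for illustration, not the report's model.

```python
# Least-squares surface fit using height and gradient data together.
import numpy as np

def design_height(x, y):   # z = a0 + a1 x + a2 y + a3 x^2 + a4 xy + a5 y^2
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])

def design_dzdx(x, y):     # analytic d/dx of the same basis
    z = np.zeros_like(x)
    return np.column_stack([z, np.ones_like(x), z, 2*x, y, z])

def design_dzdy(x, y):     # analytic d/dy of the same basis
    z = np.zeros_like(x)
    return np.column_stack([z, z, np.ones_like(x), z, x, 2*y])

rng = np.random.default_rng(3)
x, y = rng.random(30), rng.random(30)
true = np.array([1.0, 0.5, -0.2, 0.3, 0.1, -0.4])
z  = design_height(x, y) @ true + 0.01 * rng.standard_normal(30)
gx = design_dzdx(x, y) @ true + 0.01 * rng.standard_normal(30)
gy = design_dzdy(x, y) @ true + 0.01 * rng.standard_normal(30)

A = np.vstack([design_height(x, y), design_dzdx(x, y), design_dzdy(x, y)])
b = np.concatenate([z, gx, gy])
params, *_ = np.linalg.lstsq(A, b, rcond=None)   # stochastic parameter fit
print(params.round(3))
```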

  5. TEACH-M: A pilot study evaluating an instructional sequence for persons with impaired memory and executive functions.

    PubMed

    Ehlhardt, L A; Sohlberg, M M; Glang, A; Albin, R

    2005-08-10

    The purpose of this pilot study was to evaluate an instructional package that facilitates learning and retention of multi-step procedures for persons with severe memory and executive function impairments resulting from traumatic brain injury. The study used a multiple baseline across participants design. Four participants, two males and two females, ranging in age from 36-58 years, were taught a 7-step e-mail task. The instructional package (TEACH-M) was the experimental intervention and the number of correct e-mail steps learned was the dependent variable. Treatment effects were replicated across the four participants and maintained at 30 days post-treatment. Generalization and social validity data further supported the treatment programme. The results suggest that individuals with severe cognitive impairments are capable of learning new skills. Directions for future research include application of the instructional package to other multi-step procedures.

  6. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.

  7. Comparison of effects of dry versus wet swallowing on Eustachian tube function via a nine-step inflation/deflation test.

    PubMed

    Adali, M Kemal; Uzun, Cem

    2005-09-01

    The aim of the present study is to evaluate the effect of swallowing type (dry versus wet) on the outcome of a nine-step inflation/deflation tympanometric Eustachian tube function (ETF) test in healthy adults. Fourteen normal healthy volunteers, between 19 and 28 years of age, were included in the study. The nine-step test was performed in two different test procedures: (1) a test with dry swallows (dry test procedure) and (2) a test with liquid swallows (wet test procedure). If the equilibration of middle-ear (ME) pressure was successful in all the steps of the nine-step test, ETF was considered 'Good'. Otherwise, the test was considered 'Poor' and was repeated at a second session. In the dry test procedure, ETF was 'Good' in 21 ears at the first session and in 24 ears after the second session (p > 0.05). However, in the wet test procedure, ETF was 'Good' in 13 ears at the first session and in 21 ears after the second session (p < 0.05). At the first session, ETF was 'Good' in 21 and 13 ears in the dry and wet test procedures, respectively. The difference was statistically significant (p < 0.05). However, after the second session, the overall number of ears with 'Good' tubal function was almost the same in both test procedures (24 ears for the dry test procedure versus 21 ears for the wet test procedure; p > 0.05). Dry swallowing seems to be more effective for the equilibration of ME pressure. Thus, a single-session evaluation of ETF may be sufficient for the dry test procedure of the nine-step test. Swallowing with water may be easier for subjects, but a repetition of the test at a second session may be necessary when the test result is 'Poor'.

  8. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
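
    A hedged sketch of the idea for a 1-D two-component mixture is given below: the usual successive-approximations (EM-type) update is the omega = 1 case, and the generalized procedure moves a fraction omega of the way along that update. This simplified version ignores the parametrization details of the paper and is only meant to show where the step size enters.

```python
# Relaxed successive-approximations sketch for a two-component normal mixture.
import numpy as np

def em_step(x, w, mu, sig):
    # E-step: posterior probability that each point came from component 1
    pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    r1, r2 = w * pdf(mu[0], sig[0]), (1 - w) * pdf(mu[1], sig[1])
    g = r1 / (r1 + r2)
    # M-step: the fixed-point map whose fixed points solve the likelihood equations
    mu_new = np.array([(g * x).sum() / g.sum(), ((1 - g) * x).sum() / (1 - g).sum()])
    var = np.array([(g * (x - mu_new[0]) ** 2).sum() / g.sum(),
                    ((1 - g) * (x - mu_new[1]) ** 2).sum() / (1 - g).sum()])
    return g.mean(), mu_new, np.sqrt(var)

def relaxed_em(x, w, mu, sig, omega=1.3, iters=200):
    for _ in range(iters):
        w1, mu1, sig1 = em_step(x, w, mu, sig)
        # step size omega in (0, 2); omega = 1 recovers the plain procedure
        w, mu, sig = (w + omega * (w1 - w), mu + omega * (mu1 - mu),
                      sig + omega * (sig1 - sig))
    return w, mu, sig

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(2, 0.5, 200)])
print(relaxed_em(x, 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])))
```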

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  10. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer generally reduces the number of required optimization steps.
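
    A toy version of GPR-driven minimization on a 1-D surrogate is sketched below using the twice-differentiable Matérn kernel (nu = 2.5); the actual optimizer also uses gradients, overshooting and step-control logic omitted here, and the test function is a stand-in for a potential energy surface.

```python
# GPR surrogate minimization sketch; the objective and loop are illustrative.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

energy = lambda x: np.sin(3 * x) + 0.5 * x ** 2      # stand-in PES

X = np.array([[-2.0], [0.5], [2.0]])                  # initial evaluations
y = energy(X.ravel())
for _ in range(10):
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(X, y)
    # minimize the GPR mean, restarting from the best point found so far
    res = minimize(lambda x: gpr.predict(x.reshape(1, -1))[0],
                   X[np.argmin(y)], method="L-BFGS-B", bounds=[(-3, 3)])
    X = np.vstack([X, res.x.reshape(1, -1)])          # evaluate the suggestion
    y = np.append(y, energy(res.x[0]))
print("minimum near x =", X[np.argmin(y)].item())
```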

  11. A new integrated instrumental approach to autonomic nervous system assessment.

    PubMed

    Corazza, I; Barletta, G; Guaraldi, P; Cecere, A; Calandra-Buonaura, G; Altini, E; Zannoli, R; Cortelli, P

    2014-11-01

    The autonomic nervous system (ANS) regulates involuntary body functions and is commonly evaluated by measuring the reflex responses of systolic and diastolic blood pressure (BP) and heart rate (HR) to physiological and pharmacological stimuli. However, BP and HR values may not be sufficient to explain specific ANS events, and other parameters such as the electrocardiogram (ECG), BP waves, the respiratory rate and the electroencephalogram (EEG) are mandatory. Although ANS behaviour and its response to stimuli are well known, their clinical evaluation is often based on individual medical training and experience. As a result, ANS laboratories have been customized, making it impossible to standardize procedures and share results with colleagues. The aim of our study was to build a powerful, versatile instrument that is easy to use in clinical practice, standardizes procedures and allows a cross-analysis of all the parameters of interest for ANS evaluation. The new ANScovery System, developed by neurologists and technicians, is a two-step device: (1) it integrates physiological information from different already existing commercial modules, making it possible to cross-analyse, store and share data; (2) it standardizes procedures by means of an innovative tutor monitor able to guide the patient throughout ANS testing. The daily use of the new ANScovery System in clinical practice has proved it to be a versatile, easy-to-use instrument. Standardization of the manoeuvres and step-by-step guidance throughout the procedure avoid repetitions and allow intra- and inter-patient data comparison. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Preparation of a Nile Red-Pd-based fluorescent CO probe and its imaging applications in vitro and in vivo.

    PubMed

    Liu, Keyin; Kong, Xiuqi; Ma, Yanyan; Lin, Weiying

    2018-05-01

    Carbon monoxide (CO) is a key gaseous signaling molecule in living cells and organisms. This protocol illustrates the synthesis of a highly sensitive Nile Red (NR)-Pd-based fluorescent probe, NR-PdA, and its applications for detecting endogenous CO in tissue culture cells, ex vivo organs, and zebrafish embryos. In the NR-PdA synthesis process, 3-(diethylamino)phenol reacts with sodium nitrite under acidic conditions to afford 5-(diethylamino)-2-nitrosophenol hydrochloride (compound 1), which is further treated with 1-naphthalenol at high temperature to provide the NR dye via a cyclization reaction. Finally, NR is reacted with palladium acetate to obtain the desired Pd-based fluorescent probe NR-PdA. NR-PdA possesses excellent two-photon excitation and near-IR emission properties, high stability, low background fluorescence, and a low detection limit. In addition to the chemical synthesis procedures, we provide step-by-step procedures for imaging endogenous CO in RAW 264.7 cells, mouse organs ex vivo, and live zebrafish embryos. The synthesis of the probe requires ∼4 d, and the biological imaging experiments take ∼14 d.

  13. A Novel Methodology for the Synthesis of Acyloxy Castor Polyol Esters: Low Pour Point Lubricant Base Stocks.

    PubMed

    Kamalakar, Kotte; Mahesh, Goli; Prasad, Rachapudi B N; Karuna, Mallampalli S L

    2015-01-01

    Castor oil, a non-edible oil containing the hydroxyl fatty acid ricinoleic acid (89.3%), was chemically modified employing a two-step procedure. The first step involved acylation (with C(2)-C(6) alkanoic anhydrides) of the -OH functionality employing a green catalyst, Kieselguhr-G, in a solvent-free medium. The catalyst was filtered off after the reaction and reused several times without loss of activity. The second step was esterification of the acylated castor fatty acids with a branched mono-alcohol, 2-ethylhexanol, and with polyols, namely neopentyl glycol (NPG), trimethylolpropane (TMP) and pentaerythritol (PE), to obtain 16 novel base stocks. The base stocks, when evaluated for different lubricant properties, showed very low pour points (-30 to -45°C), broad viscosity ranges (20.27 cSt to 370.73 cSt), high viscosity indices (144-171), good thermal and oxidative stabilities, and high weld load capacities, making them suitable for multi-range industrial applications such as hydraulic fluids, metal working fluids, gear oils, forging and aviation applications. The study revealed that acylated branched mono- and polyol esters rich in monounsaturation are desirable for developing low pour point base stocks.

  14. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  15. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  16. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  17. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  18. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  19. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.

    PubMed

    Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin

    2018-06-22

    Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel Correlation Filter tracker based on compressed deep Convolutional Neural Network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability to re-identify (ReID) a tracked object once it gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
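
    The core of a correlation-filter tracker can be sketched in a few lines of Fourier-domain algebra. The single-channel, MOSSE-style version below is a simplification: the paper's tracker operates on compressed deep CNN features rather than raw pixels, and adds the detection and ReID machinery omitted here.

```python
# MOSSE-style correlation filter sketch (single channel, raw pixels).
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-4):
    # desired response: a Gaussian peak centred on the target in the patch
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # regularized filter

def track(H, patch):
    # correlate the new patch with the filter; the response peak locates the target
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)

rng = np.random.default_rng(7)
frame = rng.random((64, 64))
H = train_filter(frame)
print(track(H, np.roll(frame, (3, -2), axis=(0, 1))))  # peak shifts with the target
```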

  20. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern-matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
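
    The central step, estimating the delay from the slope of the cross-spectrum phase, can be sketched as below; the signals are synthetic, and the full adaptive cost (slope plus standard error of the fit) is reduced here to a single line fit over a coherent band.

```python
# Time-delay estimate from the cross-spectrum phase slope (synthetic data).
import numpy as np
from scipy.signal import csd

fs, d = 1000.0, 0.012                      # sample rate (Hz), true delay (s)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
x = rng.standard_normal(t.size)
y = np.roll(x, int(d * fs)) + 0.1 * rng.standard_normal(t.size)  # delayed copy

f, Pxy = csd(x, y, fs=fs, nperseg=1024)
band = (f > 5) & (f < 100)                 # keep a well-coherent band
phase = np.unwrap(np.angle(Pxy[band]))
slope = np.polyfit(f[band], phase, 1)[0]   # phase is linear in f: -2*pi*f*d
print("estimated delay:", -slope / (2 * np.pi), "s")
```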

  1. Determination of Total Carbohydrates in Algal Biomass: Laboratory Analytical Procedure (LAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Wychen, Stefanie; Laurens, Lieve M. L.

    This procedure uses two-step sulfuric acid hydrolysis to hydrolyze the polymeric forms of carbohydrates in algal biomass into monomeric subunits. The monomers are then quantified by either HPLC or a suitable spectrophotometric method.

  2. Mobile magnetic particles as solid-supports for rapid surface-based bioanalysis in continuous flow.

    PubMed

    Peyman, Sally A; Iles, Alexander; Pamme, Nicole

    2009-11-07

    An extremely versatile microfluidic device is demonstrated in which multi-step (bio)chemical procedures can be performed in continuous flow. The system operates by generating several co-laminar flow streams, which contain reagents for specific (bio)reactions across a rectangular reaction chamber. Functionalized magnetic microparticles are employed as mobile solid-supports and are pulled from one side of the reaction chamber to the other by use of an external magnetic field. As the particles traverse the co-laminar reagent streams, binding and washing steps are performed on their surface in one operation in continuous flow. The applicability of the platform was first demonstrated by performing a proof-of-principle binding assay between streptavidin coated magnetic particles and biotin in free solution with a limit of detection of 20 ng mL(-1) of free biotin. The system was then applied to a mouse IgG sandwich immunoassay as a first example of a process involving two binding steps and two washing steps, all performed within 60 s, a fraction of the time required for conventional testing.

  3. Effective Field Theory on Manifolds with Boundary

    NASA Astrophysics Data System (ADS)

    Albert, Benjamin I.

    In the monograph Renormalization and Effective Field Theory, Costello made two major advances in rigorous quantum field theory. Firstly, he gave an inductive position space renormalization procedure for constructing an effective field theory that is based on heat kernel regularization of the propagator. Secondly, he gave a rigorous formulation of quantum gauge theory within effective field theory that makes use of the BV formalism. In this work, we extend Costello's renormalization procedure to a class of manifolds with boundary and make preliminary steps towards extending his formulation of gauge theory to manifolds with boundary. In addition, we reorganize the presentation of the preexisting material, filling in details and strengthening the results.

  4. 16 CFR § 1610.6 - Test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Test procedure. § 1610.6 Section § 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  5. Use of Low-Fidelity Simulation Laboratory Training for Teaching Radiology Residents CT-Guided Procedures.

    PubMed

    Picard, Melissa; Nelson, Rachel; Roebel, John; Collins, Heather; Anderson, M Bret

    2016-11-01

    To determine the benefit of the addition of low-fidelity simulation-based training to the standard didactic-based training in teaching radiology residents common CT-guided procedures. This was a prospective study involving 24 radiology residents across all years in a university program. All residents underwent standard didactic lecture followed by low-fidelity simulation-based training on three common CT-guided procedures: random liver biopsy, lung nodule biopsy, and drain placement. Baseline knowledge, confidence, and performance assessments were obtained after the didactic session and before the simulation training session. Approximately 2 months later, all residents participated in a simulation-based training session covering all three of these procedures. Knowledge, confidence, and performance data were obtained afterward. These assessments covered topics related to preprocedure workup, intraprocedure steps, and postprocedure management. Knowledge data were collected based on a 15-question assessment. Confidence data were obtained based on a 5-point Likert-like scale. Performance data were obtained based on successful completion of predefined critical steps. There was significant improvement in knowledge (P = .005), confidence (P < .008), and tested performance (P < .043) after the addition of simulation-based training to the standard didactic curriculum for all procedures. This study suggests that the addition of low-fidelity simulation-based training to a standard didactic-based curriculum is beneficial in improving resident knowledge, confidence, and tested performance of common CT-guided procedures. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  6. Preprocessing and Analysis of LC-MS-Based Proteomic Data

    PubMed Central

    Tsai, Tsung-Heng; Wang, Minkun; Ressom, Habtom W.

    2016-01-01

    Liquid chromatography coupled with mass spectrometry (LC-MS) has been widely used for profiling protein expression levels. This chapter is focused on LC-MS data preprocessing, which is a crucial step in the analysis of LC-MS based proteomics. We provide a high-level overview, highlight associated challenges, and present a step-by-step example for analysis of data from LC-MS based untargeted proteomic study. Furthermore, key procedures and relevant issues with the subsequent analysis by multiple reaction monitoring (MRM) are discussed. PMID:26519169

  7. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    PubMed

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-10-30

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
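
    A bare-bones WkNN estimator makes the discussion concrete; the fingerprints, the similarity metric (inverse Euclidean distance) and k below are illustrative choices, not the paper's mixed-metric design. In the two-step variant, the reference points passed in would be only the cluster representatives selected by affinity propagation.

```python
# Minimal WkNN position estimate from RSS fingerprints (synthetic data).
import numpy as np

def wknn(rss, fingerprints, positions, k=4):
    # similarity = inverse Euclidean distance between RSS vectors
    dist = np.linalg.norm(fingerprints - rss, axis=1)
    nearest = np.argsort(dist)[:k]
    w = 1.0 / (dist[nearest] + 1e-9)
    return (w[:, None] * positions[nearest]).sum(axis=0) / w.sum()

rng = np.random.default_rng(5)
fingerprints = rng.uniform(-90, -30, (50, 6))   # 50 RPs x 6 access points (dBm)
positions = rng.uniform(0, 20, (50, 2))         # RP coordinates (m)
measured = fingerprints[7] + rng.normal(0, 2, 6)
print(wknn(measured, fingerprints, positions))  # close to positions[7]
```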

  8. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning

    PubMed Central

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984

  9. Implementation and evaluation of a dilation and evacuation simulation training curriculum.

    PubMed

    York, Sloane L; McGaghie, William C; Kiley, Jessica; Hammond, Cassing

    2016-06-01

    To evaluate obstetrics and gynecology resident physicians' performance following a simulation curriculum on dilation and evacuation (D&E) procedures. This study included two phases: simulation curriculum development and resident physician performance evaluation following training on a D&E simulator. Trainees participated in two evaluations. Simulation training evaluated participants performing six cases on a D&E simulator, measuring procedural time and a 26-step checklist of D&E steps. The operative training portion evaluated residents' performance after training on the simulator using mastery learning techniques. Intra-operative evaluation was based on a 21-step checklist score, Objective Structured Assessment of Technical Skills (OSATS), and percentage of cases completed. Twenty-two residents participated in simulation training, demonstrating improved performance from cases one and two to cases five and six, as measured by checklist score and procedural time (p<.001 and p=.001, respectively). Of 10 participants in the operative training, all performed at least three D&Es, while seven performed at least six cases. While checklist scores did not change significantly from the first to sixth case (mean for first case: 18.3; for sixth case: 19.6; p=.593), OSATS ratings improved from case one (19.7) to case three (23.5; p=.001) and to case six (26.8; p=.005). Trainees completed approximately 71.6% of their first case (range: 21.4-100%). By case six, the six participants performed 81.2% of the case (range: 14.3-100%). D&E simulation using a newly-developed uterine model and simulation curriculum improves resident technical skills. Simulation training with mastery learning techniques transferred to high level of performance in OR using checklist. The OSATS measured skills and showed improvement in performance with subsequent cases. Implementation of a D&E simulation curriculum offers potential for improved surgical training and abortion provision. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The effect of different exercise protocols and regression-based algorithms on the assessment of the anaerobic threshold.

    PubMed

    Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O

    2014-09-01

    The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years and body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for the V̇O2 (step = 1.7 [0.6] L·min-1 and ramp = 1.5 [0.4] L·min-1) and heart rate (HR) (step = 133 [21] b·min-1 and ramp = 124 [15] b·min-1) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggest that the protocol selection, but not the regression-based algorithm, can affect the assessment of the V̇O2 and HR at the AT.
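
    One common regression-based AT algorithm fits two line segments to the VCO2-versus-VO2 relation and places the threshold at the breakpoint minimizing the pooled residual sum of squares (the v-slope idea). The sketch below uses synthetic data and is not necessarily one of the three algorithms compared in the study.

```python
# Two-segment (v-slope style) breakpoint search for the anaerobic threshold.
import numpy as np

def vslope_breakpoint(vo2, vco2):
    best, bp = np.inf, None
    for i in range(5, len(vo2) - 5):           # candidate breakpoint indices
        r1 = np.polyfit(vo2[:i], vco2[:i], 1, full=True)[1]
        r2 = np.polyfit(vo2[i:], vco2[i:], 1, full=True)[1]
        rss = r1.sum() + r2.sum()              # pooled residual sum of squares
        if rss < best:
            best, bp = rss, vo2[i]
    return bp                                  # VO2 at the estimated AT

vo2 = np.linspace(0.8, 3.0, 60)                # L/min, ramp-like increase
vco2 = np.where(vo2 < 1.9, 0.9 * vo2, 0.9 * 1.9 + 1.6 * (vo2 - 1.9))
vco2 += np.random.default_rng(9).normal(0.0, 0.02, vo2.size)
print("AT near VO2 =", vslope_breakpoint(vo2, vco2), "L/min")
```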

  11. 32 CFR 644.409 - Procedures for Interchange of National Forest Lands.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Procedures for Interchange of National Forest... Interests § 644.409 Procedures for Interchange of National Forest Lands. (a) General. The interchange of national forest lands is accomplished in three steps: first, agreement must be reached between the two...

  12. Two-step tunneling technique of deep brain stimulation extension wires-a description.

    PubMed

    Fontaine, Denys; Vandersteen, Clair; Saleh, Christian; von Langsdorff, Daniel; Poissonnet, Gilles

    2013-12-01

    While a significant body of literature exists on the intracranial part of deep brain stimulation surgery, the equally important second part of the intervention, related to the subcutaneous tunneling of the deep brain stimulation extension wires, is rarely described. The tunneling strategy can consist of a single passage of the extension wires from the frontal incision site to the subclavicular area, or of a two-step approach that adds a retro-auricular counter-incision. Each technique harbors its own risk of intraoperative and postoperative complications. At our center, we perform a two-step tunneling procedure that we developed based on a cadaveric study. In 125 consecutive patients operated on since 2002, we did not encounter any complication related to our tunneling method. Insufficient data exist to fully evaluate the advantages and disadvantages of each tunneling technique. It is of critical importance that authors detail their tunneling modus operandi and report the presence or absence of complications. The gathered data pool may help to formulate definitive conclusions on the safest method for subcutaneous tunneling of extension wires in deep brain stimulation.

  13. Improving patient safety during insertion of peripheral venous catheters: an observational intervention study.

    PubMed

    Kampf, Günter; Reise, Gesche; James, Claudia; Gittelbauer, Kirsten; Gosch, Jutta; Alpers, Birgit

    2013-01-01

    Peripheral venous catheters are frequently used in hospitalized patients but increase the risk of nosocomial bloodstream infection. Evidence-based guidelines describe specific steps that are known to reduce infection risk. However, the degree of guideline implementation in clinical practice is not known. The aim of this study was to determine the use of specific steps for insertion of peripheral venous catheters in clinical practice and to implement a multimodal intervention aimed at improving both compliance and the optimum order of the steps. The study was conducted at University Hospital Hamburg. An optimum procedure for inserting a peripheral venous catheter was defined based on three evidence-based guidelines (WHO, CDC, RKI) including five steps with 1A or 1B level of evidence: hand disinfection before patient contact, skin antisepsis of the puncture site, no palpation of treated puncture site, hand disinfection before aseptic procedure, and sterile dressing on the puncture site. A research nurse observed and recorded procedures for peripheral venous catheter insertion for healthcare workers in four different departments (endoscopy, central emergency admissions, pediatrics, and dermatology). A multimodal intervention with 5 elements was established (teaching session, dummy training, e-learning tool, tablet and poster, and direct feedback), followed by a second observation period. During the last observation week, participants evaluated the intervention. In the control period, 207 insertions were observed, and 202 in the intervention period. Compliance improved significantly for four of five steps (e.g., from 11.6% to 57.9% for hand disinfection before patient contact; p<0.001, chi-square test). Compliance with skin antisepsis of the puncture site was high before and after intervention (99.5% before and 99.0% after). Performance of specific steps in the correct order also improved (e.g., from 7.7% to 68.6% when three of five steps were done; p<0.001). The intervention was described as helpful by 46.8% of the participants, as neutral by 46.8%, and as disruptive by 6.4%. A multimodal strategy to improve both compliance with safety steps for peripheral venous catheter insertion and performance of an optimum procedure was effective and was regarded helpful by healthcare workers.

  14. Project W-314 specific test and evaluation plan for transfer line SN-633 (241-AX-B to 241-AY-02A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hays, W.H.

    1998-03-20

    The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made by the addition of the SN-633 transfer line by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). This STEP encompasses all testing activities required to demonstrate compliance to the project design criteria as it relates to the addition of transfer line SN-633. The Project Design Specifications (PDS) identify the specific testing activities required for the Project. Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.

  15. A METHOD FOR DETERMINING THE COMPATIBILITY OF HAZARDOUS WASTES

    EPA Science Inventory

    This report describes a method for determining the compatibility of the binary combinations of hazardous wastes. The method consists of two main parts, namely: (1) the step-by-step compatibility analysis procedures, and (2) the hazardous wastes compatibility chart. The key elemen...

  16. Terminal-Area Aircraft Intent Inference Approach Based on Online Trajectory Clustering.

    PubMed

    Yang, Yang; Zhang, Jun; Cai, Kai-quan

    2015-01-01

    Terminal-area aircraft intent inference (T-AII) is a prerequisite to detect and avoid potential aircraft conflicts in the terminal airspace. T-AII challenges state-of-the-art AII approaches due to the uncertainties of the air traffic situation, in particular the undefined flight routes and frequent maneuvers. In this paper, a novel T-AII approach is introduced to address these limitations by solving the problem in two steps: intent modeling and intent inference. In the modeling step, an online trajectory clustering procedure is designed to recognize the routes actually available in real time, in place of the missing planned routes. In the inference step, we then present a probabilistic T-AII approach based on multiple flight attributes to improve the inference performance in maneuvering scenarios. The proposed approach is validated with real radar trajectories and flight attribute data from 34 days collected in the Chengdu terminal area in China. Preliminary results show the efficacy of the presented approach.
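
    The modeling step amounts to incremental clustering of incoming trajectories into candidate routes. A minimal Python sketch of one way such online clustering can work (running-mean clusters over trajectories resampled to a common length; the class, threshold, and all values are illustrative, not taken from the paper):

      import numpy as np

      def traj_distance(a, b):
          # Mean pointwise Euclidean distance between two trajectories,
          # each resampled to the same (n_points, 2) array of positions.
          return np.mean(np.linalg.norm(a - b, axis=1))

      class OnlineTrajectoryClusters:
          # Each cluster keeps a running-mean trajectory standing in for
          # one "available route"; threshold is a tuning parameter.
          def __init__(self, threshold=5.0):
              self.threshold = threshold
              self.means, self.counts = [], []

          def add(self, traj):
              traj = np.asarray(traj, dtype=float)
              if self.means:
                  d = [traj_distance(traj, m) for m in self.means]
                  k = int(np.argmin(d))
                  if d[k] < self.threshold:
                      self.counts[k] += 1   # incremental mean update
                      self.means[k] += (traj - self.means[k]) / self.counts[k]
                      return k              # existing route index
              self.means.append(traj.copy())
              self.counts.append(1)
              return len(self.means) - 1    # new route created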

  17. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
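
    The hillslope routing step is a discrete convolution of runoff with a gamma-distribution unit hydrograph. A short Python sketch of that step (parameter values are illustrative only, and mizuRoute's actual implementation may differ in detail):

      import numpy as np
      from scipy.stats import gamma

      def gamma_unit_hydrograph(shape, timescale, n_steps, dt=1.0):
          # Per-step fractions of a gamma-distribution unit hydrograph.
          edges = np.arange(n_steps + 1) * dt
          return np.diff(gamma.cdf(edges, a=shape, scale=timescale))

      def hillslope_route(runoff, uh):
          # Convolve runoff with the unit hydrograph; truncate to series length.
          return np.convolve(runoff, uh)[: len(runoff)]

      # Route a single runoff pulse of 10 units (all values illustrative).
      uh = gamma_unit_hydrograph(shape=2.5, timescale=1.0, n_steps=20)
      outflow = hillslope_route(np.array([10.0] + [0.0] * 29), uh)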

  18. 3D-fabrication of tunable and high-density arrays of crystalline silicon nanostructures

    NASA Astrophysics Data System (ADS)

    Wilbers, J. G. E.; Berenschot, J. W.; Tiggelaar, R. M.; Dogan, T.; Sugimura, K.; van der Wiel, W. G.; Gardeniers, J. G. E.; Tas, N. R.

    2018-04-01

    In this report, a procedure for the 3D-nanofabrication of ordered, high-density arrays of crystalline silicon nanostructures is described. Two nanolithography methods were utilized for the fabrication of the nanostructure array, viz. displacement Talbot lithography (DTL) and edge lithography (EL). DTL is employed to perform two (orthogonal) resist-patterning steps to pattern a thin Si3N4 layer. The resulting patterned double layer serves as an etch mask for all further etching steps for the fabrication of ordered arrays of silicon nanostructures. The arrays are made by means of anisotropic wet etching of silicon in combination with an isotropic retraction etch step of the etch mask, i.e. EL. The procedure enables fabrication of nanostructures with dimensions below 15 nm and a potential density of 10¹⁰ crystals cm⁻².

  19. Prediction of protein structural classes by recurrence quantification analysis based on chaos game representation.

    PubMed

    Yang, Jian-Yi; Peng, Zhen-Ling; Yu, Zu-Guo; Zhang, Rui-Jie; Anh, Vo; Wang, Desheng

    2009-04-21

    In this paper, we intend to predict protein structural classes (alpha, beta, alpha+beta, or alpha/beta) for low-homology data sets. Two widely used data sets were studied: 1189 (containing 1092 proteins) and 25PDB (containing 1673 proteins), with sequence homologies of 40% and 25%, respectively. We propose to decompose the chaos game representation of proteins into two kinds of time series. Then, a novel and powerful nonlinear analysis technique, recurrence quantification analysis (RQA), is applied to analyze these time series. For a given protein sequence, a total of 16 characteristic parameters can be calculated with RQA, and these are treated as the feature representation of the protein sequence. Based on such feature representation, the structural class for each protein is predicted with Fisher's linear discriminant algorithm. The jackknife test is used to test and compare our method with other existing methods. The overall accuracies with the step-by-step procedure are 65.8% and 64.2% for the 1189 and 25PDB data sets, respectively. With the widely used one-against-others procedure, we compare our method with five other existing methods. In particular, the overall accuracies of our method are 6.3% and 4.1% higher for the two data sets, respectively. Furthermore, only 16 parameters are used in our method, which is fewer than in other methods. This suggests that the current method may play a complementary role to the existing methods and is promising for the prediction of protein structural classes.
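
    Of the 16 RQA parameters, the simplest is the recurrence rate, the density of points in the recurrence matrix. A minimal Python sketch of that single feature (computed without phase-space embedding, unlike a full RQA):

      import numpy as np

      def recurrence_rate(series, eps):
          # Fraction of sample pairs whose values lie within eps of each
          # other (the diagonal self-matches are included here).
          x = np.asarray(series, dtype=float)
          R = np.abs(x[:, None] - x[None, :]) < eps   # recurrence matrix
          return R.mean()

      t = np.linspace(0, 8 * np.pi, 200)
      print(recurrence_rate(np.sin(t), eps=0.1))      # high for a periodic series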

  20. Mechanochemical synthesis and intercalation of Ca(II)Fe(III)-layered double hydroxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferencz, Zs.; Szabados, M.; Varga, G.

    2016-01-15

    A mechanochemical method (grinding the components without added water – dry grinding, followed by further grinding in the presence of minute amount of water or NaOH solution – wet grinding) was used in this work for the preparation and intercalation of CaFe-layered double hydroxides (LDHs). Both the pristine LDHs and the amino acid anion (cystinate and tyrosinate) intercalated varieties were prepared by the two-step grinding procedure in a mixer mill. By systematically changing the conditions of the preparation method, a set of parameters could be determined, which led to the formation of close to phase-pure LDH. The optimisation procedure was also applied for the intercalation processes of the amino acid anions. The resulting materials were structurally characterised by a range of methods (X-ray diffractometry, scanning electron microscopy, energy dispersive analysis, thermogravimetry, X-ray absorption and infra-red spectroscopies). It was proven that this simple mechanochemical procedure was able to produce complex organic–inorganic nanocomposites: LDHs intercalated with amino acid anions. - Graphical abstract: Amino acid anion-Ca(II)Fe(III)-LDHs were successfully prepared by a two-step milling procedure. - Highlights: • Synthesis of pristine and amino acid intercalated CaFe-LDHs by two-step milling. • Identifying the optimum synthesis and intercalation parameters. • Characterisation of the samples with a range of instrumental methods.

  1. Comparison of patency and cost-effectiveness of self-expandable metal and plastic stents used for malignant biliary strictures: a Polish single-center study.

    PubMed

    Budzyńska, Agnieszka; Nowakowska-Duława, Ewa; Marek, Tomasz; Hartleb, Marek

    2016-10-01

    Most patients with malignant biliary obstruction are suited only for palliation by endoscopic drainage with plastic stents (PS) or self-expandable metal stents (SEMS). To compare the clinical outcome and costs of biliary stenting with SEMS and PS in patients with malignant biliary strictures, a total of 114 patients with malignant jaundice who underwent 376 endoscopic retrograde biliary drainage (ERBD) procedures were studied. ERBD with the placement of PS was performed in 80 patients, with one-step SEMS in 20 patients and two-step SEMS in 14 patients. Significantly fewer ERBD interventions were performed in patients with one-step SEMS than with PS or the two-step SEMS technique (2.0±1.12 vs. 3.1±1.7 or 5.7±2.1, respectively, P<0.0001). The median hospitalization duration per procedure was similar for the three groups of patients. The patients' survival time was the longest in the two-step SEMS group in comparison with the one-step SEMS and PS groups (596±270 vs. 276±141 or 208±219 days, P<0.001). Overall median time to recurrent biliary obstruction was 89.3±159 days for PS and 120.6±101 days for SEMS (P=0.01). The total cost of hospitalization with ERBD was higher for two-step SEMS than for one-step SEMS or PS (1448±312, 1152±135 and 977±156 €, P<0.0001). However, the estimated annual cost of medical care for one-step SEMS was higher than that for the two-step SEMS or PS groups (4618, 4079, and 3995 €, respectively). Biliary decompression by SEMS is associated with longer patency and a reduced number of auxiliary procedures; however, repeated PS insertion still remains the most cost-effective strategy.

  2. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computer.

  3. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  4. A twin purification/enrichment procedure based on two versatile solid/liquid extracting agents for efficient uptake of ultra-trace levels of lorazepam and clonazepam from complex bio-matrices.

    PubMed

    Hemmati, Maryam; Rajabi, Maryam; Asghari, Alireza

    2017-11-17

    In this research work, two consecutive dispersive solid/liquid phase microextractions based on efficient extraction media were developed for the effective and clean pre-concentration of clonazepam and lorazepam from complicated bio-samples. The magnetic nature of the proposed nanoadsorbent made the clean-up step convenient and swift (~5 min), followed by further enrichment via a highly effective and rapid emulsification microextraction process (~4 min) based on a deep eutectic solvent (DES). Finally, instrumental analysis was carried out by high-performance liquid chromatography with ultraviolet detection. The solid phase used was a magnetic nanocomposite, polythiophene-sodium dodecyl benzene sulfonate/iron oxide (PTh-DBSNa/Fe3O4), prepared easily and cost-effectively by co-precipitation followed by in situ sonochemical oxidative polymerization. Characterization by FESEM, XRD, and EDX confirmed the favorable physico-chemical properties of this nanosorbent. The DES liquid extraction agent, based on biodegradable choline chloride, offered high efficiency, acceptable safety, low cost, and a facile, mild synthesis route. The parameters involved in this hyphenated procedure, evaluated via central composite design (CCD), showed that the best extraction conditions consisted of an initial pH value of 7.2, 17 mg of the PTh-DBSNa/Fe3O4 nanocomposite, 20 air-agitation cycles (first step), 245 μL of methanol, 250 μL of DES, 440 μL of THF, and 8 air-agitation cycles (second step). Under the optimal conditions, the studied drugs could be accurately determined over wide linear dynamic ranges (LDRs) of 4.0-3000 ng mL-1 and 2.0-2000 ng mL-1 for clonazepam and lorazepam, respectively, with low limits of detection (LODs) ranging from 0.7 to 1.0 ng mL-1. The enrichment factor (EF) and percentage extraction recovery (%ER) values were 75 and 57% for clonazepam and 56 and 42% for lorazepam at a spiked level of 75.0 ng mL-1, with good repeatability (relative standard deviations (RSDs) below 5.9%, n=3). These analytical features allow accurate drug analyses at therapeutically relevant levels below potentially toxic concentrations, demonstrating the proper purification/enrichment capability of the proposed microextraction procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Numerical modeling and optimization of the Iguassu gas centrifuge

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimizing the GC is performed, providing the maximum of the separative power. The optimization is based on the BOBYQA method, exploiting the results of the numerical simulations of the hydrodynamics and the diffusion of the mixture of isotopes. Fast convergence is achieved due to the use of a direct solver in the solution of the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
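
    The outer loop is derivative-free optimization of an expensive objective. A Python sketch of that structure, using SciPy's Powell method as a stand-in for BOBYQA (which is available, e.g., in the Py-BOBYQA package); the objective below is a placeholder for the hydrodynamics/diffusion pipeline, not the actual physics:

      import numpy as np
      from scipy.optimize import minimize

      def neg_separative_power(params):
          # Placeholder for the expensive pipeline: solve rotor hydrodynamics,
          # then isotope diffusion, then evaluate separative power. The smooth
          # function below is purely illustrative.
          target = np.array([0.7, 1.3, 0.4])          # made-up optimum
          return -np.exp(-np.sum((np.asarray(params) - target) ** 2))

      # Derivative-free minimization of the negative separative power.
      res = minimize(neg_separative_power, x0=[0.5, 1.0, 0.5], method="Powell",
                     options={"xtol": 1e-4, "maxfev": 200})
      print(res.x, -res.fun)                          # optimal parameters, power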

  6. Porous silicon carbide (SIC) semiconductor device

    NASA Technical Reports Server (NTRS)

    Shor, Joseph S. (Inventor); Kurtz, Anthony D. (Inventor)

    1996-01-01

    Porous silicon carbide is fabricated according to techniques which result in a significant portion of nanocrystallites within the material in the sub-10-nanometer regime. Techniques are described for passivating porous silicon carbide which result in the fabrication of optoelectronic devices that exhibit brighter blue luminescence and improved qualities. Based on certain of the techniques described, porous silicon carbide is used as a sacrificial layer for the patterning of silicon carbide; the porous silicon carbide is then removed from the bulk substrate by oxidation and other methods. The techniques described employ a two-step process to pattern bulk silicon carbide, in which selected areas of the wafer are made porous and the porous layer is subsequently removed. The process to form porous silicon carbide exhibits dopant selectivity, and a two-step etching procedure is implemented for silicon carbide multilayers.

  7. A Step-by-Step Design Methodology for a Base Case Vanadium Redox-Flow Battery

    ERIC Educational Resources Information Center

    Moore, Mark; Counce, Robert M.; Watson, Jack S.; Zawodzinski, Thomas A.; Kamath, Haresh

    2012-01-01

    The purpose of this work is to develop an evolutionary procedure to be used by Chemical Engineering students for the base-case design of a Vanadium Redox-Flow Battery. The design methodology is based on the work of Douglas (1985) and provides a profitability analysis at each decision level so that more profitable alternatives and directions can be…

  8. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling for biomedical model of human bone from computerized tomography (CT) images: one is based on a triangular mesh and the other is based on the parametric surface model and is more popular in practice. The outline and modeling procedures for the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical calculation results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is listed. The parametric surface based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular mesh based method is more robust and efficient.

  9. Global Properties of Fully Convective Accretion Disks from Local Simulations

    NASA Astrophysics Data System (ADS)

    Bodo, G.; Cattaneo, F.; Mignone, A.; Ponzo, F.; Rossi, P.

    2015-08-01

    We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.

  10. Seven-Step Problem-Based Learning in an Interaction Design Course

    ERIC Educational Resources Information Center

    Schultz, Nette; Christensen, Hans Peter

    2004-01-01

    The objective in this paper is the implementation of the highly structured seven-step problem-based learning (PBL) procedure as part of the learning process in a human-computer interaction (HCI) design course at the Technical University of Denmark, taking into account the common learning processes in PBL and the interaction design process. These…

  11. Determination of cadmium in sediments by diluted HCl extraction and isotope dilution ICP-MS.

    PubMed

    Terán-Baamonde, Javier; Soto-Ferreiro, Rosa-María; Carlosena, Alatzne; Andrade, José-Manuel; Prada, Darío

    2018-08-15

    Isotope dilution ICP-MS is proposed to measure the mass fraction of Cd extracted by diluted HCl in marine sediments, using a fast and simple extraction procedure based on ultrasonic probe agitation. The 111Cd isotope was added before the extraction to achieve isotope equilibration with the native Cd solubilized from the sample. The parameters affecting the trueness and precision of the isotope ratio measurements were evaluated carefully and subsequently corrected in order to minimize errors; they were: detector dead time, spectral interferences, mass discrimination factor and optimum sample/spike ratio. To validate the method, the mass fraction of Cd extracted was compared with the sum of the certified contents of the three steps of the sequential extraction procedure of the Standards, Measurements and Testing Programme (SM&T) by analysing the BCR 701 sediment. The certified and measured values agreed, giving a measured/certified mass fraction ratio of 1.05. Further, the extraction procedure itself was studied by adding the enriched isotope after the extraction step, which made it possible to verify whether analyte losses occurred during this process. Two additional reference sediments with certified total cadmium contents were also analysed. The method provided very good precision (0.9%, RSD) and a low detection limit of 1.8 ng g-1. The procedural uncertainty budget was estimated following the EURACHEM Guide by means of the 'GUM Workbench' software, giving a relative expanded uncertainty of 1.5%. The procedure was applied to determine the bioaccessible mass fraction of Cd in sediments from two environmentally and economically important areas of Galicia (the rias of Arousa and Vigo, NW Spain). Copyright © 2018 Elsevier B.V. All rights reserved.
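
    The single isotope-dilution equation behind such measurements can be written compactly. A Python sketch under stated assumptions (the spike abundances and molar masses in the example call are illustrative; the natural amount fractions of 111Cd and 114Cd are approximately 12.80% and 28.73%):

      def idms_mass_fraction(w_spike, m_spike, m_sample, R_blend,
                             a_spike, b_spike, a_sample, b_sample,
                             M_sample, M_spike):
          # a_* / b_* are amount fractions of the spike isotope (111Cd) and
          # the reference isotope (e.g. 114Cd); R_blend is the measured
          # 111Cd/114Cd ratio in the spiked sample; M_* are molar masses;
          # w_spike is the Cd mass fraction of the spike solution.
          return (w_spike * (m_spike / m_sample) * (M_sample / M_spike)
                  * (a_spike - R_blend * b_spike)
                  / (R_blend * b_sample - a_sample))

      # Example with natural Cd abundances and an assumed 111Cd-enriched spike.
      w = idms_mass_fraction(w_spike=0.5e-6, m_spike=0.1, m_sample=0.5,
                             R_blend=2.0, a_spike=0.96, b_spike=0.01,
                             a_sample=0.1280, b_sample=0.2873,
                             M_sample=112.41, M_spike=111.5)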

  12. Variability in source sediment contributions by applying different statistic test for a Pyrenean catchment.

    PubMed

    Palazón, L; Navas, A

    2017-06-01

    Information on sediment contributions and transport dynamics from contributing catchments is needed to develop management plans that tackle environmental problems related to the effects of fine sediment, such as reservoir siltation. In this respect, the fingerprinting technique is an indirect technique known to be valuable and effective for sediment source identification in river catchments. Large variability in sediment delivery was found in previous studies in the Barasona catchment (1509 km2, Central Spanish Pyrenees). Simulation results with SWAT and fingerprinting approaches identified badlands and agricultural uses as the main contributors to sediment supply in the reservoir. In this study, the <63 μm fraction of the surface reservoir sediments (top 2 cm) is investigated following the fingerprinting procedure to assess how the use of different statistical procedures affects the estimated source contributions. Three optimum composite fingerprints were selected to discriminate between source contributions based on land uses/land covers from the same dataset by the application of (1) discriminant function analysis, and its combination (as a second step) with (2) the Kruskal-Wallis H-test and (3) principal components analysis. Source contribution results differed between the assessed options, with the greatest differences observed for option #3, the two-step process of principal components analysis and discriminant function analysis. The characteristics of the solutions from the applied mixing model and the conceptual understanding of the catchment showed that the most reliable solution was achieved using option #2, the two-step process of the Kruskal-Wallis H-test and discriminant function analysis. The assessment showed the importance of the statistical procedure used to define the optimum composite fingerprint for sediment fingerprinting applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
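
    Option #2 corresponds to screening tracers with a Kruskal-Wallis H-test and then applying a discriminant analysis to the retained set. A toy Python sketch of that two-step selection (using scikit-learn's LDA as the discriminant step; the data, threshold, and function name are synthetic illustrations, not the paper's dataset):

      import numpy as np
      from scipy.stats import kruskal
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def composite_fingerprint(X, y, alpha=0.05):
          # Step 1: keep tracers that discriminate between source groups
          # under a Kruskal-Wallis H-test; step 2: fit a discriminant model
          # (LDA here) on the retained tracers.
          groups = [X[y == g] for g in np.unique(y)]
          keep = [j for j in range(X.shape[1])
                  if kruskal(*[g[:, j] for g in groups]).pvalue < alpha]
          return keep, LinearDiscriminantAnalysis().fit(X[:, keep], y)

      # Synthetic example: 3 sources, 20 samples each, 10 candidate tracers.
      rng = np.random.default_rng(0)
      y = np.repeat([0, 1, 2], 20)
      X = rng.normal(size=(60, 10))
      X[:, 0] += y                      # tracer 0 made informative
      keep, lda = composite_fingerprint(X, y)
      print(keep, lda.score(X[:, keep], y))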

  13. A hybrid image fusion system for endovascular interventions of peripheral artery disease.

    PubMed

    Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien

    2018-07-01

    Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow without modifying it to benefit from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created based on a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), similarly to the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an on-going clinical study highlighted its major clinical added value on intraoperative parameters. No image fusion system has been proposed yet for endovascular procedures of PAD in lower extremities. More globally, such a navigation system, combining image fusion from different 2D and 3D image sources, is novel in the field of endovascular procedures.

  14. Computer Based Procedures for Field Workers - FY16 Research Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to commercialize INL’s CBP system.

  15. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    A main task within the SpecInfo project is to develop interpretation tools that can handle a greater share of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague, verbal rules are managed by introducing linguistic variables. The next step was the development of automatic rule-generating procedures. We combined and extended the IDIOTS algorithm with the set-theory-based algorithm of Blaffert. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.

  16. Enrichment of human bone marrow aspirates for low-density mononuclear cells using a haemonetics discontinuous blood cell separator.

    PubMed

    Raijmakers, R; de Witte, T; Koekman, E; Wessels, J; Haanen, C

    1986-01-01

    Isopycnic density floatation centrifugation has been proven to be a suitable technique to enrich bone marrow aspirates for clonogenic cells on a small scale. We have tested a Haemonetics semicontinuous blood cell separator in order to process large volumes of bone marrow with minimal bone marrow manipulation. The efficacy of isopycnic density floatation was tested in a one-step and a two-step procedure. Both procedures showed a recovery of about 20% of the nucleated cells and 1-2% of the erythrocytes. The enrichment of clonogenic cells in the one-step procedure appeared superior to the two-step enrichment, in which buffy coat cells were first separated. The recovery of clonogenic cells was 70 and 50%, respectively. Repopulation capacity of the low-density cell fraction containing the clonogenic cells was excellent after autologous reinfusion (6 cases) and allogeneic bone marrow transplantation (3 cases). Fast enrichment of large volumes of bone marrow aspirates with low-density cells containing the clonogenic cells by isopycnic density floatation centrifugation can be done safely using a Haemonetics blood cell separator.

  17. Searching regional rainfall homogeneity using atmospheric fields

    NASA Astrophysics Data System (ADS)

    Gabriele, Salvatore; Chiaravalloti, Francesco

    2013-03-01

    The correct identification of homogeneous areas in regional rainfall frequency analysis is fundamental to ensure the best selection of the probability distribution and the regional model, so as to produce low bias and low root mean square error in quantile estimation. In an attempt to identify spatially homogeneous rainfall regions, the paper explores a new approach based on meteo-climatic information. The results are verified ex post using standard homogeneity tests applied to the annual maximum daily rainfall series. The first step of the proposed procedure selects two different types of homogeneous large regions: convective macro-regions, which contain high values of the Convective Available Potential Energy index, normally associated with convective rainfall events, and stratiform macro-regions, which are characterized by low values of the Q vector Divergence index, associated with dynamic instability and stratiform precipitation. These macro-regions are identified using Hot Spot Analysis to emphasize clusters of extreme values of the indexes. In the second step, inside each identified macro-region, homogeneous sub-regions are found using kriging interpolation on the mean direction of the Vertically Integrated Moisture Flux. To check the proposed procedure, two detailed examples of homogeneous sub-regions are examined.

  18. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in the physical space, for a given discrete ordinate, based on the time-marching technique and four-stage Runge-Kutta time integration, where Roe's second-order upwind difference scheme is used to discretize the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in the physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated by this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.
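
    The structure of steps 2 and 3 can be illustrated on a 1D model problem. A simplified Python sketch of one explicit step of f_t + v f_x = (f_eq - f)/tau on a periodic grid, using first-order upwinding and forward Euler in place of the paper's Roe second-order scheme and four-stage Runge-Kutta (all function names and shapes are illustrative):

      import numpy as np

      def bgk_step(f, f_eq, v, dx, dt, tau):
          # One explicit step of f_t + v f_x = (f_eq - f)/tau on a periodic
          # 1D grid; f and f_eq have shape (n_velocities, n_cells).
          dfdx = np.empty_like(f)
          for i, vi in enumerate(v):
              if vi >= 0:   # upwind differencing follows the sign of v
                  dfdx[i] = (f[i] - np.roll(f[i], 1)) / dx
              else:
                  dfdx[i] = (np.roll(f[i], -1) - f[i]) / dx
          return f + dt * (-v[:, None] * dfdx + (f_eq - f) / tau)

      def moments(f, v, w):
          # Macroscopic density and momentum by discrete-ordinate quadrature
          # with weights w (the analogue of step 3 above).
          rho = (w[:, None] * f).sum(axis=0)
          return rho, (w[:, None] * v[:, None] * f).sum(axis=0)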

  19. MIRADS-2 Implementation Manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base, including capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation; the procedure for establishing the MIRADS Library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.

  20. Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization

    PubMed Central

    Melchor, Juan; Muñoz, Rafael; Rus, Guillermo

    2017-01-01

    Torsional mechanical waves have the capability to characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectrics-based transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms as well as the resonance frequency of a preliminary geometry, validated against a semi-analytical simplified model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is applied to maximize the detection of pathology defined in terms of changes in shear stiffness. The study collects different design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse and optimization procedures used to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353

  1. Transformer miniaturization for transcutaneous current/voltage pulse applications.

    PubMed

    Kolen, P T

    1999-05-01

    A general procedure for the design of a miniaturized step up transformer to be used in the context of surface electrode based current/voltage pulse generation is presented. It has been shown that the optimum secondary current pulse width is 4.5 tau, where tau is the time constant associated with the pulse forming network associated with the transformer/electrode interaction. This criteria has been shown to produce the highest peak to average current ratio for the secondary current pulse. The design procedure allows for the calculation of the optimum turns ratio, primary turns, and secondary turns for a given electrode load/tissue and magnetic core parameters. Two design examples for transformer optimization are presented.

  2. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology—structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview over this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087

  3. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology-structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview over this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
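
    The simulate-then-invert loop can be illustrated with a much smaller model than the two-step maze task. A Python sketch that simulates choices from a softmax rule and recovers its temperature parameter by maximum likelihood (all values synthetic; this is a stand-in for the paper's active-inference model, not a reimplementation of it):

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      values = np.array([[1.0, 0.2], [0.3, 0.9], [0.5, 0.5]])  # option values
      trials = rng.integers(0, len(values), size=500)
      beta_true = 3.0

      def choice_probs(beta):
          # Softmax choice rule with inverse temperature beta.
          z = np.exp(beta * values)
          return z / z.sum(axis=1, keepdims=True)

      # Simulate binary choices, then invert the model by maximum likelihood.
      p = choice_probs(beta_true)[trials]
      choices = (rng.random(len(trials)) < p[:, 1]).astype(int)

      def neg_log_lik(beta):
          pr = choice_probs(beta)[trials]
          return -np.log(pr[np.arange(len(trials)), choices]).sum()

      fit = minimize_scalar(neg_log_lik, bounds=(0.01, 20), method="bounded")
      print(f"true beta = {beta_true}, estimated beta = {fit.x:.2f}")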

  4. Platelet-rich plasma differs according to preparation method and human variability.

    PubMed

    Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut

    2012-02-15

    Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the level of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods (on blood samples from eight subjects with a mean age [and standard deviation] of 31.6 ± 10.9 years) were used: two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors for each method of separation differed significantly. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than platelet and white blood-cell concentrations produced by the so-called single-step PRPLP and the so-called two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. When the results of the three blood draws were compared with regard to the reliability of platelet and cell counts, wide intra-individual variations were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in platelet and cell numbers as well as levels of growth factors regardless of separation method.

  5. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1981-01-01

    Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
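
    A minimal Python sketch of the double linear damage rule bookkeeping described above (the Phase I/Phase II life tables would come from the analytical expressions; the function, load levels, and life values here are illustrative):

      def double_linear_damage(events, phase1_life, phase2_life):
          # events: sequence of (load_level, n_cycles); phase*_life map a
          # load level to the cycles that complete Phase I / Phase II.
          # Returns the index of the event at which failure is predicted.
          damage, phase = 0.0, 1
          for k, (level, n) in enumerate(events):
              life = phase1_life[level] if phase == 1 else phase2_life[level]
              damage += n / life
              if damage >= 1.0:
                  if phase == 2:
                      return k              # Phase II sum reached unity
                  # Phase I complete: convert the excess cycles to a
                  # Phase II cycle ratio and continue summing.
                  damage = (damage - 1.0) * life / phase2_life[level]
                  phase = 2
                  if damage >= 1.0:
                      return k
          return None                       # no failure within the history

      lives1 = {"high": 1.0e4, "low": 8.0e5}    # illustrative Phase I lives
      lives2 = {"high": 4.0e4, "low": 2.0e5}    # illustrative Phase II lives
      print(double_linear_damage([("high", 8000), ("low", 9.0e5)], lives1, lives2))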

  6. Step 6: Does Not Routinely Employ Practices, Procedures Unsupported by Scientific Evidence

    PubMed Central

    Goer, Henci; Sagady Leslie, Mayri; Romano, Amy

    2007-01-01

    Step 6 of the Ten Steps of Mother-Friendly Care addresses two issues: 1) the routine use of interventions (shaving, enemas, intravenous drips, withholding food and fluids, early rupture of membranes, and continuous electronic fetal monitoring); and 2) the optimal rates of induction, episiotomy, cesareans, and vaginal births after cesarean. Rationales for compliance and systematic reviews are presented. PMID:18523680

  7. Neuromuscular disease classification system

    NASA Astrophysics Data System (ADS)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases and 58 structural features that the human eye cannot see, based on the assumption that the biopsy is considered as a graph, where the nodes are represented by each fiber, and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by the human visual inspection improves the categorization of atrophic patterns.
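
    The graph-based features treat the biopsy as a fiber-adjacency graph. A small Python sketch using networkx (the descriptors shown are generic examples; the paper's 58 structural features are not specified here, and the function name is illustrative):

      import networkx as nx

      def fiber_graph_features(adjacent_pairs, n_fibers):
          # Nodes are fibers; an edge joins two fibers that touch in the
          # segmented biopsy image.
          G = nx.Graph()
          G.add_nodes_from(range(n_fibers))
          G.add_edges_from(adjacent_pairs)
          degrees = [d for _, d in G.degree()]
          return {
              "mean_degree": sum(degrees) / n_fibers,
              "clustering": nx.average_clustering(G),
              "components": nx.number_connected_components(G),
          }

      print(fiber_graph_features([(0, 1), (1, 2), (2, 0), (3, 4)], n_fibers=5))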

  8. An Assessment Program Designed To Improve Communication Instruction through a Competency-Based Core Curriculum.

    ERIC Educational Resources Information Center

    Aitken, Joan E.; Neer, Michael R.

    This paper provides an example procedure used to design and install a program of assessment to improve communication instruction through a competency-based core curriculum at a mid-sized, urban university. The paper models the various steps in the process, and includes specific tests, forms, memos, course description, sources, and procedures which…

  9. Integrating Program Theory and Systems-Based Procedures in Program Evaluation: A Dynamic Approach to Evaluate Educational Programs

    ERIC Educational Resources Information Center

    Grammatikopoulos, Vasilis

    2012-01-01

    The current study attempts to integrate parts of program theory and systems-based procedures in educational program evaluation. The educational program that was implemented, called the "Early Steps" project, proposed that physical education can contribute to various educational goals apart from the usual motor skills improvement. Basic…

  10. A variation-perturbation method for atomic and molecular interactions. I - Theory. II - The interaction potential and van der Waals molecule for Ne-HF

    NASA Astrophysics Data System (ADS)

    Gallup, G. A.; Gerratt, J.

    1985-09-01

    The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.

  11. Band Structure Simulations of the Photoinduced Changes in the MgB₂:Cr Films.

    PubMed

    Kityk, Iwan V; Fedorchuk, Anatolii O; Ozga, Katarzyna; AlZayed, Nasser S

    2015-04-02

    An approach to the description of the photoinduced nonlinear optical effects in the superconducting MgB₂:Cr₂O₃ nanocrystalline film is proposed. It includes step-by-step molecular dynamics optimization of the two separate crystalline phases. The principal role for the photoinduced nonlinear optical properties is played by the nanointerface between the two phases. The first modified layer takes the form of a slightly modified perfect crystalline structure. The next layer is then added to the perfect crystalline structure, and the iteration procedure is repeated for the following layer. The total energy is treated here as the varied parameter. To avoid potential jumps at the borders, we have carried out an additional derivative procedure.

  12. Surgical anatomy of the supracarinal esophagus based on a minimally invasive approach: vascular and nervous anatomy and technical steps to resection and lymphadenectomy.

    PubMed

    Cuesta, Miguel A; van der Wielen, Nicole; Weijs, Teus J; Bleys, Ronald L A W; Gisbertz, Suzanne S; van Duijvendijk, Peter; van Hillegersberg, Richard; Ruurda, Jelle P; van Berge Henegouwen, Mark I; Straatman, Jennifer; Osugi, Harushi; van der Peet, Donald L

    2017-04-01

    During esophageal dissection and lymphadenectomy of the upper mediastinum by thoracoscopy in prone position, we observed a complex anatomy in which we had to resect the esophagus, dissect vessels and nerves, and take down some of these in order to perform a complete lymphadenectomy. In order to improve the quality of the dissection and standardization of the procedure, we describe the surgical anatomy and steps involved in this procedure. We retrospectively evaluated twenty consecutive and unedited videos of thoracoscopic esophageal resections. We recorded the vascular anatomy of the supracarinal esophagus, lymph node stations and the steps taken in this procedure. The resulting concept was validated in a prospective study including five patients. Seventy percent of patients in the retrospective study had one right bronchial artery (RBA) and two left bronchial arteries (LBA). The RBA was divided at both sides of the esophagus in 18 patients, with preservation of one LBA or at least one esophageal branch in all cases. Both recurrent laryngeal nerves were identified in 18 patients. All patients in the prospective study had one RBA and two LBA, and in four patients the RBA was divided at both sides of the esophagus and preserved one of the LBA. Lymphadenectomy was performed of stations 4R, 4L, 2R and 2L, with a median of 11 resected lymph nodes. Both recurrent laryngeal nerves were identified in four patients. In three patients, only the left recurrent nerve could be identified. Two patients showed palsy of the left recurrent laryngeal nerve, and one showed neuropraxia of the left vocal cord. Knowledge of the surgical anatomy of the upper mediastinum and its anatomical variations is important for standardization of an adequate esophageal resection and paratracheal lymphadenectomy with preservation of any vascularization of the trachea, bronchi and the recurrent laryngeal nerves.

  13. Building America's Industrial Revolution: The Boott Cotton Mills of Lowell, Massachusetts. Teaching with Historic Places.

    ERIC Educational Resources Information Center

    Stowell, Stephen

    1995-01-01

    Presents a high school unit about the U.S. Industrial Revolution featuring the Boott Cotton Mills of Lowell, Massachusetts. Includes student objectives, step-by-step instructional procedures, and discussion questions. Provides two maps, five illustrations, one photograph, and three student readings. (ACM)

  14. Improving government regulations: a guidebook for conservation and renewable energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neese, R. J.; Scheer, R. M.; Marasco, A. L.

    1981-04-01

    An integrated view of the Office of Conservation and Solar Energy (CS) policy making encompassing both administrative procedures and policy analysis is presented. Chapter One very briefly sketches each step in the development of a significant regulation, noting important requirements and participants. Chapter Two expands upon the Overview, providing the details of the process, the rationale and source of requirements, concurrence procedures, and advice on the timing and synchronization of steps. Chapter Three explains the types of analysis documents that may be required for a program. Regulatory Analyses, Environmental Impact Statements, Urban and Community Impact Analyses, and Regulatory Flexibility Analyses are all discussed. Specific information to be included in the documents and the circumstances under which the documents need to be prepared are explained. Chapter Four is a step-by-step discussion of how to do good analysis. Use of models and data bases is discussed. Policy objectives, alternatives, and decision making are explained. In Chapter Five, guidance is provided on identifying the public that would most likely be interested in the regulation, involving its constituents in a dialogue with CS, evaluating and handling comments, and engineering the final response. Chapter Six provides direction on planning the evaluation, monitoring the regulation's success once it has been promulgated, and allowing for constructive support or criticism from outside DOE. (MCW)

  15. Validity evidence for procedural competency in virtual reality robotic simulation, establishing a credible pass/fail standard for the vaginal cuff closure procedure.

    PubMed

    Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg

    2018-03-30

    The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could benefit the training of robotic surgical skills and be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test of procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery-learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. Internal consistency was assessed using Cronbach's alpha, and a composite score was calculated based on the metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score across all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed it (false negatives). Our study gathered validity evidence for a simulation-based test of procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.
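
    The two statistics at the heart of this design, Cronbach's alpha for the metric set and the contrasting-groups cutoff for the pass/fail standard, are straightforward to compute. A minimal sketch in Python follows; the score arrays are hypothetical stand-ins for the study's data, and the intersection-of-fitted-normals variant of the contrasting groups method is one common formulation, not necessarily the exact one used here.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_subjects, n_metrics) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    def contrasting_groups_cutoff(novice, expert):
        """Pass/fail score where the two fitted normal densities cross;
        assumes the group means are separated enough for a crossing to
        lie between them."""
        m1, s1 = np.mean(novice), np.std(novice, ddof=1)
        m2, s2 = np.mean(expert), np.std(expert, ddof=1)
        f = lambda x: norm.pdf(x, m1, s1) - norm.pdf(x, m2, s2)
        return brentq(f, m1, m2)

    # Hypothetical composite scores on a 0-100 scale (not the study's data).
    novice = np.array([55, 60, 58, 63, 70, 66, 61, 59, 72, 65, 64])
    expert = np.array([70, 74, 78, 75, 80, 77, 72, 79, 76, 81, 75])
    print(round(contrasting_groups_cutoff(novice, expert), 1))

    rng = np.random.default_rng(0)
    base = np.concatenate([novice, expert]).astype(float)
    metrics = base[:, None] + rng.normal(0, 5, size=(22, 6))  # 6 correlated metrics
    print(round(cronbach_alpha(metrics), 2))
    ```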

  16. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
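
    The core of a stiffness evaluation procedure is fitting polynomial internal-force models to sampled displacement-force pairs, after which evaluating the fitted polynomial replaces numerical integration and assembly. A minimal single-DOF sketch follows; the cubic element and its coefficients are hypothetical stand-ins for one finite element in the full E-STEP scheme.

    ```python
    import numpy as np

    # Hypothetical single-DOF element with true internal force
    # f(u) = k1*u + k2*u**2 + k3*u**3. Prescribe displacement samples,
    # record the internal forces (synthesized here), and least-squares
    # fit the polynomial coefficients element-by-element.
    k1, k2, k3 = 4.0, -1.5, 0.8
    u = np.linspace(-1.0, 1.0, 25)        # prescribed displacement samples
    f = k1*u + k2*u**2 + k3*u**3          # "measured" element internal force

    A = np.column_stack([u, u**2, u**3])  # polynomial basis (no constant term)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    print(coef.round(6))                  # recovers (k1, k2, k3); A @ coef
                                          # then bypasses re-assembly
    ```

    Because each element is fitted independently, this identification step parallelizes naturally over the full domain, which is the efficiency argument the abstract makes.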

  17. Treating the Tough Adolescent: A Family-Based, Step-by-Step Guide. The Guilford Family Therapy Series.

    ERIC Educational Resources Information Center

    Sells, Scott P.

    A model for treating difficult adolescents and their families is presented. Part 1 offers six basic assumptions about the causes of severe behavioral problems and presents the treatment model with guidelines necessary to address each of these six causes. Case examples highlight and clarify major points within each of the 15 procedural steps of the…

  18. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When eddy current cross terms exist (e.g., in oblique-plane EPI), 2D phase correction is needed to reduce Nyquist artifact effectively. Second, minuscule motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule motion-induced aliasing artifact are typically removed sequentially in two stages. Although two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, the residual artifacts may still be pronounced in oblique-plane EPI data or when eddy current cross terms exist. To address this challenge, here we report a new composite 2D phase correction procedure, which removes Nyquist artifact and minuscule motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical uses and research studies of DWI. PMID:27114342
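
    The elementary operation underlying any image-domain 2D phase correction is multiplication of each shot by the conjugate of an estimated 2D phase-error map before the shots are combined. The toy sketch below illustrates only that single-step idea with phase maps assumed known; it is not the authors' estimation algorithm, which must infer the maps from the data themselves.

    ```python
    import numpy as np

    def phase_correct_2d(shot_img, phase_map):
        """Remove an estimated 2D phase error from one interleaved shot
        (image domain), so shots combine without aliasing ghosts."""
        return shot_img * np.exp(-1j * phase_map)

    # Hypothetical combination of two shots with low-order 2D phase errors.
    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx] / ny
    truth = np.ones((ny, nx), dtype=complex)
    shots = [truth * np.exp(1j * (0.5 + 2.0*x + 1.0*y)),   # shot 1 error
             truth * np.exp(1j * (-0.3 - 1.0*x + 0.5*y))]  # shot 2 error
    maps  = [0.5 + 2.0*x + 1.0*y, -0.3 - 1.0*x + 0.5*y]    # assumed known here
    combined = sum(phase_correct_2d(s, p) for s, p in zip(shots, maps)) / 2
    print(np.allclose(combined, truth))  # True: phase-consistent combination
    ```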

  19. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
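
    The retrieval loop itself is a fixed-point iteration: recompute the sources and sinks from the current state, integrate one step, and repeat until the state stops changing. A generic sketch follows, with a toy relaxation step standing in for the full momentum and thermodynamic integration.

    ```python
    import numpy as np

    def iterate_to_steady_state(state, step, tol=1e-6, max_iter=500):
        """Generic reevaluation-integration loop: `step` maps the current
        state to a new state (one model time step with heating/cooling
        recomputed from the hydrometeor sources and sinks)."""
        for _ in range(max_iter):
            new_state = step(state)
            if np.max(np.abs(new_state - state)) < tol:  # steady state
                return new_state
            state = new_state
        raise RuntimeError("no steady state within max_iter iterations")

    # Toy stand-in for the model step: linear relaxation toward a profile.
    forcing = np.array([1.0, 0.5, 0.2])
    steady = iterate_to_steady_state(np.zeros(3),
                                     lambda s: s + 0.1 * (forcing - s))
    print(steady)  # converges to the forcing profile
    ```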

  20. Strategy Guideline. Proper Water Heater Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoeschele, M.; Springer, D.; German, A.

    2015-04-09

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  1. Strategy Guideline: Proper Water Heater Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoeschele, M.; Springer, D.; German, A.

    2015-04-01

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  2. CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas

    2015-09-01

    Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.

  3. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization can be very challenging, and may fail, when dealing with pairs of largely deformed images such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step by step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking a standard reference-updating approach as a benchmark. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
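
    At its core, generating the intermediate synthetic images reduces to interpolating the matched feature positions between the two extreme configurations and warping the reference image to each intermediate set. A sketch of the point-interpolation step follows, with hypothetical feature matches; the paper's NURBS-based warping function would then render each synthetic image from these point sets.

    ```python
    import numpy as np

    def intermediate_point_sets(src_pts, dst_pts, n_steps):
        """Linearly interpolate matched feature points between the
        reference and deformed images. Each intermediate set
        parameterizes a warp that renders one synthetic image, so DIC
        only ever registers mildly deformed image pairs."""
        src = np.asarray(src_pts, float)
        dst = np.asarray(dst_pts, float)
        ts = np.linspace(0.0, 1.0, n_steps + 2)[1:-1]  # exclude end states
        return [(1.0 - t) * src + t * dst for t in ts]

    # Hypothetical matched features (e.g., from a feature-based matcher).
    src = np.array([[10, 10], [10, 50], [50, 10], [50, 50]])
    dst = np.array([[12, 14], [ 8, 61], [58, 12], [63, 66]])
    for pts in intermediate_point_sets(src, dst, n_steps=3):
        print(pts.round(1))
    ```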

  4. Data Processing for Atmospheric Phase Interferometers

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Nessel, James A.; Morabito, David D.

    2009-01-01

    This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
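
    Block-size filtering of this kind amounts to subtracting a running block average from the phase time series so that slow system-induced effects are removed while faster atmospheric fluctuations survive. A sketch with a simulated series follows; the block length, drift model, and noise level are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def block_highpass(phase, block_size):
        """Remove slow, system-induced trends (e.g., satellite motion,
        thermal drift) by subtracting a running block average, leaving
        the faster atmospheric phase fluctuations."""
        kernel = np.ones(block_size) / block_size
        trend = np.convolve(phase, kernel, mode="same")  # edge effects at ends
        return phase - trend

    # Simulated 24 h series: slow sinusoidal drift + white atmospheric noise.
    t = np.arange(0, 24 * 3600, 10.0)                    # 10 s samples
    rng = np.random.default_rng(1)
    phase = 20 * np.sin(2 * np.pi * t / 86400) + rng.normal(0, 2, t.size)
    resid = block_highpass(phase, block_size=360)        # 1 h blocks
    print(resid.std())   # ~2: drift removed, fluctuations retained
    ```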

  5. DETERMINATION OF PESTICIDES AND PCB'S IN INDUSTRIAL AND MUNICIPAL WASTEWATERS

    EPA Science Inventory

    Steps in the procedure for the analysis of 25 chlorinated pesticides and polychlorinated biphenyls were studied. Two gas chromatographic columns and two detectors (electron capture and Hall electrolytic conductivity) were evaluated. Extractions were performed with two solvents (d...

  6. An innovative implementation of LCA within the EIA procedure: Lessons learned from two Wastewater Treatment Plant case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larrey-Lassalle, Pyrène, E-mail: pyrene.larrey-lassalle@irstea.fr; LGEI, Ecole des mines d'Alès, 6 avenue de Clavières, 30319 Alès Cedex; Catel, Laureline

    Life Cycle Assessment (LCA) has been identified in the literature as a promising tool to increase the performance of environmental assessments at different steps in the Environmental Impact Assessment (EIA) procedure. However, few publications have proposed a methodology for an extensive integration, and none have compared the results with existing EIA conclusions without LCA. This paper proposes a comprehensive operational methodology for implementing an LCA within an EIA. Based on a literature review, we identified four EIA steps that could theoretically benefit from LCA implementation, i.e., (a) the environmental comparison of alternatives, (b) the identification of key impacts, (c) the impact assessment, and (d) the impact of mitigation measures. For each of these steps, an LCA was implemented with specific goal and scope definitions that resulted in a specific set of indicators. This approach has been implemented in two contrasting Wastewater Treatment Plant (WWTP) projects and compared to existing EIA studies. The results showed that the two procedures, i.e., EIAs with or without inputs from LCA, led to different conclusions. The environmental assessments of alternatives and mitigation measures were not carried out in the original studies and showed that other less polluting technologies could have been chosen. Regarding the scoping step, the selected environmental concerns were essentially different. Global impacts such as climate change or natural resource depletion were not taken into account in the original EIA studies. Impacts other than those occurring on the project site (off-site impacts) were not assessed, either. All these impacts can be significant compared to those initially considered. On the other hand, unlike current LCA applications, EIAs usually address natural and technological risks and neighbourhood disturbances such as noises or odours, which are very important for the public acceptability of projects. Regarding the impact assessment, even if the conclusions of the EIAs with or without LCA were partially common for local on-site impacts, LCA gives crucial additional information on global and off-site impacts and highlights the processes responsible for them. Finally, for all EIA steps investigated, interest in LCA was demonstrated for both WWTP case studies. The feasibility in terms of skills, time and cost of such implementation has also been assessed. - Highlights: • An innovative methodology for a first-stage implementation of LCA in EIA is proposed. • Its applicability is demonstrated on two Wastewater Treatment Plant case studies. • The conclusions for the four EIA steps investigated differ with or without LCA. • LCA provides valuable additional information on 1) global and 2) off-site impacts. • LCA identifies pollution transfers towards a life cycle perspective.

  7. Treatment of congenital vascular disorders: classification, step program, and therapeutic procedures

    NASA Astrophysics Data System (ADS)

    Philipp, Carsten M.; Poetke, Margitta; Engel-Murke, Frank; Waldschmidt, J.; Berlien, Hans-Peter

    1994-02-01

    Because of the different step programs concerning preoperative diagnostics and the onset of therapy for the various types of congenital vascular disorders (CVD), a clear classification is important. One has to distinguish the vascular malformations, including the port wine stain, from the true hemangiomas, which are vascular tumors. As earlier classifications, mostly based on histological findings, showed little relevance to a clinical step program, we developed a descriptive classification that allows an early differentiation between the two groups of CVD. In most cases this can be done by a precise medical history of the onset and development of the disorder, a close look at the clinical signs, and duplex ultrasound and MRI diagnostics. With this protocol and the case-adapted use of different lasers and laser techniques, we have not seen any severe complications such as skin necrosis or nerve lesions.

  8. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, like the homotopy perturbation method.
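
    The method of steps converts a DDE into a chain of ODEs, each solved on one delay interval using the stored solution from the previous interval as the lagged term. The numerical sketch below uses a constant-delay example for brevity; the paper treats variable delays and solves each interval analytically with the DTM rather than numerically.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.interpolate import interp1d

    # Method of steps for y'(t) = -y(t - tau) with y(t) = 1 for t <= 0.
    tau, n_steps = 1.0, 4
    lag = lambda t: 1.0                     # history function on [-tau, 0]
    y_end = 1.0
    for k in range(n_steps):
        t0, t1 = k * tau, (k + 1) * tau     # current delay interval
        sol = solve_ivp(lambda t, y: [-float(lag(t - tau))],
                        (t0, t1), [y_end], rtol=1e-8, max_step=tau / 50)
        y_end = sol.y[0, -1]
        # the piece just solved becomes the lagged term of the next interval
        lag = interp1d(sol.t, sol.y[0], fill_value="extrapolate")
    print(y_end)   # y(4); the exact value for this example is 5/24 ~ 0.2083
    ```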

  9. The Fibrin slide assay for detecting urokinase activity in human fetal kidney cells

    NASA Technical Reports Server (NTRS)

    Sedor, K.

    1985-01-01

    The Fibrin Slide Technique of Hau C. Kwaan and Tage Astrup is discussed. This relatively simple assay involves two steps: the formation of an artificial clot and then the addition of an enzyme (urokinase) to dissolve the clot. The dissolving away of the clot is detected by the appearance of holes (lysis zones) in the stained clot. The procedure of Kwaan and Astrup is repeated, along with modifications and suggestions for improvements based on experience with the technique.

  10. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g., gait) without tracking markers. The system is composed of one fixed camera which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched with the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally we present some results of this process applied to a leg in motion.

  11. Method of fabricating porous silicon carbide (SiC)

    NASA Technical Reports Server (NTRS)

    Shor, Joseph S. (Inventor); Kurtz, Anthony D. (Inventor)

    1995-01-01

    Porous silicon carbide is fabricated according to techniques which result in a significant portion of nanocrystallites within the material in a sub-10-nanometer regime. Techniques are described for passivating porous silicon carbide that result in optoelectronic devices exhibiting brighter blue luminescence and improved qualities. Based on certain of these techniques, porous silicon carbide is used as a sacrificial layer for the patterning of silicon carbide. Porous silicon carbide is then removed from the bulk substrate by oxidation and other methods. The described two-step process is used to pattern bulk silicon carbide: selected areas of the wafer are made porous, and the porous layer is subsequently removed. The process to form porous silicon carbide exhibits dopant selectivity, and a two-step etching procedure is implemented for silicon carbide multilayers.

  12. Improving patient safety during insertion of peripheral venous catheters: an observational intervention study

    PubMed Central

    Kampf, Günter; Reise, Gesche; James, Claudia; Gittelbauer, Kirsten; Gosch, Jutta; Alpers, Birgit

    2013-01-01

    Background: Peripheral venous catheters are frequently used in hospitalized patients but increase the risk of nosocomial bloodstream infection. Evidence-based guidelines describe specific steps that are known to reduce infection risk. However, the degree of guideline implementation in clinical practice is not known. The aim of this study was to determine the use of specific steps for insertion of peripheral venous catheters in clinical practice and to implement a multimodal intervention aimed at improving both compliance and the optimum order of the steps. Methods: The study was conducted at University Hospital Hamburg. An optimum procedure for inserting a peripheral venous catheter was defined based on three evidence-based guidelines (WHO, CDC, RKI) including five steps with 1A or 1B level of evidence: hand disinfection before patient contact, skin antisepsis of the puncture site, no palpation of treated puncture site, hand disinfection before aseptic procedure, and sterile dressing on the puncture site. A research nurse observed and recorded procedures for peripheral venous catheter insertion for healthcare workers in four different departments (endoscopy, central emergency admissions, pediatrics, and dermatology). A multimodal intervention with 5 elements was established (teaching session, dummy training, e-learning tool, tablet and poster, and direct feedback), followed by a second observation period. During the last observation week, participants evaluated the intervention. Results: In the control period, 207 insertions were observed, and 202 in the intervention period. Compliance improved significantly for four of five steps (e.g., from 11.6% to 57.9% for hand disinfection before patient contact; p<0.001, chi-square test). Compliance with skin antisepsis of the puncture site was high before and after intervention (99.5% before and 99.0% after). Performance of specific steps in the correct order also improved (e.g., from 7.7% to 68.6% when three of five steps were done; p<0.001). The intervention was described as helpful by 46.8% of the participants, as neutral by 46.8%, and as disruptive by 6.4%. Conclusions: A multimodal strategy to improve both compliance with safety steps for peripheral venous catheter insertion and performance of an optimum procedure was effective and was regarded helpful by healthcare workers. PMID:24327944
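
    The before/after compliance comparison reported above is a standard chi-square test on a 2x2 contingency table. The sketch below reproduces the hand-disinfection comparison (11.6% of 207 insertions before vs. 57.9% of 202 after the intervention); the counts are rounded back from the published percentages, so the statistic is approximate.

    ```python
    from scipy.stats import chi2_contingency

    # Hand disinfection before patient contact, before vs. after intervention.
    before_yes = round(0.116 * 207)   # 24 compliant insertions (approx.)
    after_yes  = round(0.579 * 202)   # 117 compliant insertions (approx.)
    table = [[before_yes, 207 - before_yes],
             [after_yes,  202 - after_yes]]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, p={p:.2e}")   # p << 0.001, as reported
    ```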

  13. Otoplasty: sequencing the operation for improved results.

    PubMed

    Hoehn, James G; Ashruf, Salman

    2005-01-01

    After studying this article, the participant should be able to: 1. Understand the anatomy and embryology of the external ear. 2. Understand the anatomic causes of the prominent ear. 3. Understand the operative maneuvers used to shape the external ear. 4. Be able to sequence the otoplasty for consistent results. 5. Understand the possible complications of the otoplasty procedure. Correction of prominent ears is a common plastic surgical procedure. Proper execution of the surgical techniques is dependent on the surgeon's understanding of the surgical procedure. This understanding is best founded on an understanding of the historical bases for the operative steps and the execution of these operative steps in a logical fashion. This article describes the concept of sequencing the operation of otoplasty to produce predictable results combining the technical contributions from many authors. The historical, embryological, and anatomic bases for the operation are also discussed. Finally, the authors' preferred techniques are presented. Sequencing the steps in the preoperative assessment, preoperative planning, patient management, operative technique, and postoperative care will produce reproducible results for the attentive surgeon. Careful attention to the details of the operation of otoplasty will avoid many postoperative problems.

  14. Effectiveness of a five-step method for teaching clinical skills to students in a dental college in India.

    PubMed

    Virdi, Mandeep S; Sood, Meenakshi

    2011-11-01

    This study, conducted at the PDM Dental College and Research Institute, Haryana, India, developed a teaching method based upon the five-step method for teaching clinical skills proposed by the American College of Surgeons. The five-step teaching method was used by dental students to place fissure sealants as an initial clinical procedure. Sealant retention was used as an objective evaluation of the skill learnt by the students. Sealant retention was 92 percent at the six- and twelve-month evaluations and 90 percent at the eighteen-month evaluation. These results indicate that simple methods can be devised for teaching clinical skills and achieve high success rates in clinical procedures requiring multiple steps.

  15. The developmental processes for NANDA International Nursing Diagnoses.

    PubMed

    Scroggins, Leann M

    2008-01-01

    This study aims to provide a step-by-step procedural guideline for the development of a nursing diagnosis that meets the necessary criteria for inclusion in the NANDA International and NNN classification systems. The guideline is based on the processes developed by the Diagnosis Development Committee of NANDA International and includes the necessary processes for development of Actual, Wellness, Health Promotion, and Risk nursing diagnoses. Definitions of Actual, Wellness, Health Promotion, and Risk nursing diagnoses along with inclusion criteria and taxonomy rules have been incorporated into the guideline to streamline the development and review processes for submitted diagnoses. A step-by-step procedural guideline will assist the submitter to move efficiently and effectively through the submission process, resulting in increased submissions and enhancement of the NANDA International and NNN classification systems.

  16. Estimating Slope and Level Change in N = 1 Designs

    ERIC Educational Resources Information Center

    Solanas, Antonio; Manolov, Rumen; Onghena, Patrick

    2010-01-01

    The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series before assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and…
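
    A minimal numerical sketch of the general idea follows: remove the baseline trend from the whole series, then estimate slope change and level change separately. The data are hypothetical and the published estimator's details may differ from this simplification.

    ```python
    import numpy as np

    def slope_level_change(baseline, treatment):
        """Detrend the whole series by the baseline slope, then estimate
        slope change and level change between the two phases."""
        nA, nB = len(baseline), len(treatment)
        trend = np.polyfit(np.arange(1, nA + 1), baseline, 1)[0]
        detrended = (np.concatenate([baseline, treatment])
                     - trend * np.arange(1, nA + nB + 1))
        dA, dB = detrended[:nA], detrended[nA:]
        slope_change = np.polyfit(np.arange(1, nB + 1), dB, 1)[0]
        # level change: phase-mean difference after also removing the
        # remaining treatment-phase slope
        dB_flat = dB - slope_change * np.arange(1, nB + 1)
        return slope_change, dB_flat.mean() - dA.mean()

    base = np.array([3.0, 3.5, 4.1, 4.4, 5.0])    # hypothetical phase A
    treat = np.array([7.0, 8.2, 9.1, 10.3, 11.0]) # hypothetical phase B
    print(slope_level_change(base, treat))
    ```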

  17. Perception of School Safety of a Local School

    ERIC Educational Resources Information Center

    Massey-Jones, Darla

    2013-01-01

    This qualitative case study investigated the perception of school safety, what current policies and procedures were effective, and what policies and procedures should be implemented. Data were collected in two steps, by survey and focus group interview. Analysis determined codes that revealed several themes relevant to the perception of school…

  18. A flow-based synthesis of imatinib: the API of Gleevec.

    PubMed

    Hopkin, Mark D; Baxendale, Ian R; Ley, Steven V

    2010-04-14

    A concise, flow-based synthesis of Imatinib, a compound used for the treatment of chronic myeloid leukaemia, is described whereby all steps are conducted in tubular flow coils or cartridges packed with reagents or scavengers to effect clean product formation. An in-line solvent switching procedure was developed enabling the procedure to be performed with limited manual handling of intermediates.

  19. 14 CFR 302.37 - Waiver of procedural steps after hearing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    14 CFR § 302.37 (Subpart A, Oral Evidentiary Hearing Proceedings), Waiver of procedural steps after hearing: The parties to any proceeding may agree to waive any one or more of the procedural steps provided in § 302.29…

  20. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations

    PubMed Central

    Calfee, M. Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-01-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration. PMID:27362274

  1. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations.

    PubMed

    Calfee, M Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-12-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration.

  2. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    PubMed Central

    2012-01-01

    Background Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from hundreds of centuries old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples. PMID:23050842

  3. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases.

    PubMed

    Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria

    2012-10-10

    Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from hundreds of centuries old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples.

  4. Determination of novel brominated flame retardants and polybrominated diphenyl ethers in serum using gas chromatography-mass spectrometry with two simplified sample preparation procedures.

    PubMed

    Gao, Le; Li, Jian; Wu, Yandan; Yu, Miaohao; Chen, Tian; Shi, Zhixiong; Zhou, Xianqing; Sun, Zhiwei

    2016-11-01

    Two simple and efficient pretreatment procedures have been developed for the simultaneous extraction and cleanup of six novel brominated flame retardants (NBFRs) and eight common polybrominated diphenyl ethers (PBDEs) in human serum. The first sample pretreatment procedure was a quick, easy, cheap, effective, rugged, and safe (QuEChERS)-based approach. An acetone/hexane mixture was employed to isolate the lipid and analytes from the serum with a combination of MgSO4 and NaCl, followed by a dispersive solid-phase extraction (d-SPE) step using C18 particles as a sorbent. The second sample pretreatment procedure was based on solid-phase extraction. The sample extraction and cleanup were conducted directly on an Oasis HLB SPE column using 5% aqueous isopropanol, concentrated sulfuric acid, and 10% aqueous methanol, followed by elution with dichloromethane. The NBFRs and PBDEs were then detected using gas chromatography-negative chemical ionization mass spectrometry (GC-NCI MS). The methods were assessed for repeatability, accuracy, selectivity, limits of detection (LODs), and linearity. The results of spike recovery experiments in fetal bovine serum showed that average recoveries ranged from 77.9% to 128.8% with relative standard deviations (RSDs) from 0.73% to 12.37% for most of the analytes. The LODs for the analytes in fetal bovine serum ranged from 0.3 to 50.8 pg/mL except for decabromodiphenyl ethane. The proposed method was successfully applied to the determination of the 14 brominated flame retardants in human serum. The two pretreatment procedures described here are simple, accurate, and precise, and are suitable for the routine analysis of human serum. Graphical Abstract: Workflow of a QuEChERS-based approach (top) and an SPE-based approach (bottom) for the detection of PBDEs and NBFRs in serum.

  5. One Step Forward--Half a Step Back: A Status Report on Bias-Based Bullying of Asian American Students in New York City Schools

    ERIC Educational Resources Information Center

    Asian American Legal Defense and Education Fund, 2013

    2013-01-01

    In September 2008, Mayor Michael Bloomberg and former Schools Chancellor, Joel Klein announced Chancellor's Regulation A-832, which established policies and procedures on how New York City schools should respond to bias-based harassment, intimidation, and bullying in schools. The Asian American Legal Defense and Education Fund (AALDEF), the Sikh…

  6. Speak Out (K-8) [and] Election '80.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield.

    These two teaching guides contain step-by-step procedures for an election education program in which all Illinois school children vote for and elect a State animal. The program, mandated by the Illinois State Legislature, is intended to provide students with the unique opportunity to learn about the entire election process through actual voting…

  7. RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.

    ERIC Educational Resources Information Center

    Kershner, Keith M.

    Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…

  8. Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.

    PubMed

    Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A

    2017-01-01

    Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.

  9. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

    The historical background of 'equivalental algebra', an algebra-logical apparatus for describing neural network paradigms and algorithms, is considered; it unifies neural network (NN) theory, linear algebra, and generalized neurobiology, extended to the matrix case. A survey of 'equivalental models' of neural networks and associative memory is given, and new, modified matrix-tensor neuro-logic equivalental models (MTNLEMs) with double adaptive-equivalental weighting (DAEW) are proposed for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe processes in NNs both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computing process in an NN using the proposed MTNLEMs with DAEW reduces to two-step and multi-step algorithms with step-by-step matrix-tensor procedures (for SNIR) and procedures for determining space-dependent equivalental functions from two images (for SIR).

  10. Aboriginal and Torres Strait Islander community governance of health research: Turning principles into practice.

    PubMed

    Gwynn, Josephine; Lock, Mark; Turner, Nicole; Dennison, Ray; Coleman, Clare; Kelly, Brian; Wiggers, John

    2015-08-01

    Gaps exist in researchers' understanding of the 'practice' of community governance in relation to research with Aboriginal and Torres Strait Islander peoples. We examine Aboriginal community governance of two rural NSW research projects by applying principles-based criteria from two independent sources. One research project possessed a strong Aboriginal community governance structure and evaluated a 2-year healthy lifestyle program for children; the other was a 5-year cohort study examining factors influencing the mental health and well-being of participants. The criteria were drawn from the National Health and Medical Research Council of Australia's 'Values and ethics: guidelines for ethical conduct in Aboriginal and Torres Strait Islander research' and from the 'Ten principles relevant to health research among Indigenous Australian populations' described by experts in the field. The lessons learned are as follows. Adopt community-based participatory research constructs. Develop clear governance structures and procedures at the beginning of the study and allow sufficient time for their establishment. Capacity-building must be a key component of the research. Ensure sufficient resources to enable community engagement, conduct of research governance procedures, capacity-building and results dissemination. The implementation of governance structures and procedures ensures research addresses the priorities of the participating Aboriginal and Torres Strait Islander communities, minimises risks and improves outcomes for the communities. Principles-based Aboriginal and Torres Strait Islander community governance of research is very achievable. Next steps include developing a comprehensive evidence base for appropriate governance structures and procedures, and consolidating a suite of practical guides for structuring clear governance in health research.

  11. SU-F-T-250: What Does It Take to Correctly Assess the High Failure Modes of an Advanced Radiotherapy Procedure Such as Stereotactic Body Radiation Therapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D; Vile, D; Rosu, M

    Purpose: To assess the correct implementation of the risk-based methodology of TG 100 to optimize quality management and patient safety procedures for stereotactic body radiation therapy. Methods: A detailed process map of the SBRT treatment procedure was generated by a team of three physicists with varying clinical experience at our institution to assess the potential high-risk failure modes. The probabilities of occurrence (O), severity (S) and detectability (D) for each potential failure mode in each step of the process map were assigned by these individuals independently on a scale from 1 to 10. The risk priority numbers (RPN) were computed and analyzed. The 30 highest-ranked potential failure modes from each physicist's analysis were then compared. Results: The RPN values assessed by the three physicists ranged from 30 to 300. The magnitudes of the RPN values from each physicist were different, and there was no concordance in the highest RPN values recorded independently by the three physicists. The 10 highest RPN values belonged to sub-steps of CT simulation, contouring and delivery in the SBRT process map. For these 10 highest RPN values, at least two physicists, irrespective of their length of experience, had concordance, but no general conclusions emerged. Conclusion: This study clearly shows that risk-based assessment of a clinical process map requires a great deal of preparation, group discussion, and participation by all stakeholders. One group alone, albeit physicists, cannot effectively implement the risk-based methodology proposed by TG 100. It should be a team effort in which the physicists can certainly play the leading role. This also corroborates the TG 100 recommendation that risk-based assessment of clinical processes is a multidisciplinary team effort.
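
    The FMEA arithmetic behind the analysis is simple: each rater's RPN is the product O x S x D, and the ranking disagreements arise when these products are compared across raters. A sketch with entirely hypothetical failure modes and scores (not the study's data):

    ```python
    import numpy as np

    # Hypothetical SBRT failure modes, each scored (O, S, D) on 1-10
    # scales by three independent raters, as in a TG 100 style FMEA.
    scores = {
        "wrong CT protocol":  [(3, 8, 5), (4, 7, 6), (2, 9, 4)],
        "contour transfer":   [(2, 9, 3), (5, 8, 5), (3, 8, 6)],
        "couch shift entry":  [(4, 7, 7), (3, 9, 5), (4, 8, 4)],
    }
    for mode, ratings in scores.items():
        rpns = [o * s * d for o, s, d in ratings]       # RPN = O * S * D
        print(f"{mode:18s} RPN per rater: {rpns}  mean: {np.mean(rpns):.0f}")
    # Rankings by per-rater RPN typically disagree, as the abstract reports.
    ```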

  12. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip.

    PubMed

    Davison, James A

    2015-01-01

    To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating on soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated on with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was performed. A surgical technique was developed using empirically derived machine parameters and a customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect, with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled, with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for aspiration flow (20 mL/min) and vacuum (400 mmHg), with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting-selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal-setting procedure button provides the opportunity to alternate instantaneously between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.

  13. Production and Purification of Recombinant Filamentous Bacteriophages Displaying Immunogenic Heterologous Epitopes.

    PubMed

    Deng, Lei; Linero, Florencia; Saelens, Xavier

    2016-01-01

    Viruslike particles often combine high physical stability with robust immunogenicity. Furthermore, when such particles are based on bacteriophages, they can be produced in high amounts at minimal cost and typically will require only standard biologically contained facilities. We provide protocols for the characterization and purification of recombinant viruslike particles derived from filamentous bacteriophages. As an example, we focus on filamentous Escherichia coli fd phage displaying a conserved influenza A virus epitope that is fused genetically to the N-terminus of the major coat protein of this phage. A step-by-step procedure to obtain a high-titer, pure recombinant phage preparation is provided. We also describe a quality control experiment based on a biological readout of the purified fd phage preparation. These protocols together with the highlighted critical steps may facilitate generic implementation of the provided procedures for the display of other epitopes by recombinant fd phages.

  14. Max-AUC Feature Selection in Computer-Aided Detection of Polyps in CT Colonography

    PubMed Central

    Xu, Jian-Wu; Suzuki, Kenji

    2014-01-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level. PMID:24608058
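
    A simplified sketch of AUC-driven forward feature selection with an SVM follows, using scikit-learn on synthetic imbalanced data. It implements only the forward-inclusion step with a no-gain stopping rule, whereas the paper's SFFS also performs conditional exclusion (floating) steps and, in one variant, a significance-based stopping test.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic imbalanced data standing in for a CADe candidate set.
    X, y = make_classification(n_samples=300, n_features=20,
                               weights=[0.9], random_state=0)

    def auc_of(features):
        """Cross-validated AUC of an RBF SVM on the given feature subset."""
        clf = SVC(kernel="rbf", gamma="scale")
        return cross_val_score(clf, X[:, features], y,
                               scoring="roc_auc", cv=5).mean()

    selected, remaining, best_auc = [], list(range(X.shape[1])), 0.0
    while remaining:                                 # greedy forward steps
        cand_auc, cand = max((auc_of(selected + [f]), f) for f in remaining)
        if cand_auc <= best_auc + 1e-4:              # stop when no gain
            break
        selected.append(cand)
        remaining.remove(cand)
        best_auc = cand_auc
    print(selected, round(best_auc, 3))
    ```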

  15. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    PubMed

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.

  16. A novel two-step procedure to expand Sca-1+ cells clonally

    PubMed Central

    Tang, Yao Liang; Shen, Leping; Qian, Keping; Phillips, M. Ian

    2007-01-01

    Resident cardiac stem cells (CSCs) are characterized by their capacity to self-renew in culture and are multipotent, forming normal cell types in hearts. CSCs were originally isolated directly from enzymatically digested hearts using stem cell markers. However, long exposure to enzymatic digestion can affect the integrity of stem cell markers on the cell surface and can also compromise stem cell function. Alternatively, resident CSCs can migrate from tissue explants and form cardiospheres in culture. However, fibroblast contamination can easily occur during CSC culture. To avoid these problems, we developed a two-step procedure: cells are first grown out from the tissue before the Sca-1+ cells are selected, and the selected cells are then cultured in cardiac fibroblast-conditioned medium, which avoids fibroblast overgrowth. PMID:17577582

  17. An index-based robust decision making framework for watershed management in a changing climate.

    PubMed

    Kim, Yeonjoo; Chung, Eun-Sung

    2014-03-01

    This study developed an index-based robust decision making framework for watershed management dealing with water quantity and quality issues in a changing climate. It consists of two parts: management alternative development and alternative analysis. The first part, alternative development, consists of six steps: 1) understand the watershed components and processes using the HSPF model; 2) identify the spatial vulnerability ranking using two indices, potential streamflow depletion (PSD) and potential water quality deterioration (PWQD); 3) quantify the residents' preferences on water management demands and calculate the watershed evaluation index, a weighted combination of PSD and PWQD; 4) set quantitative targets for water quantity and quality; 5) develop a list of feasible alternatives; and 6) eliminate the unacceptable alternatives. The second part, alternative analysis, has three steps: 7) analyze all selected alternatives with a hydrologic simulation model considering various climate change scenarios; 8) quantify the alternative evaluation index, including social and hydrologic criteria, utilizing multi-criteria decision analysis methods; and 9) prioritize all options based on a minimax regret strategy for robust decision making. This framework addresses the uncertainty inherent in climate models and climate change scenarios by utilizing the minimax regret strategy, a decision-making strategy under deep uncertainty, and thus derives a robust prioritization based on the multiple utilities of alternatives across scenarios. In this study, the proposed procedure was applied to a Korean urban watershed, which has suffered from streamflow depletion and water quality deterioration. Our application shows that the framework provides a useful watershed management tool for incorporating quantitative and qualitative information into the evaluation of various policies with regard to water resource planning and management. Copyright © 2013 Elsevier B.V. All rights reserved.
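
    The robust prioritization in step 9 reduces to a small computation once the alternative evaluation indices are tabulated per scenario. The following is a minimal sketch assuming a hypothetical utility matrix; the numbers are invented for illustration.

```python
# Minimal sketch of minimax-regret prioritization (step 9), assuming a
# utility matrix of alternatives (rows) x climate scenarios (columns);
# the numbers are purely illustrative.
import numpy as np

utilities = np.array([
    [0.62, 0.55, 0.48],   # alternative A1
    [0.58, 0.60, 0.52],   # alternative A2
    [0.70, 0.40, 0.45],   # alternative A3
])

# Regret = shortfall from the best achievable utility in each scenario.
regret = utilities.max(axis=0) - utilities
worst_regret = regret.max(axis=1)           # each alternative's worst case
ranking = np.argsort(worst_regret)          # robust priority order
print("max regret per alternative:", worst_regret)
print("robust ranking (best first):", ranking)
```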

  18. [Targeted methods for measuring patient satisfaction in a radiological center].

    PubMed

    Maurer, M H; Stein, E; Schreiter, N F; Renz, D M; Poellinger, A

    2010-11-01

    To investigate two event-oriented methods for evaluating patient satisfaction with radiological services like outpatient computed tomography (CT) examinations. 159 patients (55% men, 45% women) were asked to complete a questionnaire to provide information about their satisfaction with their examination. At first, patients were asked to spontaneously recall notably positive and negative aspects (so-called "critical incidents", critical incident technique = CIT) of the examination. Subsequently a flow chart containing all single steps of the examination procedure was shown to all patients. They were asked to point out the positive and negative aspects they perceived at each step (so-called sequential incident technique = SIT). The CIT-based part of the questionnaire yielded 356 comments (183 positive and 173 negative), which were assigned to one of four categories: interaction of staff with patient, procedure and organization, CT examination, and overall setting of the examination. Significantly more detailed comments regarding individual aspects of the CT examination were elicited in the second part of the survey, which was based on the SIT. There were 1413 statements with a significantly higher number of positive comments (n = 939, 66%) versus negative comments (n = 474, 34%; p < 0.001). The critical and sequential incident techniques are suitable to measure the subjective satisfaction with the delivery of radiological services such as CT examinations. Positive comments confirm the adequacy of the existing procedures, while negative comments provide direct information about how service quality can be improved. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Tetraethylene glycol promoted two-step, one-pot rapid synthesis of indole-3-[1-11C]acetic acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sojeong; Qu, Wenchao; Alexoff, David L.

    2014-12-12

    An operationally friendly, two-step, one-pot process has been developed for the rapid synthesis of carbon-11 labeled indole-3-acetic acid ([11C]IAA or [11C]auxin). By replacing an aprotic polar solvent with tetraethylene glycol, the nucleophilic [11C]cyanation and alkaline hydrolysis reactions were performed consecutively in a single pot, without a time-consuming intermediate purification step. The entire production time for this updated procedure is 55 min; the approach dramatically simplifies the synthesis and reduces the starting radioactivity required for a whole-plant imaging study.

  20. Z-DOC: a serious game for Z-plasty procedure training.

    PubMed

    Shewaga, Robert; Knox, Aaron; Ng, Gary; Kapralos, Bill; Dubrowski, Adam

    2013-01-01

    We present Z-DOC, a prototype serious game for training plastic surgery residents in the steps comprising the Z-plasty surgical procedure. Z-DOC employs touch-based interactions and promotes competition amongst multiple players/users, thereby promoting engagement and motivation. It is hypothesized that by learning the Z-plasty procedure in an interactive, engaging, and fun gaming environment, trainees will gain a much better understanding of the procedure than through traditional learning modalities.

  1. Effect of Saliva on the Tensile Bond Strength of Different Generation Adhesive Systems: An In-Vitro Study.

    PubMed

    Gupta, Nimisha; Tripathi, Abhay Mani; Saha, Sonali; Dhinsa, Kavita; Garg, Aarti

    2015-07-01

    Newer developments in bonding agents have led to a better understanding of the factors affecting adhesion at the composite-dentin interface and thereby to improved longevity of restorations. The present study evaluated the influence of salivary contamination on the tensile bond strength of different generations of adhesive systems (two-step etch-and-rinse, two-step self-etch and one-step self-etch) at different bonding stages to dentin, simulating situations where isolation is not maintained. Superficial dentin surfaces of 90 extracted human molars were randomly divided into three study groups (Group A: two-step etch-and-rinse adhesive system; Group B: two-step self-etch adhesive system; Group C: one-step self-etch adhesive system) according to the generation of adhesive used. According to the treatment conditions at the different bonding steps, each group was further divided into three subgroups of ten teeth each. After adhesive application, resin composite blocks were built on the dentin and subsequently light-cured. The teeth were then stored in water for 24 hours before the tensile bond strength was measured with a universal testing machine. The collected data were statistically analysed using one-way ANOVA and the Tukey HSD test. The one-step self-etch adhesive system showed the highest mean tensile bond strength, followed in descending order by the two-step self-etch and the two-step etch-and-rinse systems, in both uncontaminated and saliva-contaminated conditions. Unlike the one-step self-etch adhesive system, saliva contamination reduced the tensile bond strength of the two-step self-etch and two-step etch-and-rinse adhesive systems. Furthermore, both the bonding step at which contamination occurs and the type of adhesive appear to affect the bond strength of saliva-contaminated adhesives.
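
    The reported analysis pipeline (one-way ANOVA followed by the Tukey HSD post-hoc test) can be reproduced in outline as follows; this sketch assumes scipy and statsmodels and uses simulated bond-strength values, not the study's data.

```python
# Sketch of the reported analysis (one-way ANOVA, then Tukey HSD); the
# bond-strength values below are simulated, purely for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
group_a = rng.normal(12.0, 2.0, 30)  # two-step etch-and-rinse
group_b = rng.normal(14.0, 2.0, 30)  # two-step self-etch
group_c = rng.normal(16.0, 2.0, 30)  # one-step self-etch

f_stat, p_val = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 30 + ["B"] * 30 + ["C"] * 30
print(pairwise_tukeyhsd(values, labels))   # pairwise group comparisons
```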

  2. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model, combined with an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause at each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-applying equipment are required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing process, so the proposed method should be useful for large-scale problems.
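
    A purely in-silico analogue of this clause-by-clause strategy is sketched below: partial assignments are grown one clause at a time, and only branches satisfying the current clause are kept, mirroring the way solutions are "built in parts". The encoding and helper names are illustrative assumptions, not the wet-lab protocol.

```python
# Software analogue of the clause-by-clause strategy: instead of filtering
# a complete initial pool, grow partial assignments one clause at a time,
# keeping only branches that satisfy the current clause.
from itertools import product

def solve_sat(clauses):
    # A clause is a list of literals: +i means x_i, -i means NOT x_i (1-based).
    partials = [{}]                       # start with the empty assignment
    for clause in clauses:
        nxt = []
        for p in partials:
            free = [abs(l) for l in clause if abs(l) not in p]
            # Branch already satisfies this clause? Keep it unchanged.
            if any((l > 0) == p.get(abs(l)) for l in clause if abs(l) in p):
                nxt.append(p)
                continue
            # Otherwise extend over the clause's unassigned variables.
            for bits in product([False, True], repeat=len(free)):
                q = dict(p, **dict(zip(free, bits)))
                if any((l > 0) == q[abs(l)] for l in clause):
                    nxt.append(q)
        partials = nxt
    return partials

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(len(solve_sat([[1, 2], [-1, 3], [-2, -3]])), "satisfying branches")
```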

  3. Development of coring procedures applied to Si, CdTe, and CIGS solar panels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moutinho, H. R.; Johnston, S.; To, B.

    Most of the research on the performance and degradation of photovoltaic modules is based on macroscale measurements of device parameters such as efficiency, fill factor, open-circuit voltage, and short-circuit current. Our goal is to develop the capabilities to allow us to study the degradation of these parameters in the micro- and nanometer scale and to relate our results to performance parameters. To achieve this objective, the first step is to be able to access small samples from specific areas of the solar panels without changing the properties of the material. In this paper, we describe two coring procedures that we developed and applied to Si, CIGS, and CdTe solar panels. In the first procedure, we cored full samples, whereas in the second we performed a partial coring that keeps the tempered glass intact. The cored samples were analyzed by different analytical techniques before and after coring, at the same locations, and no damage during the coring procedure was observed.

  4. Development of coring procedures applied to Si, CdTe, and CIGS solar panels

    DOE PAGES

    Moutinho, H. R.; Johnston, S.; To, B.; ...

    2018-01-04

    Most of the research on the performance and degradation of photovoltaic modules is based on macroscale measurements of device parameters such as efficiency, fill factor, open-circuit voltage, and short-circuit current. Our goal is to develop the capabilities to allow us to study the degradation of these parameters in the micro- and nanometer scale and to relate our results to performance parameters. To achieve this objective, the first step is to be able to access small samples from specific areas of the solar panels without changing the properties of the material. In this paper, we describe two coring procedures that we developed and applied to Si, CIGS, and CdTe solar panels. In the first procedure, we cored full samples, whereas in the second we performed a partial coring that keeps the tempered glass intact. The cored samples were analyzed by different analytical techniques before and after coring, at the same locations, and no damage during the coring procedure was observed.

  5. NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs.

    PubMed

    Morisset, Dany; Dobnik, David; Hamels, Sandrine; Zel, Jana; Gruden, Kristina

    2008-10-01

    We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In a first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent-dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on transgenic contents in the range of 0.1-25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min is sufficient to achieve full amplification.

  6. NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs

    PubMed Central

    Morisset, Dany; Dobnik, David; Hamels, Sandrine; Žel, Jana; Gruden, Kristina

    2008-01-01

    We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In a first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent-dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on transgenic contents in the range of 0.1–25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min is sufficient to achieve full amplification. PMID:18710880

  7. A triangular thin shell finite element: Nonlinear analysis. [structural analysis

    NASA Technical Reports Server (NTRS)

    Thomas, G. R.; Gallagher, R. H.

    1975-01-01

    Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
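
    The combined incremental/iterative strategy can be illustrated on a one-degree-of-freedom analogue: a tangent-stiffness predictor per load increment followed by a single Newton-Raphson correction. The nonlinear "spring" below is a stand-in for the shell equations, not the element formulation itself.

```python
# Schematic sketch: one-step incremental (tangent stiffness) advance plus
# one Newton-Raphson correction per load step, on a 1-DOF nonlinear spring.
def internal_force(u):          # illustrative nonlinear response
    return 10.0 * u + 4.0 * u**3

def tangent_stiffness(u):       # d(internal_force)/du
    return 10.0 + 12.0 * u**2

u, n_steps, total_load = 0.0, 10, 40.0
for step in range(1, n_steps + 1):
    load = total_load * step / n_steps
    # Incremental (tangent stiffness) predictor.
    u += (load - internal_force(u)) / tangent_stiffness(u)
    # One Newton-Raphson corrector iteration.
    u += (load - internal_force(u)) / tangent_stiffness(u)
    print(f"step {step:2d}: load={load:5.1f}  u={u:.4f}  "
          f"residual={load - internal_force(u):+.2e}")
```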

  8. Oranges, Posters, Ribbons, and Lemonade: Concrete Computational Strategies for Dividing Fractions

    ERIC Educational Resources Information Center

    Kribs-Zaleta, Christopher M.

    2008-01-01

    This article describes how sixth-grade students developed concrete models to solve division of fractions story problems. Students developed separate two-step procedures to solve measurement and partitive problems, drawing on invented procedures for division of whole numbers. Errors also tended to be specific to the type of division problem…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Chris, E-mail: cyuan@uwm.edu; Wang, Endong; Zhai, Qiang

    Temporal homogeneity of inventory data is one of the major problems in life cycle assessment (LCA). Addressing temporal homogeneity of life cycle inventory data is important in reducing the uncertainties and improving the reliability of LCA results. This paper attempts to present a critical review and discussion of the fundamental issues of temporal homogeneity in conventional LCA and to propose a theoretical framework for temporal discounting in LCA. Theoretical perspectives for temporal discounting in life cycle inventory analysis are discussed first, based on the key elements of a scientific mechanism for temporal discounting. Then generic procedures for performing temporal discounting in LCA are derived and proposed based on the nature of the LCA method and the identified key elements of a scientific temporal discounting method. A five-step framework is proposed and reported in detail based on the technical methods and procedures needed to perform temporal discounting in life cycle inventory analysis. Challenges and possible solutions are also identified and discussed for the technical procedure and scientific accomplishment of each step within the framework. Highlights: a critical review of the temporal homogeneity problem of life cycle inventory data; a theoretical framework for performing temporal discounting on inventory data; and methods to accomplish each step of the temporal discounting framework.
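
    The framework is presented conceptually rather than as equations; as one hedged reading of "temporal discounting" of inventory data, the sketch below applies an exponential discount to emissions distributed over time. The flows and the discount rate are invented for illustration.

```python
# Hedged illustration only: exponential temporal discounting applied to
# emissions spread over years; values and rate are hypothetical.
emissions_by_year = {0: 120.0, 5: 80.0, 10: 60.0}  # kg CO2-eq, hypothetical
rate = 0.03                                         # illustrative discount rate

discounted = sum(e / (1.0 + rate) ** t for t, e in emissions_by_year.items())
raw = sum(emissions_by_year.values())
print(f"raw total: {raw:.1f}  discounted total: {discounted:.1f}")
```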

  10. Phytochemistry of cimicifugic acids and associated bases in Cimicifuga racemosa root extracts.

    PubMed

    Gödecke, Tanja; Nikolic, Dejan; Lankin, David C; Chen, Shao-Nong; Powell, Sharla L; Dietz, Birgit; Bolton, Judy L; van Breemen, Richard B; Farnsworth, Norman R; Pauli, Guido F

    2009-01-01

    Earlier studies reported serotonergic activity for cimicifugic acids (CAs) isolated from Cimicifuga racemosa. The discovery of strongly basic alkaloids, the cimipronidines, in the active extract partition, together with an evaluation of previously employed work-up procedures, led to the hypothesis of strong acid/base association in the extract. Re-isolation of the CAs was desired to permit further detailed studies. Based on the acid/base association hypothesis, a new separation scheme for the active partition was required that separates the acids from the associated bases. A new 5-HT(7) bioassay-guided work-up procedure was developed that concentrates the activity into one partition. The latter was subjected to a new two-step centrifugal partitioning chromatography (CPC) method, which applies a pH zone refinement gradient (pHZR CPC) to dissociate the acid/base complexes. The resulting CA fraction was subjected to a second CPC step. Fractions and compounds were monitored by (1)H NMR using a structure-based spin-pattern analysis facilitating dereplication of the known acids. Bioassay results were obtained for the pHZR CPC fractions and for purified CAs. A new CA was characterised. While none of the pure CAs was active, the serotonergic activity was concentrated in a single pHZR CPC fraction, which was subsequently shown to contain low levels of the potent 5-HT(7) ligand, N(omega)-methylserotonin. This study shows that CAs are not responsible for the serotonergic activity of black cohosh. New phytochemical methodology (pHZR CPC) and a sensitive dereplication method (LC-MS) led to the identification of N(omega)-methylserotonin as the serotonergic active principle. Copyright (c) 2009 John Wiley & Sons, Ltd.

  11. Checkout and Standard Use Procedures for the Mark III Space Suit Assembly

    NASA Technical Reports Server (NTRS)

    Valish, Dana J.

    2012-01-01

    The operational pressure range is the range over which the suit can be nominally operated for manned testing. The top end of the nominal operational pressure range is equivalent to 1/2 the proof pressure. Structural pressure is 1.5 times the specified test pressure for any given test. Proof pressure is the maximum unmanned pressure to which the suit was tested by the vendor prior to delivery. The maximum allowable working pressure (MAWP) is 90% of the proof pressure. The pressure system RVs are set to keep components below their MAWPs. If the suit is pressurized over its MAWP, the suit will be taken out of service and an in-depth inspection/review of the suit will be performed before the suit is put back in service. The procedures outlined in this document should be followed as written. However, the suit test engineer (STE) may make redline changes in real time, provided those changes are recorded in the anomaly section of the test data sheet. If technicians supporting suit build-up, check-out, and/or test execution believe that a procedure can be improved, they should notify their lead. If procedures are incorrect to the point of potentially causing hardware damage or affecting safety, bring the problem to the technician lead's and/or STE's attention and stop work until a solution (temporary or permanent) is authorized. Certain steps in the procedure are marked with a 'DV', for Designated Verifier. The Designated Verifier for this procedure is an Advanced Space Suit Technology Development Laboratory technician, not directly involved in performing the procedural steps, who will verify that the step was performed as stated. The steps to be verified by the DV were selected based on one or more of the following criteria: the step was deemed significant in ensuring the safe performance of the test, the data recorded in the step is of specific interest in monitoring the suit system operation, or the step has a strong influence on the successful completion of test objectives. Prior to all manned test activities, Advanced Suit Test Data Sheet (TDS) Parts A-E shall be completed to verify that the system and team are ready for test. Advanced Suit TDS Parts F-G shall be completed at the end of the suited activity. Appendix B identifies the appropriate Mark III suit emergency event procedures.
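
    The pressure limits quoted above are fixed ratios of the vendor proof pressure, so they reduce to simple arithmetic; the sketch below uses a hypothetical 10 psid proof pressure purely to show the relationships.

```python
# The pressure limits described above are fixed ratios of proof pressure;
# the 10 psid proof value here is hypothetical, purely to show the arithmetic.
proof = 10.0                      # psid, hypothetical vendor proof pressure
mawp = 0.90 * proof               # maximum allowable working pressure
nominal_max = 0.50 * proof        # top of nominal operational range
print(f"proof={proof}, MAWP={mawp}, nominal operational max={nominal_max}")

# Structural pressure applies per test: 1.5 x the specified test pressure.
test_pressure = 4.0               # psid, hypothetical
print(f"structural pressure for this test: {1.5 * test_pressure}")
```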

  12. Formal analysis and automatic generation of user interfaces: approach, methodology, and an algorithm.

    PubMed

    Heymann, Michael; Degani, Asaf

    2007-04-01

    We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.

  13. The Mini-OAKHQOL for knee and hip osteoarthritis quality of life was obtained following recent shortening guidelines.

    PubMed

    Guillemin, Francis; Rat, Anne-Christine; Goetz, Christophe; Spitz, Elisabeth; Pouchot, Jacques; Coste, Joël

    2016-01-01

    To develop a short form of the knee and hip osteoarthritis quality of life questionnaire, the Mini-OAKHQOL, preserving the conceptual model and, as far as possible, the content and the psychometric properties of the original instrument. A two-step shortening procedure was used: (1) a consensus Delphi method, with a panel of patients and another of professionals independently asked to select items and (2) a nominal group, where patients, professionals, and methodologists reached consensus on the final selection of items, using information from the panels and from modern measurement and classical test theory analyses. The psychometric properties of the Mini-OAKHQOL were assessed in an independent population-based sample of 581 subjects with knee or hip osteoarthritis. The two-step shortening procedure resulted in a 20-item questionnaire. Confirmatory factor analysis showed preservation of the original five-dimensional structure. Rasch analyses showed the unidimensionality and invariance by sex, age, and joint of the main dimensions. Convergent validity, reproducibility, and internal consistency were similar to or better than those of the original OAKHQOL. The 20-item Mini-OAKHQOL has good psychometric properties and can be used for the measurement of quality of life in subjects with osteoarthritis of the lower limbs. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. GLOBAL PROPERTIES OF FULLY CONVECTIVE ACCRETION DISKS FROM LOCAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodo, G.; Ponzo, F.; Rossi, P.

    2015-08-01

    We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.

  15. A covalent modification for graphene by adamantane groups through two-step chlorination-Grignard reactions

    NASA Astrophysics Data System (ADS)

    Sun, Xuzhuo; Li, Bo; Lu, Mingxia

    2017-07-01

    Chemical modification of graphene is a promising approach to tailoring its properties for end applications. Herein we designed a two-step route through chlorination-Grignard reactions to covalently decorate the surface of graphene with adamantane groups. The chemically modified graphene was characterized by Raman spectroscopy, atomic force microscopy, and X-ray photoelectron spectroscopy. Chlorination of graphene occurred rapidly, and the substitution of the chlorine atoms on chlorinated graphene by the adamantane Grignard reagent afforded adamantane-functionalized graphene in almost quantitative yield. The adamantane groups were found to be covalently bonded to the graphene carbons. The present two-step procedure may provide an effective and facile route for graphene modification with a variety of organic functional groups.

  16. The Synthesis of 2-acetyl-1,4-naphthoquinone: A Multi-step Synthesis.

    ERIC Educational Resources Information Center

    Green, Ivan R.

    1982-01-01

    Outlines two procedures for synthesizing 2-acetyl-1,4-naphthoquinone to compare the relative merits of the two pathways. The major objective of the exercise is to demonstrate that certain factors should be considered when selecting a synthetic pathway, including availability of starting materials, cost of reagents, number of steps involved,…

  17. Woodrow Wilson and the U.S. Ratification of the Treaty of Versailles. Lesson Plan.

    ERIC Educational Resources Information Center

    Pyne, John; Sesso, Gloria

    1995-01-01

    Presents a high school lesson plan on the struggle over ratification of the Treaty of Versailles and U.S. participation in the League of Nations. Includes a timeline of events, four primary source documents, and biographical portraits of two opposing senators. Provides student objectives and step-by-step instructional procedures. (CFR)

  18. Experimental Investigation of Air-Cooled Turbine Blades in Turbojet Engine. 7: Rotor-Blade Fabrication Procedures

    NASA Technical Reports Server (NTRS)

    Long, Roger A.; Esgar, Jack B.

    1951-01-01

    An experimental investigation was conducted to determine the cooling effectiveness of a wide variety of air-cooled turbine-blade configurations. The blades, which were tested in the turbine of a commercial turbojet engine that was modified for this investigation by replacing two of the original blades with air-cooled blades located diametrically opposite each other, are untwisted, have no aerodynamic taper, and have essentially the same external profile. The cooling-passage configuration is different for each blade, however. The fabrication procedures were varied and often unique. The blades were fabricated using methods most suitable for obtaining a small number of blades for use in the cooling investigations, and therefore not all the fabrication procedures would be directly applicable to production processes, although some of the ideas and steps might be useful. Blade shells were obtained by both casting and forming. The cast shells were either welded to the blade base or cast integrally with the base. The formed shells were attached to the base by a brazing method and two welding methods. Additional surface area was supplied in the coolant passages by the addition of fins or tubes that were brazed to the shell. A number of blades with special leading- and trailing-edge designs that provided added cooling to these areas were fabricated. The cooling effectiveness and purposes of the various blade configurations are discussed briefly.

  19. Design of a Compact Quad-Channel Diplexer

    NASA Astrophysics Data System (ADS)

    Xu, Jin

    2016-01-01

    This paper presents a compact quad-channel diplexer that uses two asymmetrically coupled shorted-stub-loaded stepped-impedance resonator (SSLSIR) dual-band bandpass filters (DB-BPFs) to replace the two single-band BPFs in a traditional BPF-based diplexer. Part of its impedance matching circuit is implemented using a three-element lowpass T-network to acquire the desired phase shift. Detailed design procedures are given to guide the diplexer design. The fabricated quad-channel diplexer occupies a compact circuit area of 0.168λg×0.136λg. High band-to-band isolation and wide stopband performance are achieved. Good agreement is shown between the simulated and measured results.

  20. The Multigrid-Mask Numerical Method for Solution of Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Ku, Hwar-Ching; Popel, Aleksander S.

    1996-01-01

    A multigrid-mask method for the solution of the incompressible Navier-Stokes equations in primitive variable form has been developed. The main objective is to apply this method, in conjunction with the pseudospectral element method, to solving flow past multiple objects. There are two key steps involved in calculating flow past multiple objects. The first step utilizes only Cartesian grid points. This homogeneous, or mask-method, step permits flow into the interior rectangular elements contained in objects, but with the restriction that the velocity for those Cartesian elements within and on the surface of an object should be small or zero. This step easily produces an approximate flow field on Cartesian grid points covering the entire flow field. The second, or heterogeneous, step corrects the approximate flow field to account for the actual shape of the objects by solving the flow field based on local coordinates surrounding each object and adapted to it. The noise occurring in data communication between the global (low-frequency) coordinates and the local (high-frequency) coordinates is eliminated by the multigrid method when the Schwarz Alternating Procedure (SAP) is implemented. Two-dimensional flow past circular and elliptic cylinders is presented to demonstrate the versatility of the proposed method. An interesting phenomenon is found: when a second elliptic cylinder is placed in the wake of the first, a traction force results in a negative drag coefficient.
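
    The first (mask) step amounts to suppressing the velocity in Cartesian cells covered by an object before the global solve. A hedged sketch, with invented geometry and field values, follows.

```python
# Hedged sketch of the first ("mask") step: on a uniform Cartesian grid,
# force the velocity to zero in cells inside an object before the global
# solve. Geometry and field are illustrative.
import numpy as np

nx = ny = 64
x, y = np.meshgrid(np.linspace(-2, 2, nx), np.linspace(-2, 2, ny))
u = np.ones((ny, nx))                 # uniform free-stream guess
inside = x**2 + y**2 <= 0.5**2        # circular cylinder mask

u[inside] = 0.0                       # mask step: no flow inside the body
print(f"{inside.sum()} of {u.size} cells masked")
# The second (heterogeneous) step would re-solve near the body on a
# body-fitted local grid and blend the two fields (Schwarz alternating).
```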

  1. SUPPORTING THE INDUSTRY BY DEVELOPING A DESIGN GUIDANCE FOR COMPUTER-BASED PROCEDURES FOR FIELD WORKERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; LeBlanc, Katya

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools, and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of the existing guidance is intended for control room CBP systems and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented at a high level, which leaves the designer to interpret what is meant by the guidance and how to implement it. The authors therefore developed design guidance specifically tailored to instructions that are carried out in the field.

  2. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
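
    For reference, the classical Benjamini-Hochberg (1995) linear step-up procedure against which the empirical Bayes approach is compared can be sketched in a few lines; the p-values are illustrative.

```python
# Sketch of the Benjamini-Hochberg (1995) linear step-up procedure used as
# the baseline; the p-values below are illustrative.
import numpy as np

def bh_stepup(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    # Largest k with p_(k) <= k*q/m; reject the k smallest p-values.
    below = np.nonzero(p[order] <= (np.arange(1, m + 1) * q / m))[0]
    if below.size == 0:
        return np.zeros(m, dtype=bool)
    k = below.max() + 1
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(bh_stepup(pvals, q=0.05))
```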

  3. Stepwise detection of recombination breakpoints in sequence alignments.

    PubMed

    Graham, Jinko; McNeney, Brad; Seillier-Moiseiwitsch, Françoise

    2005-03-01

    We propose a stepwise approach to identify recombination breakpoints in a sequence alignment. The approach can be applied to any recombination detection method that uses a permutation test and provides estimates of breakpoints. We illustrate the approach by analyses of a simulated dataset and alignments of real data from HIV-1 and human chromosome 7. The presented simulation results compare the statistical properties of one-step and two-step procedures. More breakpoints are found with a two-step procedure than with a single application of a given method, particularly for higher recombination rates. At higher recombination rates, the additional breakpoints were located at the cost of only a slight increase in the number of falsely declared breakpoints. However, a large proportion of breakpoints still go undetected. A makefile and C source code for phylogenetic profiling and the maximum chi2 method, tested with the gcc compiler on Linux and WindowsXP, are available at http://stat-db.stat.sfu.ca/stepwise/ (contact: jgraham@stat.sfu.ca).
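
    The permutation-test skeleton that such detection methods share can be sketched as follows; the clustering statistic here is a simple stand-in, not the maximum chi2 statistic of the paper.

```python
# Generic permutation-test skeleton: compare an observed clustering
# statistic against its distribution under random site permutations.
# The statistic and the toy sites are stand-ins, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def statistic(informative_sites):
    # Stand-in: how unevenly informative sites cluster along the alignment.
    gaps = np.diff(np.sort(informative_sites))
    return gaps.var()

observed_sites = np.array([3, 5, 8, 11, 60, 63, 67, 70])   # toy alignment
obs = statistic(observed_sites)
n_perm, count = 999, 0
for _ in range(n_perm):
    perm = rng.choice(100, size=observed_sites.size, replace=False)
    if statistic(perm) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(f"observed={obs:.2f}  permutation p={p_value:.3f}")
```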

  4. Widening Horizons: A Guide to Organizing Field Trips for Adult Students. Final Report.

    ERIC Educational Resources Information Center

    Lutheran Social Mission Society, Philadelphia, PA. Lutheran Settlement House.

    Based on a successful program for women conducted by Lutheran Settlement House in Philadelphia, this guide outlines step-by-step procedures for conducting educational field trips for students in adult basic education programs. The guide offers suggestions for identification of cultural, historical, and social resources that would provide valuable…

  5. Using Multiple-Stimulus without Replacement Preference Assessments to Increase Student Engagement and Performance

    ERIC Educational Resources Information Center

    Weaver, Adam D.; McKevitt, Brian C.; Farris, Allie M.

    2017-01-01

    Multiple-stimulus without replacement preference assessment is a research-based method for identifying appropriate rewards for students with emotional and behavioral disorders. This article presents a brief history of how this technology evolved and describes a step-by-step approach for conducting the procedure. A discussion of necessary materials…

  6. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    NASA Astrophysics Data System (ADS)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.

  7. A review of downscaling procedures - a contribution to the research on climate change impacts at city scale

    NASA Astrophysics Data System (ADS)

    Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan

    2016-04-01

    Humankind is now predominantly urban-based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change but also impact hot spots. Furthermore, climate change impacts are commonly managed at city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts at all levels (local, meso and global scale), as well as the inter-scale dependencies of those processes, should be subject to detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews of downscaling procedures classify the various methods according to their application (e.g., downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology which may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique, yet in the context of spatial interpolation procedures it is commonly classified as a deterministic technique, while kriging approaches are classified as stochastic. Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered synonymous, even though both could be seen as identical, since both refer to methods that handle input modelling factors as variables with certain probability distributions. In addition, recent development is moving towards multi-step methodologies containing deterministic and stochastic components. This evolution has led to the introduction of new terms such as hybrid or semi-stochastic approaches, which makes efforts to systematically classify downscaling methods into the previously defined categories even more challenging. This work presents a review of statistical downscaling procedures that classifies the methods in two steps. In the first step, we describe several techniques that produce a single climatic surface based on observations. The methods are classified into two categories using an approximation to the broadest consensual statistical terms: linear and non-linear methods. The second step covers techniques that use simulations to generate alternative surfaces corresponding to different realizations of the same processes. Those simulations are essential because real observational data are limited, and such procedures are crucial for modelling extremes. This work emphasises the link between statistical downscaling methods and the research of climate change impacts at city scale.
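
    As a concrete instance of the Polynomial Regression (Surface Trend Analysis) technique mentioned above, the sketch below fits a second-order trend surface to scattered observations by least squares; the data are synthetic.

```python
# Minimal sketch of polynomial regression ("surface trend analysis"):
# fit a second-order trend surface z(x, y) to scattered station values
# by least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
z = 2 + 3 * x - y + 0.5 * x * y + rng.normal(0, 0.05, 50)  # synthetic obs

# Design matrix for z ~ 1 + x + y + x^2 + xy + y^2.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```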

  8. Two-port robotic hysterectomy: a novel approach.

    PubMed

    Moawad, Gaby N; Tyan, Paul; Khalil, Elias D Abi

    2018-03-24

    The objective of the study was to demonstrate a novel technique for two-port robotic hysterectomy with a particular focus on the challenging portions of the procedure. The study is designed as a technical video, showing step-by-step a two-port robotic hysterectomy approach (Canadian Task Force classification level III). IRB approval was not required for this study. The benefits of minimally invasive surgery for gynecological pathology have been clearly documented in multiple studies. Patients had fewer medical and surgical complications postoperatively, better cosmesis and quality of life. Most gynecological surgeons require 3-5 ports for the standard gynecological procedure. Even though the minimally invasive multiport system provides an excellent safety profile, multiple incisions are associated with a greater risk for morbidity including infection, pain, and hernia. In the past decade, various new methods have emerged to minimize the number of ports used in gynecological surgery. The interventions employed were a two-port robotic hysterectomy, using a camera port plus one robotic arm, with a focus on salpingectomy and cuff closure. We describe a transvaginal and a transabdominal approach for salpingectomy and a novel method for cuff closure. The transvaginal and transabdominal techniques for salpingectomy for two-port robotic-assisted hysterectomy provide excellent tension and exposure for a safe procedure without the need for an extra port. We also describe a transvaginal technique to place the vaginal cuff on tension during closure. With the necessary set of skills on a carefully chosen patient, two-port robotic-assisted total laparoscopic hysterectomy is a feasible procedure.

  9. LMSS communication network design

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The architecture of the telecommunication network as the first step in the design of the LMSS system is described. A set of functional requirements including the total number of users to be served by the LMSS are hypothesized. The design parameters are then defined at length and are systematically selected such that the resultant system is capable of serving the hypothesized number of users. The design of the backhaul link is presented. The number of multiple backhaul beams required for communication to the base stations is determined. A conceptual procedure for call-routing and locating a mobile subscriber within the LMSS network is presented. The various steps in placing a call are explained, and the relationship between the two sets of UHF and S-band multiple beams is developed. A summary of the design parameters is presented.

  10. Multidimensional FEM-FCT schemes for arbitrary time stepping

    NASA Astrophysics Data System (ADS)

    Kuzmin, D.; Möller, M.; Turek, S.

    2003-05-01

    The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
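
    A compact one-dimensional illustration of the FCT construction, a diffusive low-order update corrected by Zalesak-limited antidiffusive fluxes, is given below for periodic advection of a step profile; the discretization choices are illustrative and not the paper's finite-element scheme.

```python
# 1D flux-corrected transport with Zalesak's limiter: low-order (upwind)
# update plus limited antidiffusive (Lax-Wendroff minus upwind) fluxes.
# Periodic advection of a step profile; parameters are illustrative.
import numpy as np

n, c = 100, 0.5                      # cells, Courant number
u = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)

for _ in range(40):
    up = np.roll(u, 1)               # left neighbor (periodic)
    f_low = c * up                   # upwind flux at each cell's left face
    f_high = c * (u + up) / 2 - c**2 * (u - up) / 2   # Lax-Wendroff flux
    a = f_high - f_low               # antidiffusive flux at the left face
    u_td = u - (np.roll(f_low, -1) - f_low)           # low-order update

    # Zalesak limiter: bound each cell by its local extrema.
    u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    a_in = np.maximum(a, 0) - np.minimum(np.roll(a, -1), 0)   # incoming
    a_out = np.maximum(np.roll(a, -1), 0) - np.minimum(a, 0)  # outgoing
    eps = 1e-12
    r_plus = np.minimum(1, (u_max - u_td) / (a_in + eps))
    r_minus = np.minimum(1, (u_td - u_min) / (a_out + eps))
    # Face factor depends on the sign of the flux at that face.
    cface = np.where(a >= 0,
                     np.minimum(r_plus, np.roll(r_minus, 1)),
                     np.minimum(np.roll(r_plus, 1), r_minus))
    a_lim = cface * a
    u = u_td - (np.roll(a_lim, -1) - a_lim)

print("min/max after transport:", u.min(), u.max())  # stays within [0, 1]
```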

  11. A three-image algorithm for hard x-ray grating interferometry.

    PubMed

    Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia

    2013-08-12

    A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method comprises an alternative post-processing approach to the conventional phase-stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with those of phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction-enhanced imaging technique.
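
    The underlying idea, three intensity samples pinning down an amplitude (absorption), a shift (refraction) and a width (scattering), can be shown generically by fitting a Gaussian response curve through three points in closed form. This is the generic mathematics under a Gaussian assumption, not the paper's exact equations.

```python
# Fit A*exp(-(x-mu)^2/(2*sigma^2)) through three points: in log space the
# model is a quadratic, so three samples determine it exactly.
import numpy as np

def gaussian_params(x, y):
    # ln y = a + b*x + c*x^2  with  sigma^2 = -1/(2c),  mu = b*sigma^2
    a, b, c = np.linalg.solve(np.column_stack([np.ones(3), x, x * x]),
                              np.log(y))
    sigma2 = -1.0 / (2.0 * c)
    mu = b * sigma2
    amp = np.exp(a + mu * mu / (2.0 * sigma2))
    return amp, mu, sigma2

x = np.array([-1.0, 0.0, 1.0])                     # three sampling positions
meas = 0.8 * np.exp(-(x - 0.1) ** 2 / (2 * 0.5))   # simulated measurements
print(gaussian_params(x, meas))                    # ~ (0.8, 0.1, 0.5)
```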

  12. Monitoring the defoliation of hardwood forests in Pennsylvania using LANDSAT. [gypsy moth surveys

    NASA Technical Reports Server (NTRS)

    Dottavio, C. L.; Nelson, R. F.; Williams, D. L. (Principal Investigator)

    1983-01-01

    An automated system for conducting annual gypsy moth defoliation surveys using LANDSAT MSS data and digital processing techniques is described. A two-step preprocessing procedure was developed that uses multitemporal data sets representing forest canopy conditions before and after defoliation to create a digital image in which all nonforest cover types are eliminated or masked out of a LANDSAT image that exhibits insect defoliation. A temporal window for defoliation assessment was identified and a statewide data base was established. A data management system to interface image analysis software with the statewide data base was developed and a cost benefit analysis of this operational system was conducted.

  13. Improved Silica-Guanidiniumthiocyanate DNA Isolation Procedure Based on Selective Binding of Bovine Alpha-Casein to Silica Particles

    PubMed Central

    Boom, René; Sol, Cees; Beld, Marcel; Weel, Jan; Goudsmit, Jaap; Wertheim-van Dillen, Pauline

    1999-01-01

    DNA purified from clinical cerebrospinal fluid and urine specimens by a silica-guanidiniumthiocyanate procedure frequently contained an inhibitor(s) of DNA-processing enzymes which may have been introduced by the purification procedure itself. Inhibition could be relieved by the use of a novel lysis buffer containing alpha-casein. When the novel lysis buffer was used, alpha-casein was bound by the silica particles in the first step of the procedure and eluted together with DNA in the last step, after which it exerted its beneficial effects for DNA-processing enzymes. In the present study we have compared the novel lysis buffer with the previously described lysis buffer with respect to double-stranded DNA yield (which was nearly 100%) and the performance of DNA-processing enzymes. PMID:9986822

  14. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
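
    The averaging example can be made concrete in a few lines: a global fit of all data versus a two-step fit that merges subset estimates weighted by their a priori variances. With equal-size independent subsets the two coincide exactly, as the sketch (with invented data) shows.

```python
# Toy version of the averaging example: global fit vs two-step merge fit
# with a priori (known-sigma) weights. Numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0                                  # known (a priori) data sigma
data = rng.normal(5.0, sigma, 120)
subsets = np.split(data, 4)

# Step 1: subset fits, each with its a priori variance sigma^2 / n_k.
means = np.array([s.mean() for s in subsets])
variances = np.array([sigma**2 / len(s) for s in subsets])

# Step 2: merge = inverse-variance weighted average (the correlated LS
# merge reduces to this when the subsets are independent).
w = 1.0 / variances
merged = np.sum(w * means) / np.sum(w)

print(f"global fit : {data.mean():.4f}")
print(f"two-step   : {merged:.4f}")          # identical for equal subsets
```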

  15. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version-controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.
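
    The data-driven step presentation can be caricatured as a list of steps whose status is a function of monitored plant values. The tag names and thresholds below are invented for illustration and do not reflect the patented system's design.

```python
# Hedged sketch of a data-driven procedure step: displayed status derived
# from live plant data. Names, tags and thresholds are invented.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Step:
    text: str
    satisfied: Callable[[Dict[str, float]], bool]

procedure = [
    Step("Verify RCS pressure > 2000 psig", lambda d: d["rcs_pressure"] > 2000),
    Step("Verify pump A running",           lambda d: d["pump_a_speed"] > 100),
]

plant_data = {"rcs_pressure": 2150.0, "pump_a_speed": 0.0}  # monitored values

for i, step in enumerate(procedure, 1):
    status = "COMPLETE" if step.satisfied(plant_data) else "PENDING"
    print(f"step {i}: {step.text:40s} [{status}]")
```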

  16. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive.

  17. Utilizing collagen membranes for guided tissue regeneration-based root coverage.

    PubMed

    Wang, Hom-Lay; Modarressi, Marmar; Fu, Jia-Hui

    2012-06-01

    Gingival recession is a common clinical problem that can result in hypersensitivity, pain, root caries and esthetic concerns. Conventional soft tissue procedures for root coverage require an additional surgical site, thereby causing additional trauma and donor site morbidity. In addition, the grafted tissues heal by repair, with formation of long junctional epithelium with some connective tissue attachment. Guided tissue regeneration-based root coverage was thus developed in an attempt to overcome these limitations while providing comparable clinical results. This paper addresses the biologic foundation of guided tissue regeneration-based root coverage, and describes the indications and contraindications for this technique, as well as the factors that influence outcomes. The step-by-step clinical techniques utilizing collagen membranes are also described. In comparison with conventional soft tissue procedures, the benefits of guided tissue regeneration-based root coverage procedures include new attachment formation, elimination of donor site morbidity, less chair-time, and unlimited availability and uniform thickness of the product. Collagen membranes, in particular, benefit from product biocompatibility with the host, while promoting chemotaxis, hemostasis, and exchange of gas and nutrients. Such characteristics lead to better wound healing by promoting primary wound coverage, angiogenesis, space creation and maintenance, and clot stability. In conclusion, collagen membranes are a reliable alternative for use in root coverage procedures. © 2012 John Wiley & Sons A/S.

  18. Design and implementation of fuzzy logic controllers. Thesis Final Report, 27 Jul. 1992 - 1 Jan. 1993

    NASA Technical Reports Server (NTRS)

    Abihana, Osama A.; Gonzalez, Oscar R.

    1993-01-01

    The main objectives of our research are to present a self-contained overview of fuzzy sets and fuzzy logic, develop a methodology for control system design using fuzzy logic controllers, and to design and implement a fuzzy logic controller for a real system. We first present the fundamental concepts of fuzzy sets and fuzzy logic. Fuzzy sets and basic fuzzy operations are defined. In addition, for control systems, it is important to understand the concepts of linguistic values, term sets, fuzzy rule base, inference methods, and defuzzification methods. Second, we introduce a four-step fuzzy logic control system design procedure. The design procedure is illustrated via four examples, showing the capabilities and robustness of fuzzy logic control systems. This is followed by a tuning procedure that we developed from our design experience. Third, we present two Lyapunov based techniques for stability analysis. Finally, we present our design and implementation of a fuzzy logic controller for a linear actuator to be used to control the direction of the Free Flight Rotorcraft Research Vehicle at LaRC.
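
    The abstract does not reproduce the four-step design procedure itself; as a point of reference, the following is a minimal sketch of the inference loop at the core of any such controller (fuzzify, fire rules, defuzzify). The membership parameters and rule table are invented for illustration and are not those of the LaRC controller.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def fuzzy_controller(error, d_error):
            """Minimal Mamdani-style controller: fuzzify, fire rules, defuzzify."""
            # Fuzzification: membership degrees in Negative/Zero/Positive terms
            e = {"N": tri(error, -2, -1, 0), "Z": tri(error, -1, 0, 1), "P": tri(error, 0, 1, 2)}
            de = {"N": tri(d_error, -2, -1, 0), "Z": tri(d_error, -1, 0, 1), "P": tri(d_error, 0, 1, 2)}
            # Rule base: (error term, d_error term) -> output singleton
            rules = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
                     ("Z", "N"): -0.5, ("Z", "Z"): 0.0, ("Z", "P"): 0.5,
                     ("P", "N"): 0.0, ("P", "Z"): 0.5, ("P", "P"): 1.0}
            # Inference (min for AND) and centroid defuzzification over singletons
            w = {k: min(e[k[0]], de[k[1]]) for k in rules}
            total = sum(w.values())
            return sum(w[k] * rules[k] for k in rules) / total if total > 0 else 0.0

        print(fuzzy_controller(0.6, -0.2))  # a single control action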

  19. Conventional and two step sintering of PZT-PCN ceramics

    NASA Astrophysics Data System (ADS)

    Keshavarzi, Mostafa; Rahmani, Hooman; Nemati, Ali; Hashemi, Mahdieh

    2018-02-01

    In this study, PZT-PCN ceramic was made via a sol-gel seeding method, and the effects of conventional sintering (CS) and two-step sintering (TSS) on microstructure, phase formation, density, and dielectric and piezoelectric properties were investigated. First, high-quality powder was achieved by the seeding method, in which a mixture of Co3O4 and Nb2O5 powders was added to the prepared PZT sol to form PZT-PCN gel. After drying and calcination, pyrochlore-free PZT-PCN powder was synthesized. Second, CS and TSS were applied to achieve dense ceramics. The optimum temperature for 2 h of conventional sintering was 1150 °C. Finally, the undesired ZrO2 phase formed in the CS procedure was removed successfully by the TSS procedure, and the dielectric and piezoelectric properties improved compared to CS. The best electrical properties were obtained for the sample sintered by TSS with an initial temperature of T1 = 1200 °C and a secondary temperature of T2 = 1000 °C held for 12 h.

  20. Critical evaluation of the ability of sequential extraction procedures to quantify discrete forms of selenium in sediments and soils.

    PubMed

    Wright, Michael T; Parker, David R; Amrhein, Christopher

    2003-10-15

    Sequential extraction procedures (SEPs) have been widely used to characterize the mobility, bioavailability, and potential toxicity of trace elements in soils and sediments. Although oft-criticized, these methods may perform best with redox-labile elements (As, Hg, Se) for which more discrete biogeochemical phases may arise from variations in oxidation number. We critically evaluated two published SEPs for Se for their specificity and precision by applying them to four discrete components in an inert silica matrix: soluble Se(VI) (selenate), Se(IV) (selenite) adsorbed onto goethite, elemental Se, and a metal selenide (FeSe; achavalite). These were extracted both individually and in a mixed model sediment. The more selective of the two procedures was modified to further improve its selectivity (SEP 2M). Both SEP 1 and SEP 2M quantitatively recovered soluble selenate but yielded incomplete recoveries of adsorbed selenite (64% and 81%, respectively). SEP 1 utilizes 0.1 M K2S2O8 to target "organically associated" Se, but this extractant also solubilized most of the elemental (64%) and iron selenide (91%) components of the model sediment. In SEP 2M, the Na2SO3 used in step III is effective in extracting elemental Se but also extracted 17% of the Se from the iron selenide, such that the elemental fraction would be overestimated should both forms coexist. Application of SEP 2M to eight wetland sediments further suggested that the Na2SO3 in step III extracts some organically associated Se, so a NaOH extraction was inserted beforehand to yield a further modification, SEP 2OH. Results using this five-step procedure suggested that the four-step SEP 2M could overestimate elemental Se by as much as 43% due to solubilization of organic Se. Although still imperfect in its selectivity, SEP 2OH may be the most suitable procedure for routine, accurate fractionation of Se in soils and sediments. However, the strong oxidant (NaOCl) used in the final step cannot distinguish between refractory organic forms of Se and pyritic Se that might form under sulfur-reducing conditions.

  1. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
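
    Weighted transfinite interpolation underlies the grid generation described above. A minimal sketch of the unweighted 2D form (a Coons patch) follows; GENIE's weighted, Bézier/spline-based variant is more elaborate.

        import numpy as np

        def coons_patch(bottom, top, left, right):
            """Basic (unweighted) transfinite interpolation of a 2D grid from four
            boundary curves. bottom/top: (nu, 2) arrays; left/right: (nv, 2) arrays,
            with matching corner points."""
            nu, nv = bottom.shape[0], left.shape[0]
            u = np.linspace(0, 1, nu)[:, None, None]   # (nu, 1, 1)
            v = np.linspace(0, 1, nv)[None, :, None]   # (1, nv, 1)
            # Linear blends of opposite boundaries minus the bilinear corner correction
            return ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
                    + (1 - u) * left[None, :, :] + u * right[None, :, :]
                    - ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                       + (1 - u) * v * top[0] + u * v * top[-1]))

        # Example: unit square with a curved bottom boundary
        u = np.linspace(0, 1, 21)
        bottom = np.column_stack([u, 0.1 * np.sin(np.pi * u)])
        top = np.column_stack([u, np.ones_like(u)])
        v = np.linspace(0, 1, 11)
        left = np.column_stack([np.zeros_like(v), v])
        right = np.column_stack([np.ones_like(v), v])
        grid = coons_patch(bottom, top, left, right)   # (21, 11, 2) grid coordinates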

  2. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset, and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models that appear repeatedly are clustered together, and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
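
    The paper's correlation metrics are not spelled out in the abstract; a common building block for clustering modes by eigenvector similarity is the Modal Assurance Criterion (MAC), sketched below under the assumption that plain MAC suffices (the authors' metrics additionally treat spatial aliasing and coalescent modes). The mode shapes are invented.

        import numpy as np

        def mac(phi1, phi2):
            """Modal Assurance Criterion between two (possibly complex) mode shapes:
            ~1 for well-correlated shapes, ~0 for unrelated ones."""
            return np.abs(np.vdot(phi1, phi2)) ** 2 / (
                np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

        phi_a = np.array([1.0, 0.8, 0.3])
        phi_b = np.array([0.98, 0.82, 0.28])   # a repeat estimate of the same mode
        phi_n = np.array([0.1, -0.9, 0.6])     # an unrelated (noise) mode
        print(mac(phi_a, phi_b))   # close to 1 -> same cluster
        print(mac(phi_a, phi_n))   # close to 0 -> Trashbox candidate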

  3. Formalizing the Austrian Procedure Catalogue: A 4-step methodological analysis approach.

    PubMed

    Neururer, Sabrina Barbara; Lasierra, Nelia; Peiffer, Karl Peter; Fensel, Dieter

    2016-04-01

    Due to the lack of an internationally accepted and adopted standard for coding health interventions, Austria has established its own country-specific procedure classification system - the Austrian Procedure Catalogue (APC). Even though the APC is an elaborate coding standard for medical procedures, it has shortcomings that limit its usability. In order to enhance usability and usefulness, especially for research purposes and e-health applications, we developed an ontologized version of the APC. In this paper we present a novel four-step approach for the ontology engineering process, which enables accurate extraction of relevant concepts for medical ontologies from written text. The proposed approach for formalizing the APC consists of the following four steps: (1) comparative pre-analysis, (2) definition analysis, (3) typological analysis, and (4) ontology implementation. The first step contained a comparison of the APC to other well-established or elaborate health intervention coding systems in order to identify strengths and weaknesses of the APC. In the second step, a list of definitions of medical terminology used in the APC was obtained. This list of definitions was used as input for Step 3, in which we identified the most important concepts to describe medical procedures using the qualitative typological analysis approach. The definition analysis as well as the typological analysis are well-known and effective methods used in social sciences, but not commonly employed in the computer science or ontology engineering domain. Finally, this list of concepts was used in Step 4 to formalize the APC. The pre-analysis highlighted the major shortcomings of the APC, such as the lack of formal definition, leading to implicitly available, but not directly accessible information (hidden data), or the poor procedural type classification. After performing the definition and subsequent typological analyses, we were able to identify the following main characteristics of health interventions: (1) Procedural type, (2) Anatomical site, (3) Medical device, (4) Pathology, (5) Access, (6) Body system, (7) Population, (8) Aim, (9) Discipline, (10) Technique, and (11) Body Function. These main characteristics were taken as input of classes for the formalization of the APC. We were also able to identify relevant relations between classes. The proposed four-step approach for formalizing the APC provides a novel, systematically developed, strong framework to semantically enrich procedure classifications. Although this methodology was designed to address the particularities of the APC, the included methods are based on generic analysis tasks, and therefore can be re-used to provide a systematic representation of other procedure catalogs or classification systems and hence contribute towards a universal alignment of such representations, if desired. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
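
    The convergence rate quoted above can be estimated as the slope of log(error) versus log(time step). A minimal sketch follows, with invented RMS temperature differences standing in for the paper's data.

        import numpy as np

        def observed_order(dts, errors):
            """Least-squares slope of log(error) vs log(dt): the empirical
            convergence rate (1.0 = first order)."""
            p, _ = np.polyfit(np.log(dts), np.log(errors), 1)
            return p

        # Hypothetical RMS temperature differences against a 1 s reference run
        dts = np.array([1800.0, 900.0, 450.0, 225.0])
        rmse = np.array([0.8, 0.6, 0.45, 0.34])   # illustrative numbers only
        print(observed_order(dts, rmse))          # ~0.4 -> slower than first order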

  5. Examination of discontinuities in hourly surface relative humidity in Canada during 1953-2003

    NASA Astrophysics Data System (ADS)

    van Wijngaarden, William A.; Vincent, Lucie A.

    2005-11-01

    Hourly values of relative humidity recorded at 75 stations across Canada were examined. Data were checked for possible discontinuities arising from changes in procedures and instruments. It was found that the replacement of the psychrometer by the dewcel produced a decreasing step in relative humidity at a number of stations. The historical records were closely examined to retrieve the dewcel installation date, and a procedure based on regression models was applied to determine whether it corresponds to a significant step. Results show that more stations experience a dewcel step in the winter than in the summer. Examination of the trends also reveals that the step often accentuates the decreasing trends originally observed during winter and spring. However, with the significant steps taken into account, it appears that relative humidity still decreased by several percent in the spring during 1953-2003 in western Canada. It seems that the southern and coastal stations are not as strongly affected by this change of instruments.
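
    A regression-based check for a step at a known changepoint, as described above, can be sketched as follows; the model form (linear trend plus step term) and the synthetic humidity series are assumptions for illustration, not the authors' exact regression models.

        import numpy as np

        def step_significance(y, change_idx):
            """Compare a trend-only model with a trend-plus-step model at a known
            changepoint (e.g., the dewcel installation date); returns F(1, n-3)."""
            n = len(y)
            t = np.arange(n, dtype=float)
            step = (t >= change_idx).astype(float)
            X0 = np.column_stack([np.ones(n), t])          # intercept + trend
            X1 = np.column_stack([np.ones(n), t, step])    # + step term
            rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
            rss0, rss1 = rss(X0), rss(X1)
            return (rss0 - rss1) / (rss1 / (n - X1.shape[1]))

        rng = np.random.default_rng(1)
        y = 70 - 0.01 * np.arange(600) + rng.normal(0, 1.5, 600)
        y[300:] -= 3.0   # a 3% decreasing step in relative humidity
        print(step_significance(y, 300))   # large F -> significant discontinuity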

  6. Behavior-based aggregation of land categories for temporal change analysis

    NASA Astrophysics Data System (ADS)

    Aldwaik, Safaa Zakaria; Onsted, Jeffrey A.; Pontius, Robert Gilmore, Jr.

    2015-03-01

    Comparison between two time points of the same categorical variable for the same study extent can reveal changes among categories over time, such as transitions among land categories. If many categories exist, then analysis can be difficult to interpret. Category aggregation is the procedure that combines two or more categories to create a single broader category. Aggregation can simplify interpretation, and can also influence the sizes and types of changes. Some classifications have an a priori hierarchy to facilitate aggregation, but an a priori aggregation might make researchers blind to important category dynamics. We created an algorithm to aggregate categories in a sequence of steps based on the categories' behaviors in terms of gross losses and gross gains. The behavior-based algorithm aggregates net gaining categories with net gaining categories and aggregates net losing categories with net losing categories, but never aggregates a net gaining category with a net losing category. The behavior-based algorithm at each step in the sequence maintains net change and maximizes swap change. We present a case study where data from 2001 and 2006 for 64 land categories indicate change on 17% of the study extent. The behavior-based algorithm produces a set of 10 categories that maintains nearly the original amount of change. In contrast, an a priori aggregation produces 10 categories while reducing the change to 9%. We offer a free computer program to perform the behavior-based aggregation.
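
    The sign rule at the heart of the algorithm, namely that only categories with the same direction of net change may be merged, can be sketched as follows. The category names and transition areas are invented, and the full algorithm's sequential step ordering (maintaining net change while maximizing swap change) is omitted.

        def net_changes(transitions):
            """transitions[(a, b)] = area moving from category a to b between dates."""
            cats = {c for pair in transitions for c in pair}
            gain = {c: sum(v for (a, b), v in transitions.items() if b == c and a != c)
                    for c in cats}
            loss = {c: sum(v for (a, b), v in transitions.items() if a == c and a != b)
                    for c in cats}
            return {c: gain[c] - loss[c] for c in cats}

        def aggregation_candidates(transitions):
            """Pairs eligible for merging: same sign of net change only."""
            net = net_changes(transitions)
            cats = sorted(net)
            return [(a, b) for i, a in enumerate(cats) for b in cats[i + 1:]
                    if net[a] * net[b] > 0]

        t = {("forest", "crop"): 12.0, ("crop", "urban"): 5.0, ("forest", "urban"): 3.0}
        print(net_changes(t))              # forest is a net loser; crop and urban gain
        print(aggregation_candidates(t))   # only the two net gainers may merge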

  7. Prioritizing and synthesizing evidence to improve the health care of girls and women living with female genital mutilation: An overview of the process.

    PubMed

    Stein, Karin; Hindin, Michelle J; Chou, Doris; Say, Lale

    2017-02-01

    Female genital mutilation (FGM) constitutes a harmful traditional practice that can have a profound impact on the health and well-being of girls and women who undergo the procedure. In recent years, due to international migration, healthcare providers worldwide are increasingly confronted with the need to provide adequate health care to this population. Recognizing this situation, the WHO recently developed the first evidence-based guidelines on the management of health complications from FGM. To inform the guideline recommendations, an expert-driven, two-step process was conducted. The first step consisted of developing and ranking a list of priority research questions for the evidence retrieval. The second step involved conducting a series of systematic reviews and qualitative data syntheses. In the present paper, we first provide the methodology used in the development and ranking of the research questions (step 1) and then detail the common methodology for each of the systematic reviews and qualitative evidence syntheses (step 2). © 2017 International Federation of Gynecology and Obstetrics. The World Health Organization retains copyright and all other rights in the manuscript of this article as submitted for publication.

  8. A grid generation and flow solution method for the Euler equations on unstructured grids

    NASA Astrophysics Data System (ADS)

    Anderson, W. Kyle

    1994-01-01

    A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios, allowing for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure that is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.
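
    The linear solve at each time step uses a Gauss-Seidel procedure. A plain dense-matrix sketch of the sweep is given below; the paper's vectorizable variant over unstructured grids is more involved, and the test matrix here is invented.

        import numpy as np

        def gauss_seidel(A, b, x0=None, sweeps=50):
            """Plain Gauss-Seidel sweeps for Ax = b (A should be diagonally dominant).
            Each unknown is updated in place using the newest available values."""
            n = len(b)
            x = np.zeros(n) if x0 is None else x0.astype(float).copy()
            for _ in range(sweeps):
                for i in range(n):
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([2.0, 4.0, 2.0])
        print(gauss_seidel(A, b))   # converges toward the exact solution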

  9. Study of Core Competency Elements and Factors Affecting Performance Efficiency of Government Teachers in Northeastern Thailand

    ERIC Educational Resources Information Center

    Chansirisira, Pacharawit

    2012-01-01

    The research aimed to investigate the core competency elements and the factors affecting the performance efficiency of the civil service teachers in the northeastern region, Thailand. The research procedure consisted of two steps. In the first step, the data were collected using a questionnaire with the reliability (Cronbach's Alpha) of 0.90. The…

  10. Markov Chains For Testing Redundant Software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
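
    The inertia idea, that the controlled system fails only after several consecutive control-program failures, can be illustrated with a small Monte Carlo sketch of such a chain; the per-cycle failure probability and the threshold k are invented parameters, not the experiment's.

        import random

        def prob_system_failure(p_fail, k, steps, trials=100_000, seed=7):
            """Monte Carlo estimate of the probability that the control program
            fails on k consecutive cycles (a shorter run is absorbed by inertia)."""
            rng = random.Random(seed)
            failures = 0
            for _ in range(trials):
                run = 0
                for _ in range(steps):
                    run = run + 1 if rng.random() < p_fail else 0
                    if run >= k:
                        failures += 1
                        break
            return failures / trials

        print(prob_system_failure(p_fail=1e-2, k=2, steps=1000))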

  11. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
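
    At step size 1, such a successive-approximations procedure coincides with the familiar EM update for a normal mixture. Below is a sketch of one relaxed step for a two-component mixture, with variances held fixed for brevity and the partially identified samples omitted; omega plays the role of the step size discussed above, and all data and starting values are invented.

        import numpy as np

        def relaxed_em_step(x, pi, mu, sigma, omega=1.0):
            """One successive-approximations step for a two-component normal mixture;
            omega in (0, 2) is the step size, omega = 1 being the plain EM update."""
            def pdf(m, s):
                return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
            # E-step: posterior responsibility of component 1 for each point
            p1 = pi * pdf(mu[0], sigma[0])
            r = p1 / (p1 + (1 - pi) * pdf(mu[1], sigma[1]))
            # M-step targets, then a deflected move of length omega toward them
            pi_target = r.mean()
            mu_target = np.array([(r * x).sum() / r.sum(),
                                  ((1 - r) * x).sum() / (1 - r).sum()])
            return pi + omega * (pi_target - pi), mu + omega * (mu_target - mu)

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 300)])
        pi, mu = 0.5, np.array([-1.0, 4.0])
        for _ in range(25):
            pi, mu = relaxed_em_step(x, pi, mu, sigma=np.array([1.0, 1.0]), omega=1.5)
        print(pi, mu)   # approaches the generating values 0.4 and (0, 3)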

  12. Pylorus preserving loop duodeno-enterostomy with sleeve gastrectomy - preliminary results

    PubMed Central

    2014-01-01

    Background Bariatric operations mostly combine a restrictive gastric component with a rerouting of the intestinal passage. The pylorus can thereby be alternatively preserved or excluded. With the aim of performing a “pylorus-preserving gastric bypass”, we present early results of a proximal postpyloric loop duodeno-jejunostomy associated with a sleeve gastrectomy (LSG), compared to results of a parallel, but distal, LSG with a loop duodeno-ileostomy as a two-step procedure. Methods 16 patients underwent either a two-step LSG with a distal loop duodeno-ileostomy (DIOS) as revisional bariatric surgery or a combined single-step operation with a proximal duodeno-jejunostomy (DJOS). Total small intestinal length was determined to account for inter-individual differences. Results Mean operative time for the second step of the DIOS operation was 121 min and 147 min for the combined DJOS operation. The overall intestinal length was 750.8 cm (range 600-900 cm) with a bypassed limb length of 235.7 cm in DJOS patients. The mean length of the common channel in DIOS patients measured 245.6 cm. Overall excess weight loss (%EWL) of the two-step DIOS procedure came to 38.31% and 49.60%, while DJOS patients experienced an %EWL of 19.75% and 46.53% at 1 and 6 months, respectively. No complication related to the duodeno-enterostomy occurred. Conclusions Loop duodeno-enterostomies with sleeve gastrectomy can be safely performed and may open new alternatives in bariatric surgery with the possibility of inter-individual adaptation. PMID:24725654

  13. Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2016-01-01

    Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of the fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel placed on the floor of a wind tunnel and subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correct for spatial non-uniformities and lamp brightness variation, and finally convert fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step, only a selected number of pixels were chosen as "virtual sensors", and a correlation-length-based buffet calculation procedure was applied to determine "modeled" force fluctuations. By progressively decreasing the number of virtual sensors, it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure, which has been used for the development of many space vehicles.

  14. Qualitative computer aided evaluation of dental impressions in vivo.

    PubMed

    Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H

    2006-01-01

    Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision, based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase, and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin and the mantle and occlusal surfaces). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated-measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.

  15. Automated array assembly

    NASA Technical Reports Server (NTRS)

    Williams, B. F.

    1976-01-01

    Manufacturing techniques are evaluated both by using expenses based on experience and by studying the basic cost factors of each step to evaluate expenses from a first-principles point of view. A formal cost accounting procedure is developed and used throughout the study for cost comparisons. The first test of this procedure is a comparison of its predicted costs for array module manufacturing with costs from a study based on experience factors. A manufacturing cost estimate for array modules of $10/W is based on present-day manufacturing techniques, expenses, and materials costs.

  16. Systems Maintenance Automated Repair Tasks (SMART)

    NASA Technical Reports Server (NTRS)

    Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek

    2010-01-01

    SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be usable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between the specification requirements of the hardware and the specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define the steps and sequences for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.

  17. Telephone-based Assessments to Minimize Missing Data in Longitudinal Depression Trials: A Project IMPACTS Study Report

    PubMed Central

    Claassen, Cindy; Kurian, Ben; Trivedi, Madhukar H.; Grannemann, Bruce D.; Tuli, Ekta; Pipes, Ronny; Preston, Anne Marie; Flood, Ariell

    2012-01-01

    Purpose Missing data in clinical efficacy and effectiveness trials continue to be a major threat to the validity of study findings. The purpose of this report is to describe methods developed to ensure completion of outcome assessments with public mental health sector subjects participating in a longitudinal, repeated-measures study for the treatment of major depressive disorder. We developed longitudinal assessment procedures that included telephone-based clinician interviews in order to minimize the missing data commonly encountered with face-to-face assessment procedures. Methods A pre-planned, multi-step strategy was developed to ensure completeness of data collection. The procedure included obtaining multiple pieces of patient contact information at baseline, careful education of both staff and patients concerning the purpose of assessments, establishing good patient rapport, and finally being flexible and persistent with phone appointments to ensure the completion of telephone-based follow-up assessments. A well-developed administrative and organizational structure was also put in place prior to study implementation. Results The assessment completion rate for the primary outcome was 96.8% for the 310 of 504 enrolled subjects who had completed 52 weeks of telephone-based follow-up assessments at the time of writing. Conclusion By utilizing telephone-based follow-up procedures and adapting our easy-to-use, pre-defined multi-step approach, researchers can maximize patient data retention in longitudinal studies. PMID:18761427

  18. Reconnaissance On Chi-Square Test Procedure For Determining Two Species Association

    NASA Astrophysics Data System (ADS)

    Marisa, Hanifa

    2008-01-01

    Determining the association of two species by using the chi-square test has been published. Applying this procedure to plant species at a certain location shows that the procedure may indicate associations that are not "ecological". Ten sampling units were made to record some weed species in Indralaya, South Sumatera. The Yates-corrected chi-square test, χt² = N(|ad − bc| − N/2)²/(mnrs) (Eq. 1), where a, b, c, d are the cell counts of the 2×2 presence/absence table and m, n, r, s are its marginal totals, applied to two of the weed species (Cleome sp and Eleusine indica) shows positive association, while ecologically, in nature, there is no relationship between them. Some alternatives are proposed for this problem: simplify the chi-square test steps, make further study to find out the ecological association, or, at last, ignore it.
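
    A minimal implementation of Eq. (1) for a 2×2 presence/absence table is sketched below; the cell counts are invented for illustration.

        def yates_chi_square(a, b, c, d):
            """Eq. (1): Yates-corrected chi-square for a 2x2 presence/absence table.
            a = both species present, b/c = only one present, d = both absent."""
            N = a + b + c + d
            m, n = a + b, c + d          # row marginal totals
            r, s = a + c, b + d          # column marginal totals
            return N * (abs(a * d - b * c) - N / 2) ** 2 / (m * n * r * s)

        # Ten sampling units: the two species co-occur in 4 of them
        print(yates_chi_square(a=4, b=2, c=1, d=3))   # compare with chi2(1) = 3.84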

  19. Deconvolution of complex differential scanning calorimetry profiles for protein transitions under kinetic control.

    PubMed

    Toledo-Núñez, Citlali; Vera-Robles, L Iraís; Arroyo-Maya, Izlia J; Hernández-Arana, Andrés

    2016-09-15

    A frequent outcome in differential scanning calorimetry (DSC) experiments carried out with large proteins is the irreversibility of the observed endothermic effects. In these cases, DSC profiles are analyzed according to methods developed for temperature-induced denaturation transitions occurring under kinetic control. In the one-step irreversible model (native → denatured), the characteristics of the observed single-peaked endotherm depend on the denaturation enthalpy and the temperature dependence of the reaction rate constant, k. Several procedures have been devised to obtain the parameters that determine the variation of k with temperature. Here, we have elaborated on one of these procedures in order to analyze more complex DSC profiles. Synthetic data for a heat capacity curve were generated according to a model with two sequential reactions; the temperature dependence of each of the two rate constants involved was determined, according to Eyring's equation, by two fixed parameters. It was then shown that our deconvolution procedure, making use of heat capacity data alone, permits extraction of the parameter values that were initially used. Finally, experimental DSC traces showing two and three maxima were analyzed and reproduced with relative success according to two- and four-step sequential models. Copyright © 2016 Elsevier Inc. All rights reserved.
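
    For the one-step irreversible model, the excess heat capacity can be generated directly from the rate constant and the scan rate. The sketch below uses an Arrhenius form for k(T) rather than the Eyring equation used in the paper, and all parameters are invented for illustration.

        import numpy as np

        R = 8.314  # J/(mol K)

        def dsc_one_step(T, A, Ea, dH, beta):
            """Excess heat capacity for native -> denatured under kinetic control,
            scanned at rate beta (K/s): Cp_excess = dH * k(T) * x(T) / beta."""
            k = A * np.exp(-Ea / (R * T))
            # Fraction still native: x(T) = exp(-(1/beta) * integral of k dT)
            x = np.exp(-np.cumsum(k) * (T[1] - T[0]) / beta)
            return dH * k * x / beta

        T = np.linspace(310, 360, 2000)
        cp = dsc_one_step(T, A=1e38, Ea=2.5e5, dH=4e5, beta=1.0)
        print(T[np.argmax(cp)])   # apparent Tm of the simulated endotherm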

  20. Using an intervention mapping approach to develop a discharge protocol for intensive care patients.

    PubMed

    van Mol, Margo; Nijkamp, Marjan; Markham, Christine; Ista, Erwin

    2017-12-19

    Admission into an intensive care unit (ICU) may result in long-term physical, cognitive, and emotional consequences for patients and their relatives. The care of the critically ill patient does not end upon ICU discharge; therefore, integrated and ongoing care during and after transition to the follow-up ward is pivotal. This study described the development of an intervention that responds to this need. Intervention Mapping (IM), a six-step theory- and evidence-based approach, was used to guide intervention development. The first step, a problem analysis, comprised a literature review, six semi-structured telephone interviews with former ICU-patients and their relatives, and seven qualitative roundtable meetings for all eligible nurses (i.e., 135 specialized and 105 general ward nurses). Performance and change objectives were formulated in step two. In step three, theory-based methods and practical applications were selected and directed at the desired behaviors and the identified barriers. Step four designed a revised discharge protocol taking into account existing interventions. Adoption, implementation and evaluation of the new discharge protocol (IM steps five and six) are in progress and were not included in this study. Four former ICU patients and two relatives underlined the importance of the need for effective discharge information and supportive written material. They also reported a lack of knowledge regarding the consequences of ICU admission. 42 ICU and 19 general ward nurses identified benefits and barriers regarding discharge procedures using three vignettes framed by literature. Some discrepancies were found. For example, ICU nurses were skeptical about the impact of writing a lay summary despite extensive evidence of the known benefits for the patients. ICU nurses anticipated having insufficient skills, not knowing the patient well enough, and fearing legal consequences of their writings. The intervention was designed to target the knowledge, attitudes, self-efficacy, and perceived social influence. Building upon IM steps one to three, a concept discharge protocol was developed that is relevant and feasible within current daily practice. Intervention mapping provided a comprehensive framework to improve ICU discharge by guiding the development process of a theory- and empirically-based discharge protocol that is robust and useful in practice.

  1. An efficient matrix-matrix multiplication based antisymmetric tensor contraction engine for general order coupled cluster.

    PubMed

    Hanrath, Michael; Engels-Putzka, Anna

    2010-08-14

    In this paper, we present an efficient implementation of general tensor contractions, which is part of a new coupled-cluster program. The tensor contractions used to evaluate the residuals in each coupled-cluster iteration are particularly important for the performance of the program. We developed a generic procedure which carries out contractions of two tensors irrespective of their explicit structure. It can handle coupled-cluster-type expressions of arbitrary excitation level. To make the contraction efficient without losing flexibility, we use a three-step procedure. First, the data contained in the tensors are rearranged into matrices, then a matrix-matrix multiplication is performed, and finally the result is backtransformed to a tensor. The current implementation is significantly more efficient than previous ones capable of treating arbitrarily high excitations.
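
    The three-step procedure can be mirrored almost literally with NumPy; the sketch below is a generic illustration of the rearrange-multiply-backtransform idea, not the authors' coupled-cluster code.

        import numpy as np

        def contract_via_gemm(t1, t2, axes1, axes2):
            """Contract t1 and t2 over the given axes by (1) permuting/reshaping both
            tensors to matrices, (2) one matrix-matrix multiplication, and
            (3) reshaping the product back to a tensor over the free indices."""
            free1 = [i for i in range(t1.ndim) if i not in axes1]
            free2 = [i for i in range(t2.ndim) if i not in axes2]
            # Step 1: rearrange data into matrices (free indices x contracted indices)
            m1 = t1.transpose(free1 + list(axes1)).reshape(
                int(np.prod([t1.shape[i] for i in free1])), -1)
            m2 = t2.transpose(list(axes2) + free2).reshape(
                -1, int(np.prod([t2.shape[i] for i in free2])))
            # Step 2: a single GEMM does all the arithmetic
            m = m1 @ m2
            # Step 3: backtransform the result to a tensor
            return m.reshape([t1.shape[i] for i in free1] + [t2.shape[i] for i in free2])

        a = np.random.rand(3, 4, 5)
        b = np.random.rand(4, 5, 6)
        c = contract_via_gemm(a, b, axes1=(1, 2), axes2=(0, 1))
        print(np.allclose(c, np.tensordot(a, b, axes=([1, 2], [0, 1]))))   # True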

  2. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  3. An impact of environmental changes on flows in the reach scale under a range of climatic conditions

    NASA Astrophysics Data System (ADS)

    Karamuz, Emilia; Romanowicz, Renata J.

    2016-04-01

    The present paper combines detection and identification of the causes of changes in flow regime at cross-sections along the Middle River Vistula reach using different methods. Two main experimental set-ups (designs) have been applied to study the changes: a moving three-year window and a low- and high-flow event-based approach. In the first experiment, a Stochastic Transfer Function (STF) model and a quantile-based statistical analysis of flow patterns were compared. These two methods are based on the analysis of changes in the STF model parameters and of standardised differences of flow quantile values. In the second experiment, in addition to the STF-based model, a 1-D distributed model, MIKE11, was applied. The first step of the procedure used in the study is to define the river reaches that have recorded information on land use and water management changes. The second task is to perform the moving-window analysis of standardised differences of flow quantiles and moving-window optimisation of the STF model for flow routing. The third step consists of an optimisation of the STF and MIKE11 models for high- and low-flow events. The final step is to analyse the results and relate the standardised quantile changes and model parameter changes to historical land use changes and water management practices. Results indicate that both models give a consistent assessment of changes in the channel for medium and high flows. ACKNOWLEDGEMENTS This research was supported by the Institute of Geophysics Polish Academy of Sciences through the Young Scientist Grant no. 3b/IGF PAN/2015.

  4. The feasibility of an efficient drug design method with high-performance computers.

    PubMed

    Yamashita, Takefumi; Ueda, Akihiko; Mitsui, Takashi; Tomonaga, Atsushi; Matsumoto, Shunji; Kodama, Tatsuhiko; Fujitani, Hideaki

    2015-01-01

    In this study, we propose a supercomputer-assisted drug design approach involving all-atom molecular dynamics (MD)-based binding free energy prediction after the traditional design/selection step. Because this prediction is more accurate than the empirical binding affinity scoring of the traditional approach, the compounds selected by the MD-based prediction should be better drug candidates. In this study, we discuss the applicability of the new approach using two examples. Although the MD-based binding free energy prediction has a huge computational cost, it is feasible with the latest 10-petaflop-scale computers. The supercomputer-assisted drug design approach also involves two important feedback procedures. The first feedback is generated from the MD-based binding free energy prediction step to the drug design step. While the experimental feedback usually provides binding affinities of tens of compounds at one time, the supercomputer allows us to simultaneously obtain the binding free energies of hundreds of compounds. Because the number of calculated binding free energies is sufficiently large, the compounds can be classified into different categories whose properties will aid in the design of the next generation of drug candidates. The second feedback, which occurs from the experiments to the MD simulations, is important to validate the simulation parameters. To demonstrate this, we compare the binding free energies calculated with various force fields to the experimental ones. The results indicate that the prediction will not be very successful if we use an inaccurate force field. By improving/validating such simulation parameters, the next prediction can be made more accurate.

  5. Cultivating Excellence: A Curriculum for Excellence in School Administration. V. School-Based Management.

    ERIC Educational Resources Information Center

    Lawson, John

    This report is the fifth in a series on cultivating excellence in education for the purpose of training and retraining school leaders of the 1990s. The role of school administrators, and especially building principals; the characteristic administrative functions; the step-by-step procedures for implementation; and the advantages and possible…

  6. Test Procedures for Semiconductor Random Access Memories

    DTIC Science & Technology

    1979-11-01

    of each cell exactly complement to each other, the read operations on the base cell in (g) of step 2 following operations ko S odd and in (p) of step...contents of Sko (these cells this address. Furthermore, when more than one contained I at test time and even if the con- cell is accessed then the output

  7. Manpower Information Manual. A Manual for Local Planning.

    ERIC Educational Resources Information Center

    Allred, Marcus D.; Myers, Christine F.

    The step-by-step procedures contained in this manual are intended to develop a simple information system that can be used to collect and process the best possible factual data on the manpower needs of the community served by an educational institution, so that long-range planning of vocational curriculum and guidance can be based on what the jobs…

  8. Stochastic modeling of filtrate alkalinity in water filtration devices: Transport through micro/nano porous clay based ceramic materials

    USDA-ARS?s Scientific Manuscript database

    Clay and plant materials such as wood are the raw materials used in manufacture of ceramic water filtration devices around the world. A step by step manufacturing procedure which includes initial mixing, molding and sintering is used. The manufactured ceramic filters have numerous pores which help i...

  9. Optimized synthesis of phosphorothioate oligodeoxyribonucleotides substituted with a 5′-protected thiol function and a 3′-amino group

    PubMed Central

    Aubert, Yves; Bourgerie, Sylvain; Meunier, Laurent; Mayer, Roger; Roche, Annie-Claude; Monsigny, Michel; Thuong, Nguyen T.; Asseline, Ulysse

    2000-01-01

    A new deprotection procedure enables a medium scale preparation of phosphodiester and phosphorothioate oligonucleotides substituted with a protected thiol function at their 5′-ends and an amino group at their 3′-ends in good yield (up to 72 OD units/µmol for a 19mer phosphorothioate). Syntheses of 3′-amino-substituted oligonucleotides were carried out on a modified support. A linker containing the thioacetyl moiety was manually coupled in two steps by first adding its phosphoramidite derivative in the presence of tetrazole followed by either oxidation or sulfurization to afford the bis-derivatized oligonucleotide bound to the support. Deprotection was achieved by treating the fully protected oligonucleotide with a mixture of 2,2′-dithiodipyridine and concentrated aqueous ammonia in the presence of phenol and methanol. This procedure enables (i) cleavage of the oligonucleotide from the support, releasing the oligonucleotide with a free amino group at its 3′-end, (ii) deprotection of the phosphate groups and the amino functions of the nucleic bases, as well as (iii) transformation of the 5′-terminal S-acetyl function into a dithiopyridyl group. The bis-derivatized phosphorothioate oligomer was further substituted through a two-step procedure: first, the 3′-amino group was reacted with fluorescein isothiocyanate to yield a fluoresceinylated oligonucleotide; the 5′-dithiopyridyl group was then quantitatively reduced to give a free thiol group which was then substituted by reaction with an Nα-bromoacetyl derivative of a signal peptide containing a KDEL sequence to afford a fluoresceinylated peptide–oligonucleotide conjugate. PMID:10637335

  10. Linezolid in late-chronic prosthetic joint infection caused by gram-positive bacteria.

    PubMed

    Cobo, Javier; Lora-Tamayo, Jaime; Euba, Gorane; Jover-Sáenz, Alfredo; Palomino, Julián; del Toro, Ma Dolores; Rodríguez-Pardo, Dolors; Riera, Melchor; Ariza, Javier

    2013-05-01

    Linezolid may be an interesting alternative for prosthetic joint infection (PJI) due to its bioavailability and its antimicrobial spectrum. However, experience in this setting is scarce. The aim of the study was to assess linezolid's clinical and microbiological efficacy, and also its tolerance. This was a prospective, multicenter, open-label, non-comparative study of 25 patients with late-chronic PJI caused by Gram-positive bacteria managed with a two-step exchange procedure plus 6 weeks of linezolid. Twenty-two (88%) patients tolerated linezolid without major adverse effects, although a global decrease in the platelet count was observed. Three patients were withdrawn because of major toxicity, which reversed after linezolid stoppage. Among patients who completed treatment, 19 (86%) demonstrated clinical and microbiological cure. Two patients presented with clinical and microbiological failure, and one showed clinical cure and microbiological failure. In conclusion, linezolid showed good results in chronic PJI managed with a two-step exchange procedure. Tolerance seems acceptable, though close surveillance is required. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    PubMed Central

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  12. An information-theoretic approach for the evaluation of surrogate endpoints based on causal inference.

    PubMed

    Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz

    2016-09-01

    In this work a new metric of surrogacy, the so-called individual causal association (ICA), is introduced using information-theoretic concepts and a causal inference model for a binary surrogate and true endpoint. The ICA has a simple and appealing interpretation in terms of uncertainty reduction and, in some scenarios, it seems to provide a more coherent assessment of the validity of a surrogate than existing measures. The identifiability issues are tackled using a two-step procedure. In the first step, the region of the parametric space of the distribution of the potential outcomes, compatible with the data at hand, is geometrically characterized. Further, in a second step, a Monte Carlo approach is proposed to study the behavior of the ICA on the previous region. The method is illustrated using data from the Collaborative Initial Glaucoma Treatment Study. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise. © 2016, The International Biometric Society.

  13. Control of PbI2 nucleation and crystallization: towards efficient perovskite solar cells based on vapor-assisted solution process

    NASA Astrophysics Data System (ADS)

    Yang, Chongqiu; Peng, Yanke; Simon, Terrence; Cui, Tianhong

    2018-04-01

    Perovskite solar cells (PSC) have outstanding potential to be low-cost, high-efficiency photovoltaic devices. The PSC can be fabricated by numerous techniques; however, the power conversion efficiency (PCE) for the two-step-processed PSC falls behind that of the one-step method. In this work, we investigate the effects of relative humidity (RH) and dry air flow on the lead iodide (PbI2) solution deposition process. We conclude that the quality of the PbI2 film is critical to the development of the perovskite film and the performance of the PSC device. Low RH and dry air flow used during the PbI2 spin coating procedure can increase supersaturation concentration to form denser PbI2 nuclei and a more suitable PbI2 film. Moreover, airflow-assisted PbI2 drying and thermal annealing steps can smooth transformation from the nucleation stage to the crystallization stage.

  14. Brick tunnel randomization and the momentum of the probability mass.

    PubMed

    Kuznetsova, Olga M

    2015-12-30

    The allocation space of an unequal-allocation permuted block randomization can be quite wide. The development of unequal-allocation procedures with a narrower allocation space, however, is complicated by the need to preserve the unconditional allocation ratio at every step (the allocation ratio preserving (ARP) property). When the allocation paths are depicted on the K-dimensional unitary grid, where allocation to the l-th treatment is represented by a step along the l-th axis, l = 1 to K, the ARP property can be expressed in terms of the center of the probability mass after i allocations. Specifically, for an ARP allocation procedure that randomizes subjects to K treatment groups in the ratio w1:⋯:wK, with w1 + ⋯ + wK = 1, the coordinates of the center of the mass are (w1·i, …, wK·i). In this paper, the momentum with respect to the center of the probability mass (expected imbalance in treatment assignments) is used to compare ARP procedures in how closely they approximate the target allocation ratio. It is shown that the two-arm and three-arm brick tunnel randomizations (BTR) are the ARP allocation procedures with the tightest allocation space among all allocation procedures with the same allocation ratio; the two-arm BTR is the minimum-momentum two-arm ARP allocation procedure. Resident probabilities of two-arm and three-arm BTR are analytically derived from the coordinates of the center of the probability mass; the existence of the respective transition probabilities is proven. The probability of deterministic assignments with BTR is found to be generally acceptable. Copyright © 2015 John Wiley & Sons, Ltd.
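
    The momentum of the probability mass can be estimated by simulation: accumulate, over random allocation paths, the distance of the treatment counts from the center (w1·i, …, wK·i). The sketch below does this for ordinary permuted-block randomization as a baseline (BTR's transition probabilities are beyond a short example); the block composition is invented.

        import numpy as np

        def momentum(block, n_blocks=4, trials=10_000, seed=3):
            """Expected distance of the allocation path from the center of the
            probability mass after each assignment, for permuted-block
            randomization with the given block composition."""
            rng = np.random.default_rng(seed)
            K = int(max(block)) + 1
            w = np.bincount(block, minlength=K) / len(block)
            steps = len(block) * n_blocks
            dist = np.zeros(steps)
            for _ in range(trials):
                seq = np.concatenate([rng.permutation(block) for _ in range(n_blocks)])
                counts = np.zeros(K)
                for i, treat in enumerate(seq, start=1):
                    counts[treat] += 1
                    dist[i - 1] += np.linalg.norm(counts - w * i)
            return dist / trials

        # 1:2 allocation in blocks of 3 (treatment 0 once, treatment 1 twice)
        print(momentum(np.array([0, 1, 1])))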

  15. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root-mean-square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models, were tested as alternative secondary models for the b′(P) and n(P) parameters, respectively. The process validation considered two-step and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion for the desired 5-log10 (5D) reduction; such reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
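
    A one-step nonlinear fit of the Weibull survival model, log10 S(t) = -b·t^n, and the corresponding time to a 5-log10 reduction can be sketched with SciPy as follows; the survival data are invented, not the paper's.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_survival(t, b, n):
            """Weibull model: log10 of the survival fraction at time t."""
            return -b * t ** n

        # Illustrative survival data at a single pressure level
        t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
        logS = np.array([0.0, -1.2, -2.0, -3.1, -3.8, -4.3])   # tailing (n < 1)

        (b, n), _ = curve_fit(log_survival, t, logS, p0=[1.0, 1.0])
        t5 = (5.0 / b) ** (1.0 / n)   # time to a 5-log10 (5D) reduction
        print(b, n, t5)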

  16. Portable Raman monitoring of modern cleaning and consolidation operations of artworks on mineral supports.

    PubMed

    Martínez-Arkarazo, I; Sarmiento, A; Maguregui, M; Castro, K; Madariaga, J M

    2010-08-01

    Any restoration performed on cultural heritage artworks must guarantee a low impact on the treated surfaces. Although completely risk-free methods do not exist, the use of tailor-made procedures and continuous monitoring by portable instrumentation is surely one of the best approaches to conducting a modern restoration process. In this work, portable Raman monitoring, sometimes combined with spectroscopic techniques providing the elemental composition, is the key analytical technique in the proposed three-step restoration protocol: (a) in situ analysis of the surface to be treated (original composition and degradation products/pollutants) and of the cleaning agents used as extractants, (b) a thermodynamic study of the species involved in the treatment in order to design a suitable restoration method, and (c) application and monitoring of the treatment. Two cleaning operations based on new technologies were studied and applied to two artworks on mineral supports: a wall painting affected by nitrate impact, and a black-crusted stone (chalk) altarpiece. Raman bands of nitrate and gypsum, respectively, decreased after the step-by-step operations in each case, which helped restorers decide when the treatment was concluded, thus avoiding any further damage to the treated surfaces of the artworks.

  17. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vile, D; Zhang, L; Cuttino, L

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, potential failure modes were determined, as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode's likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with failure modes associated with the highest risk addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
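
    A hedged illustration of the FMEA arithmetic described above; the failure modes and scores below are invented for illustration, not taken from the study:

```python
# Hedged sketch of FMEA scoring for a risk-based QA program: each failure
# mode gets occurrence (O), detection (D), and severity (S) scores, and
# the risk priority number is RPN = O * D * S. Entries are invented.
failure_modes = [
    {"step": "dose vial assay",    "O": 3, "D": 4, "S": 9},
    {"step": "catheter placement", "O": 2, "D": 6, "S": 8},
    {"step": "patient ID check",   "O": 1, "D": 2, "S": 10},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["D"] * fm["S"]

# Rank failure modes so the highest-risk ones get new controls first.
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["step"]:20s} RPN = {fm["RPN"]}')
```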

  18. Alternator insulation evaluation tests

    NASA Technical Reports Server (NTRS)

    Penn, W. B.; Schaefer, R. F.; Balke, R. L.

    1972-01-01

    Tests were conducted to predict the remaining electrical insulation life of a 60 KW homopolar inductor alternator following completion of NASA turbo-alternator endurance tests for SNAP-8 space electrical power systems application. The insulation quality was established for two alternators following completion of these tests. A step-temperature aging test procedure was developed for insulation life prediction and applied to one of the two alternators. Armature winding insulation life of over 80,000 hours for an average winding temperature of 248 degrees C was predicted using the developed procedure.
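
    A sketch of the kind of step-temperature life extrapolation such a procedure rests on, assuming a simple Arrhenius model; the aging points and constants are invented, not the SNAP-8 data:

```python
# Hedged sketch: Arrhenius-type extrapolation of insulation life from
# step-temperature aging data. All numbers are illustrative assumptions.
import numpy as np

# Fit log(life) = a + b / T from accelerated-aging points.
T = np.array([300.0, 320.0, 340.0]) + 273.15   # aging temperatures, K
life_h = np.array([8.0e3, 2.5e3, 9.0e2])       # observed life, hours

b_coef, a_coef = np.polyfit(1.0 / T, np.log(life_h), 1)

T_service = 248.0 + 273.15                     # service temperature, K
predicted = np.exp(a_coef + b_coef / T_service)
print(f"predicted life at 248 C: {predicted:.0f} h")
```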

  19. Zero-Based Budgeting Redux.

    ERIC Educational Resources Information Center

    Geiger, Philip E.

    1993-01-01

    Zero-based, programmatic budgeting involves four basic steps: (1) define what needs to be done; (2) specify the resources required; (3) determine the assessment procedures and standards to use in evaluating the effectiveness of various programs; and (4) assign dollar figures to this information. (MLF)

  20. Optical properties of m-plane GaN grown on patterned Si(112) substrates by MOCVD using a two-step approach

    NASA Astrophysics Data System (ADS)

    Izyumskaya, N.; Okur, S.; Zhang, F.; Monavarian, M.; Avrutin, V.; Özgür, Ü.; Metzner, S.; Karbaum, C.; Bertram, F.; Christen, J.; Morkoç, H.

    2014-03-01

    Nonpolar m-plane GaN layers were grown on patterned Si(112) substrates by metal-organic chemical vapor deposition (MOCVD). A two-step growth procedure was employed, involving a low-pressure (30 Torr) first step to ensure formation of the m-plane facet and a high-pressure (200 Torr) step to improve optical quality. The layers grown in two steps show improved optical quality: the near-bandedge photoluminescence (PL) intensity is about 3 times higher than that of layers grown at low pressure, and deep emission is considerably weaker. However, the emission intensity from m-GaN is still lower than that of polar and semipolar (1 100) reference samples grown under the same conditions. To shed light on this problem, the spatial distribution of optical emission over the c+ and c- wings of the nonpolar GaN/Si was studied by spatially resolved cathodoluminescence and near-field scanning optical microscopy.

  1. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
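
    A minimal sketch of the second (assignment) step, assuming a fixed set of extreme colors in place of those found by the min-W/max-M memories:

```python
# Hedged sketch: assign each pixel to the nearest "extreme" color using
# the Chebyshev (L-infinity) distance. The extreme colors are placeholders
# for those the lattice auto-associative memories would find.
import numpy as np

extremes = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [20, 20, 20]])

def segment(image):
    """image: (H, W, 3) uint8 RGB array -> (H, W) cluster labels."""
    pixels = image.reshape(-1, 1, 3).astype(int)
    # Chebyshev distance: max absolute difference over the RGB channels.
    d = np.abs(pixels - extremes[None, :, :]).max(axis=2)
    return d.argmin(axis=1).reshape(image.shape[:2])

demo = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(segment(demo))
```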

  2. Space-Based Identification of Archaeological Illegal Excavations and a New Automatic Method for Looting Feature Extraction in Desert Areas

    NASA Astrophysics Data System (ADS)

    Lasaponara, Rosa; Masini, Nicola

    2018-06-01

    The identification and quantification of disturbance at archaeological sites has generally been approached by visual inspection of optical aerial or satellite pictures. In this paper, we briefly summarize the state of the art of traditional satellite-based approaches to looting identification and propose a new automatic method for archaeological looting feature extraction (ALFEA). It is based on three steps: enhancement using spatial autocorrelation, unsupervised classification, and segmentation. ALFEA was applied to Google Earth images of two test areas selected in desert environs in Syria (Dura Europos) and in Peru (Cahuachi-Nasca). The reliability of ALFEA was assessed through field surveys in Peru and visual inspection for the Syrian case study. Results from the evaluation showed satisfactory performance for both test cases, with a success rate higher than 90%.

  3. Improving Pediatric Basic Life Support Performance Through Blended Learning With Web-Based Virtual Patients: Randomized Controlled Trial.

    PubMed

    Lehmann, Ronny; Thiessen, Christiane; Frick, Barbara; Bosse, Hans Martin; Nikendei, Christoph; Hoffmann, Georg Friedrich; Tönshoff, Burkhard; Huwendiek, Sören

    2015-07-02

    E-learning and blended learning approaches gain more and more popularity in emergency medicine curricula. So far, little data is available on the impact of such approaches on procedural learning and skill acquisition and their comparison with traditional approaches. This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment. A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation of simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored regarding adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded in all three measurements. Procedural knowledge of the intervention group was significantly superior to that of the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and temporal demands, and better procedural quality of PBLS in objective measures than did the control group. These aspects differed between the groups even at t1 (after VPs, prior to practical training). Self-assessments differed significantly only at t1 in favor of the intervention group. Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.

  4. Rapid, specific determination of iodine and iodide by combined solid-phase extraction/diffuse reflectance spectroscopy

    NASA Technical Reports Server (NTRS)

    Arena, Matteo P.; Porter, Marc D.; Fritz, James S.

    2002-01-01

    A new, rapid methodology for trace analysis using solid-phase extraction is described. The two-step methodology is based on the concentration of an analyte onto a membrane disk and on the determination, by diffuse reflectance spectroscopy, of the amount of analyte extracted on the disk surface. This method, which is adaptable to a wide range of analytes, has been used for monitoring ppm levels of iodine and iodide in spacecraft water. Iodine is used as a biocide in spacecraft water. For these determinations, a water sample is passed through a membrane disk by means of a 10-mL syringe attached to a disk holder assembly. The disk, a polystyrene-divinylbenzene composite, is impregnated with poly(vinylpyrrolidone) (PVP), which exhaustively concentrates iodine as a yellow iodine-PVP complex. The amount of concentrated iodine is then determined in only 2 s using a hand-held diffuse reflectance spectrometer, comparing the result with a calibration curve based on the Kubelka-Munk function. The same general procedure can be used to determine iodide levels after its facile and exhaustive oxidation to iodine by peroxymonosulfate (i.e., Oxone reagent). For samples containing both analytes, a two-step procedure can be used in which the iodide concentration is calculated from the difference in iodine levels before and after treatment of the sample with peroxymonosulfate. With this methodology, iodine and iodide levels in the 0.1-5.0 ppm range can be determined with a total workup time of approximately 60 s and an RSD of approximately 6%.
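
    The Kubelka-Munk function mentioned above maps diffuse reflectance R to a quantity roughly linear in surface concentration, F(R) = (1 - R)^2 / (2R). A sketch with invented calibration points:

```python
# Hedged sketch of a Kubelka-Munk calibration; reflectance and
# concentration values are invented for illustration.
import numpy as np

def kubelka_munk(R):
    """R: diffuse reflectance (0 < R <= 1) -> K-M function value."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

# Illustrative calibration: F(R) vs iodine concentration (ppm).
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
refl = np.array([0.78, 0.58, 0.47, 0.35, 0.21])
slope, intercept = np.polyfit(conc, kubelka_munk(refl), 1)

unknown_R = 0.45
estimate = (kubelka_munk(unknown_R) - intercept) / slope
print(f"estimated concentration: {estimate:.2f} ppm")
```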

  5. Numerical difficulties and computational procedures for thermo-hydro-mechanical coupled problems of saturated porous media

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Secchi, S.; Schrefler, B. A.

    2008-12-01

    This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station; these constitute the parameters allowing for time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, and then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method proves efficient and simple for adapting the time step in the solution of coupled field problems.
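
    A hedged sketch of a jump-driven time-step controller in the spirit of the procedure; the control law, exponent, and bounds are assumptions, not the paper's:

```python
# Hedged sketch: the relative jump of the Discontinuous Galerkin solution
# at a time station drives the next step size. Names and the control law
# are assumptions for illustration only.
def next_time_step(dt, jump_norm, sol_norm, tol=1e-3, p=1,
                   grow_max=2.0, shrink_min=0.2):
    """Return the adapted time step from the relative DG jump error."""
    eta = jump_norm / max(sol_norm, 1e-30)   # relative jump at t_n
    factor = (tol / max(eta, 1e-30)) ** (1.0 / (p + 1))
    return dt * min(grow_max, max(shrink_min, factor))

print(next_time_step(0.1, jump_norm=5e-3, sol_norm=1.0))
```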

  6. Radiometric and spectral stray light correction for the portable remote imaging spectrometer (PRISM) coastal ocean sensor

    NASA Astrophysics Data System (ADS)

    Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.

    2017-09-01

    The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
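
    A schematic sketch of the raw-to-radiance chain described, with placeholder arrays; the real PRISM processing is considerably more involved (per-pixel panel corrections, measured spectral response functions):

```python
# Hedged sketch of a DN-to-radiance calibration chain: offset removal,
# spectral response deconvolution, then radiometric scaling. All arrays
# and names are placeholders, not the PRISM pipeline.
import numpy as np

def calibrate(raw_dn, dark_dn, srf_inverse, gain):
    """raw_dn: (bands, samples) frame -> calibrated radiance frame."""
    frame = raw_dn.astype(float) - dark_dn   # remove electronic offset
    frame = srf_inverse @ frame              # undo spectral response mixing
    return gain * frame                      # radiometric scaling

n_bands, n_samples = 4, 5
raw = np.random.randint(100, 4000, size=(n_bands, n_samples))
dark = np.full((n_bands, n_samples), 100.0)
srf_inv = np.eye(n_bands)                    # identity = no SRF correction
gain = np.full((n_bands, 1), 0.01)
print(calibrate(raw, dark, srf_inv, gain))
```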

  7. Protein complex purification from Thermoplasma acidophilum using a phage display library.

    PubMed

    Hubert, Agnes; Mitani, Yasuo; Tamura, Tomohiro; Boicu, Marius; Nagy, István

    2014-03-01

    We developed a novel protein complex isolation method using a single-chain variable fragment (scFv) based phage display library in a two-step purification procedure. We adapted the antibody-based phage display technology which has been developed for single target proteins to a protein mixture containing about 300 proteins, mostly subunits of Thermoplasma acidophilum complexes. T. acidophilum protein specific phages were selected and corresponding scFvs were expressed in Escherichia coli. E. coli cell lysate containing the expressed His-tagged scFv specific against one antigen protein and T. acidophilum crude cell lysate containing intact target protein complexes were mixed, incubated and subjected to protein purification using affinity and size exclusion chromatography steps. This method was confirmed to isolate intact particles of thermosome and proteasome suitable for electron microscopy analysis and provides a novel protein complex isolation strategy applicable to organisms where no genetic tools are available. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. From the Boltzmann to the Lattice-Boltzmann Equation:. Beyond BGK Collision Models

    NASA Astrophysics Data System (ADS)

    Philippi, Paulo Cesar; Hegele, Luiz Adolfo; Surmas, Rodrigo; Siebert, Diogo Nardelli; Dos Santos, Luís Orlando Emerich

    In this work, we present a derivation of the lattice-Boltzmann equation directly from the linearized Boltzmann equation, combining the following main features: multiple relaxation times and thermodynamic consistency in the description of non-isothermal compressible flows. The method presented here is based on the discretization of increasing-order kinetic models of the Boltzmann equation. Following a Gross-Jackson procedure, the linearized collision term is developed in Hermite polynomial tensors and the resulting infinite series is diagonalized after a chosen integer N, establishing the order of approximation of the collision term. The velocity space is discretized in accordance with a quadrature method based on prescribed abscissas (Philippi et al., Phys. Rev. E 73, 056702, 2006). The problem of describing the energy transfer is discussed in relation to the order of approximation of a two-relaxation-times lattice Boltzmann model. The velocity-step, temperature-step, and shock tube problems are investigated, adopting lattices with 37, 53, and 81 velocities.

  9. Synthesis of fluorescent carbon dots by a microwave heating process: structural characterization and cell imaging applications

    NASA Astrophysics Data System (ADS)

    Stefanakis, Dimitrios; Philippidis, Aggelos; Sygellou, Labrini; Filippidis, George; Ghanotakis, Demetrios; Anglos, Demetrios

    2014-10-01

    Two types of highly fluorescent carbon dots (C-dots) were prepared by a single-step procedure based on microwave heating citric acid and 6-aminocaproic acid or citric acid and urea in an aqueous solution. The small size of the isolated carbon dots along with their strong absorption in the UV and their excitation wavelength-dependent fluorescence render them ideal nanomaterials for biomedical applications (imaging and sensing). The structure and properties of the two types of C-dot materials were studied using a series of spectroscopic techniques. The ability of the C-dots to be internalized by HeLa cells was demonstrated via 3-photon fluorescence microscopy imaging.

  10. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  11. Improving liquid chromatography-tandem mass spectrometry determinations by modifying noise frequency spectrum between two consecutive wavelet-based low-pass filtering procedures.

    PubMed

    Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

    2010-04-23

    This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve the peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other low-pass procedures such as matched filters have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. The low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters, yielding the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. The tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram by an artificial chromatogram with added thermal noise before applying the second wavelet-based low-pass filter. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filter procedures alone cannot accomplish more efficient peak S/N ratio improvement in LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with the multiplication alteration before the second low-pass filter, much better ratio improvement is achieved. The noise spectrum of the low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is spread over a broader range by the multiplication operation. As this modified noise spectrum shifts toward higher frequencies, the second low-pass filter provides better filtering efficiency and thus higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically achieve less than a 6-fold peak S/N improvement, no better than a single wavelet-based low-pass filter; with the noise spectrum modified between the two filters, the improvement typically reaches 25- to 40-fold. Linear standard curves using the filtered LC-MS/MS signals were validated, and the filtered signals are reproducible. Determinations of very low concentration samples (S/N ratio about 7-9) are more accurate with the filtered signals than with the original signals. Copyright 2010 Elsevier B.V. All rights reserved.
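
    A rough sketch of the idea on a synthetic chromatogram, assuming PyWavelets; the multiplication step and all parameters are simplified stand-ins for the paper's procedure:

```python
# Hedged sketch of the two-stage wavelet low-pass idea: keep only
# approximation coefficients (a low-pass step), multiply by a unit-mean
# noisy trace to shift residual noise to higher frequencies, low-pass
# again. Wavelet, level, and noise amplitudes are illustrative.
import numpy as np
import pywt

def wavelet_lowpass(signal, wavelet="db4", level=5):
    """Zero all detail coefficients up to `level` and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1024)
peak = np.exp(-((t - 5.0) ** 2) / 0.02)          # chromatographic peak
chrom = peak + 0.3 * rng.normal(size=t.size)     # noisy chromatogram

stage1 = wavelet_lowpass(chrom)
# Multiplication by a unit-mean noisy trace broadens the noise spectrum.
modifier = 1.0 + 0.05 * rng.normal(size=t.size)
stage2 = wavelet_lowpass(stage1 * modifier)
print(f"S/N before: {peak.max() / chrom.std():.1f}, "
      f"after: {stage2.max() / stage2[:200].std():.1f}")
```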

  12. [Cryopreservation of mouse embryos in ethylene glycol-based solutions: a search for the optimal and simple protocols].

    PubMed

    Luo, Ming-Jiu; Liu, Na; Miao, De-Qiang; Lan, Guo-Cheng; Suo-Feng; Chang, Zhong-Le; Tan, Jing-He

    2005-09-01

    Although ethylene glycol (EG) has been widely used for embryo cryopreservation in domestic animals, few attempts were made to use this molecule to freeze mouse and human embryos. In the few studies that used EG for slow-freezing of mouse and human embryos, complicated protocols for human embryos were used, and the protocols need to be simplified. Besides, freezing mouse morula with EG as a cryoprotectant has not been reported. In this paper, we studied the effects of embryo stages, EG concentration, duration and procedure of equilibration, sucrose supplementation and EG removal after thawing on the development of thawed mouse embryos, using the simple freezing and thawing procedures for bovine embryos. The blastulation and hatching rates (81.92% +/- 2.24% and 68.56% +/- 2.43%, respectively) of the thawed late compact morulae were significantly (P < 0.05) higher than those of embryos frozen-thawed at other stages. When mouse late compact morulae were frozen with different concentrations of EG, the highest rates of blastocyst formation and hatching were obtained with 1.8mol/L EG. The blastulation rate was significantly higher when late morulae were equilibrated in 1.8 mol/L EG for 10 min prior to freezing than when they were equilibrated for 30 min, and the hatching rate of embryos exposed to EG for 10 min was significantly higher than that of embryos exposed for 20 and 30 min. Both rates of blastocyst formation and hatching obtained with two-step equilibration were higher (P < 0.05) than with one-step equilibration in 1.8 mol/L EG. Addition of sucrose to the EG-based solution had no beneficial effects. On the contrary, an increased sucrose level (0.4 mol/L) in the solution impaired the development of the frozen-thawed embryos. In contrast, addition of 0.1 mol/L sucrose to the propylene glycol (PG)-based solution significantly improved the development of the frozen-thawed embryos. Elimination of the cryoprotectant after thawing did not improve the development of the thawed embryos. The cell numbers were less (P < 0.05) in blastocysts developed from the thawed morulae than in the in vivo derived ones. In summary, embryo stage, EG concentration, duration and procedure of equilibration and sucrose supplementation had marked effects on development of the thawed mouse embryos, and a protocol for cryopreservation of mouse embryos is recommended in which the late morulae are frozen in 1.8 mol/L EG using the simple freezing and thawing procedures of bovine embryos after a two-step equilibration and the embryos can be cultured or transferred without EG removal after thawing.

  13. Convergent solid-phase synthesis of hirudin.

    PubMed

    Goulas, Spyros; Gatos, Dimitrios; Barlos, Kleomenis

    2006-02-01

    Hirudin variant 1 (HV1), a small protein consisting of 65 amino acids and three disulfide bonds, was synthesized by using Fmoc-based convergent methods on 2-chlorotrityl resin (CLTR). The linear sequence was assembled by the sequential condensation of 7 protected fragments, on the resin-bound 55-65 fragment. The conditions of fragment assembly were carefully studied to determine the most efficient synthetic protocol. Crude reduced [Cys(16, 28)(Acm)]-HV1 thus obtained was easily purified to homogeneity by RP-HPLC. Disulfide bridges were successfully formed by a two-step procedure, involving an oxidative folding step to form Cys(6)-Cys(14) and Cys(22)-Cys(39) linkages, followed by iodine oxidation to form the Cys(16)-Cys(28) bond. The correct disulfide bond alignment was established by peptide mapping using Staphylococcus aureus V8 protease at pH 4.5.

  14. Modular Training for Robot-Assisted Radical Prostatectomy: Where to Begin?

    PubMed

    Lovegrove, Catherine; Ahmed, Kamran; Novara, Giacomo; Guru, Khurshid; Mottrie, Alex; Challacombe, Ben; der Poel, Henk Van; Peabody, James; Dasgupta, Prokar

    Effective training is paramount for patient safety. Modular training entails advancing through surgical steps of increasing difficulty. This study aimed to construct a modular training pathway for use in robot-assisted radical prostatectomy (RARP) and to identify the sequence of procedural steps that are learned before surgeons are able to perform a full procedure without intervention from a mentor. This is a multi-institutional, prospective, observational, longitudinal study. We used a validated training tool (RARP Score). Data regarding surgeons' stage of training and progress were collected for analysis. A modular training pathway was constructed with consensus on the level of difficulty and evaluation of individual steps. We identified and recorded the sequence of steps performed by fellows during their learning curves. We included 15 urology fellows from the UK, Europe, and Australia, assessed by mentors in 425 RARP cases over 8 months (range: 7-79) across 15 international centers. There were substantial differences between the chronology of the procedure, the difficulty level, and the order in which surgeons actually learned the steps; steps were not attempted in chronological order. The greater the difficulty, the later the cohort first undertook the step (p = 0.021). The cohort undertook steps of difficulty level I at median case number 1; steps of difficulty levels II, III, and IV showed more variation in the median case number of the first attempt. We recommend that, in the operating theater, steps be learned in order of increasing difficulty. A new modular training route has been designed, incorporating the steps of RARP with the following order of priority: difficulty level > median case number of first attempt > most frequently undertaken in surgical training. An evidence-based modular training pathway has been developed that facilitates a safe introduction to RARP for novice surgeons. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Fluoroscopy guided percutaneous renal access in prone position

    PubMed Central

    Sharma, Gyanendra R; Maheshwari, Pankaj N; Sharma, Anshu G; Maheshwari, Reeta P; Heda, Ritwik S; Maheshwari, Sakshi P

    2015-01-01

    Percutaneous nephrolithotomy is a very commonly performed procedure for the management of renal calculus disease. Establishing a good access is the first and probably the most crucial step of this procedure; a proper access is the gateway to success. However, this step has the steepest learning curve, for a fluoroscopy-guided access requires visualizing a three-dimensional anatomy on a two-dimensional fluoroscopy screen. This review describes the anatomical basis of renal access and surveys all aspects of percutaneous renal access along with the advances that have taken place in this field over the years. The article describes a technique to determine the site of skin puncture and the angle and depth of puncture using a simple mathematical principle. It also reviews the common problems faced during puncture and dilatation and describes ways to overcome them. The aim of this article is to provide the reader a step-by-step guide to percutaneous renal access. PMID:25789297
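
    A hedged sketch of the kind of simple geometry such planning can rest on: given a lateral offset of the skin entry site and the depth of the target calyx (both estimated from fluoroscopy views), right-triangle trigonometry gives a needle angle and tract length. This is an illustration under those assumptions, not the article's actual technique:

```python
# Hypothetical planning sketch: puncture angle and tract length from a
# right-triangle model of the access. Values are invented.
import math

def puncture_plan(lateral_offset_cm, target_depth_cm):
    angle = math.degrees(math.atan2(target_depth_cm, lateral_offset_cm))
    tract = math.hypot(lateral_offset_cm, target_depth_cm)
    return angle, tract

angle, tract = puncture_plan(lateral_offset_cm=4.0, target_depth_cm=8.0)
print(f"puncture angle ~{angle:.0f} deg from skin, tract ~{tract:.1f} cm")
```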

  16. Dynamic analysis method for prevention of failure in the first-stage low-pressure turbine blade with two-finger root

    NASA Astrophysics Data System (ADS)

    Park, Jung-Yong; Jung, Yong-Keun; Park, Jong-Jin; Kang, Yong-Ho

    2002-05-01

    Failures of turbine blades are identified as the leading causes of unplanned outages for steam turbines. Low-pressure turbine blade failures account for more than 70 percent of turbine component failures, so preventing them is essential. The procedure is illustrated by a case study and is used to guide and support the plant manager's decisions to avoid a costly, unplanned outage. In this study, we identify the factors behind LP turbine blade failures and take three steps toward a solution. The first step is to measure the natural frequency in a mockup test and compare it with the nozzle passing frequency. The second step is to use FEM to calculate the natural frequencies of 7-blade and 10-blade groups with the BLADE code. The third step is to shift the natural frequencies of the grouped blades away from the nozzle passing frequency.
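
    A toy version of the frequency check in the first step: flag grouped-blade natural frequencies that fall within a margin of the nozzle passing frequency or its harmonics. All numbers are invented for illustration:

```python
# Hedged sketch of resonance screening: a natural frequency is flagged
# if it lies within `margin` (fractional) of k times the nozzle passing
# frequency (NPF). Frequencies and margin are illustrative.
def resonance_flags(natural_freqs_hz, npf_hz, margin=0.10, harmonics=3):
    flags = []
    for f in natural_freqs_hz:
        for k in range(1, harmonics + 1):
            if abs(f - k * npf_hz) <= margin * k * npf_hz:
                flags.append((f, k))
    return flags

# 60 nozzles at 3600 rpm -> NPF = 60 * 3600 / 60 = 3600 Hz.
npf = 60 * 3600 / 60
print(resonance_flags([3550.0, 5100.0, 7300.0], npf))
```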

  17. Three children with autism spectrum disorder learn to perform a three-step communication sequence using an iPad®-based speech-generating device.

    PubMed

    Waddington, Hannah; Sigafoos, Jeff; Lancioni, Giulio E; O'Reilly, Mark F; van der Meer, Larah; Carnett, Amarie; Stevens, Michelle; Roche, Laura; Hodis, Flaviu; Green, Vanessa A; Sutherland, Dean; Lang, Russell; Marschik, Peter B

    2014-12-01

    Many children with autism spectrum disorder (ASD) have limited or absent speech and might therefore benefit from learning to use a speech-generating device (SGD). The purpose of this study was to evaluate a procedure aimed at teaching three children with ASD to use an iPad(®)-based SGD to make a general request for access to toys, then make a specific request for one of two toys, and then communicate a thank-you response after receiving the requested toy. A multiple-baseline across participants design was used to determine whether systematic instruction involving least-to-most-prompting, time delay, error correction, and reinforcement was effective in teaching the three children to engage in this requesting and social communication sequence. Generalization and follow-up probes were conducted for two of the three participants. With intervention, all three children showed improvement in performing the communication sequence. This improvement was maintained with an unfamiliar communication partner and during the follow-up sessions. With systematic instruction, children with ASD and severe communication impairment can learn to use an iPad-based SGD to complete multi-step communication sequences that involve requesting and social communication functions. Copyright © 2014 ISDN. Published by Elsevier Ltd. All rights reserved.

  18. A templated approach for multi-physics modeling of hybrid energy systems in Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwood, Michael Scott; Cetiner, Sacit M.; Harrison, Thomas J.

    A prototypical hybrid energy system (HES) couples a primary thermal power generator (i.e., nuclear power plant) with one or more additional subsystems beyond the traditional balance-of-plant electricity generation system. The definition and architecture of an HES can be adapted based on the needs and opportunities of a given local market. For example, locations in need of potable water may be best served by coupling a desalination plant to the HES. A location near an oil refinery may have a need for emission-free hydrogen production. The flexible, multidomain capabilities of Modelica are being used to investigate the dynamics (e.g., thermal hydraulics and electrical generation/consumption) of such a hybrid system. This paper examines the simulation infrastructure created to enable the coupling of multiphysics subsystem models for HES studies. A demonstration of a tightly coupled nuclear hybrid energy system implemented using the Modelica-based infrastructure is presented for two representative cases. An appendix is also included providing a step-by-step procedure for using the template-based infrastructure.

  19. A procedure of landscape services assessment based on mosaics of patches and boundaries.

    PubMed

    Martín de Agar, Pilar; Ortega, Marta; de Pablo, Carlos L

    2016-09-15

    We develop a procedure for assessing the environmental value of landscape mosaics that simultaneously considers the values of land use patches and the values of the boundaries between them. These boundaries indicate the ecological interactions between the patches. A landscape mosaic is defined as a set of patches and the boundaries between them and corresponds to a spatial pattern of ecological interactions. The procedure is performed in two steps: (i) an environmental assessment of land use patches by means of a function that integrates values based on the goods and services the patches provide, and (ii) an environmental valuation of mosaics using a function that integrates the environmental values of their patches and the types and frequencies of the boundaries between them. This procedure allows us to measure how changes in land uses or in their spatial arrangement cause variations in the environmental value of landscape mosaics and therefore in that of the whole landscape. The procedure was tested in the Sierra Norte of Madrid (central Spain). The results show that the environmental values of the landscape depend not only on the land use patches but also on the values associated with the pattern of the boundaries within the mosaics. The results also highlight the importance of the boundaries between land use patches as determinants of the goods and services provided by the landscape. Copyright © 2016 Elsevier Ltd. All rights reserved.
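
    A hedged sketch of the two-step valuation, with invented patch and boundary values and an assumed linear weighting; the paper's actual integration functions are not reproduced here:

```python
# Hedged sketch: value each land-use patch from the services it provides,
# then value a mosaic by combining patch values with the frequencies of
# boundary types between patches. All values and weights are invented.
patch_value = {"forest": 0.9, "pasture": 0.6, "urban": 0.2}

# Boundaries as (use_a, use_b) -> frequency in the mosaic, and their values.
boundaries = {("forest", "pasture"): 5, ("pasture", "urban"): 3}
boundary_value = {("forest", "pasture"): 0.8, ("pasture", "urban"): 0.3}

def mosaic_value(patches, boundaries, alpha=0.5):
    """alpha weights mean patch value against mean boundary value."""
    pv = sum(patch_value[p] for p in patches) / len(patches)
    total = sum(boundaries.values())
    bv = sum(boundary_value[b] * n for b, n in boundaries.items()) / total
    return alpha * pv + (1 - alpha) * bv

print(mosaic_value(["forest", "forest", "pasture", "urban"], boundaries))
```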

  20. How robotic-assisted surgery can decrease the risk of mucosal tear during Heller myotomy procedure?

    PubMed

    Ballouhey, Quentin; Dib, Nabil; Binet, Aurélien; Carcauzon-Couvrat, Véronique; Clermidi, Pauline; Longis, Bernard; Lardy, Hubert; Languepin, Jane; Cros, Jérôme; Fourcade, Laurent

    2017-06-01

    We report the first description of robotic-assisted Heller myotomy in children. The purpose of this study was to improve the safety of Heller myotomy by demonstrating, in two adolescent patients, the contribution of the robot to the different steps of this procedure. Owing to the robot's freedom of movement and three-dimensional vision, accuracy improved and safety was gained at several key points, decreasing the risk of mucosal perforation associated with this procedure.

  1. Waveform distortion by 2-step modeling ground vibration from trains

    NASA Astrophysics Data System (ADS)

    Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.

    2017-10-01

    The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. In order to reveal this modeling error, time histories of ground-borne vibrations were computed based on the 2-step procedure and then compared with the results from a benchmark procedure of the whole system. All parameters involved were intentionally set as equal for the 2 methods, which ensures that differences in the results originated from the inconsistencies of the ground model. For wheel loads at low speeds such as 60 km/h and low frequencies below 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when load frequencies equaled 16 Hz, not only at the subgrade but also at points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to loads at very low frequencies. The modeling error found herein suggests that the ground models in the 2 steps should be calibrated in terms of the frequency bands to be investigated, and the speed of the train should be taken into account at the same time.

  2. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1980-01-01

    Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
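
    A bookkeeping sketch of the double linear damage rule just described, with invented phase-1 and phase-2 lives; the real method derives these analytically from the constants of a conventional linear-damage analysis:

```python
# Hedged sketch of double-linear-damage-rule bookkeeping: cycle ratios
# are summed against phase-1 lives until that sum reaches unity, then
# against phase-2 lives until failure. Life values are invented.
def dldr_failure(loading, phase1_life, phase2_life):
    """loading: list of (block_label, cycles). Returns a verdict string."""
    d1 = d2 = 0.0
    for label, n in loading:
        if d1 < 1.0:                      # still in phase 1
            d1 += n / phase1_life[label]
            if d1 >= 1.0:
                print(f"phase 1 exhausted during block {label!r}")
        else:                             # accumulate on phase 2 lives
            d2 += n / phase2_life[label]
            if d2 >= 1.0:
                return f"failure predicted during block {label!r}"
    return f"survives: phase 1 sum {d1:.2f}, phase 2 sum {d2:.2f}"

p1 = {"high": 2_000, "low": 80_000}
p2 = {"high": 8_000, "low": 320_000}
print(dldr_failure([("high", 1_500), ("low", 30_000), ("high", 5_000)], p1, p2))
```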

  3. Protein composition of wheat gluten polymer fractions determined by quantitative two-dimensional gel electrophoresis and tandem mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    Flour proteins from the US bread wheat Butte 86 were extracted in 0.5% SDS using a two-step procedure with and without sonication and further separated by size exclusion chromatography into monomeric and polymeric fractions. Proteins in each fraction were analyzed by quantitative two-dimensional gel...

  4. TU-D-201-07: Severity Indication in High Dose Rate Brachytherapy Emergency Response Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, K; Rustad, F

    Purpose: Understanding the dose to different staff during the High Dose Rate (HDR) brachytherapy emergency response procedure could help develop an efficient and effective response strategy. In this study, a variation and risk analysis methodology was developed to simulate the HDR emergency response procedure based on severity indicators. Methods: A GammaMedplus iX HDR unit from Varian Medical Systems was used for this simulation. The emergency response procedure was decomposed based on risk management methods. Severity indexes were used to identify the impact of a risk occurrence at each step, including dose to the patient and dose to operating staff, by varying the time, HDR source activity, distance from the source to patient and staff, and the actions taken. The actions in the 7 steps were to press the interrupt button, press the emergency shutoff switch, press the emergency button on the afterloader keypad, turn the emergency hand-crank, remove the applicator from the patient, disconnect the transfer tube and move the afterloader from the patient, and execute emergency surgical recovery. Results: Given accumulated times in seconds at the assumed 7 steps of 15, 5, 30, 15, 180, 120, and 1800, and a 10 Ci HDR source, the accumulated doses in cGy to the patient at 1 cm distance were 188, 250, 625, 813, 3063, 4563, and 27063, and the accumulated exposures in rem to the operator outside the vault, at 1 m, and at 10 cm distance were 0.0, 0.0, 0.1, 0.1, 22.6, 37.6, and 262.6. The variation was determined by the operators' actions at different times and distances from the HDR source. Conclusion: The time and dose were estimated for an HDR unit emergency response procedure, providing information for making optimal decisions during an emergency. Further investigation would optimize and standardize the responses for other emergency procedures by a time-spatial-dose severity function.
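
    A toy reconstruction of the time-and-distance dose bookkeeping in the abstract, assuming a simple inverse-square model and an invented dose-rate constant (not calibrated to a real Ir-192 source):

```python
# Hedged sketch: dose accumulates with exposure time and falls off with
# the inverse square of distance from the source. The rate constant is
# an assumption for illustration only.
DOSE_RATE_CGY_PER_S_AT_1CM = 12.5   # assumed rate for a 10 Ci source

step_times_s = [15, 5, 30, 15, 180, 120, 1800]   # per-step durations

def accumulated_dose(distances_cm):
    """Cumulative dose (cGy) after each step at the given distances."""
    total, history = 0.0, []
    for t, d in zip(step_times_s, distances_cm):
        total += DOSE_RATE_CGY_PER_S_AT_1CM * t / d**2
        history.append(round(total, 1))
    return history

# Patient applicator at 1 cm for every step:
print(accumulated_dose([1.0] * 7))
```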

  5. Wide brick tunnel randomization - an unequal allocation procedure that limits the imbalance in treatment totals.

    PubMed

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-04-30

    In open-label studies, partial predictability of permuted block randomization provides potential for selection bias. To lessen the selection bias in two-arm studies with equal allocation, a number of allocation procedures that limit the imbalance in treatment totals at a pre-specified level but do not require the exact balance at the ends of the blocks were developed. In studies with unequal allocation, however, the task of designing a randomization procedure that sets a pre-specified limit on imbalance in group totals is not resolved. Existing allocation procedures either do not preserve the allocation ratio at every allocation or do not include all allocation sequences that comply with the pre-specified imbalance threshold. Kuznetsova and Tymofyeyev described the brick tunnel randomization for studies with unequal allocation that preserves the allocation ratio at every step and, in the two-arm case, includes all sequences that satisfy the smallest possible imbalance threshold. This article introduces wide brick tunnel randomization for studies with unequal allocation that allows all allocation sequences with imbalance not exceeding any pre-specified threshold while preserving the allocation ratio at every step. In open-label studies, allowing a larger imbalance in treatment totals lowers selection bias because of the predictability of treatment assignments. The applications of the technique in two-arm and multi-arm open-label studies with unequal allocation are described. Copyright © 2013 John Wiley & Sons, Ltd.
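
    A hedged sketch of imbalance-constrained unequal allocation in the spirit of the approach described; this is not the authors' algorithm, just an illustration of bounding the imbalance in treatment totals while preserving a 2:1 ratio on average:

```python
# Hedged sketch: at each allocation, draw arms with probabilities
# proportional to the target ratio, excluding any arm whose assignment
# would push the maximum deviation from the target totals beyond a
# pre-specified threshold. Parameters are illustrative.
import random

def constrained_sequence(n, ratio=(2, 1), threshold=2, seed=1):
    rng = random.Random(seed)
    total = sum(ratio)
    counts = [0] * len(ratio)
    seq = []
    for i in range(1, n + 1):
        def imbalance_if(arm):
            c = counts.copy()
            c[arm] += 1
            return max(abs(c[a] - i * ratio[a] / total)
                       for a in range(len(ratio)))
        allowed = [a for a in range(len(ratio)) if imbalance_if(a) <= threshold]
        if not allowed:                    # fall back to least-imbalanced arm
            allowed = [min(range(len(ratio)), key=imbalance_if)]
        arm = rng.choices(allowed, weights=[ratio[a] for a in allowed])[0]
        counts[arm] += 1
        seq.append(arm)
    return seq

print(constrained_sequence(12))
```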

  6. Bonding effectiveness of self-etch adhesives to dentin after 24 h water storage.

    PubMed

    Sarr, Mouhamed; Benoist, Fatou Leye; Bane, Khaly; Aidara, Adjaratou Wakha; Seck, Anta; Toure, Babacar

    2018-01-01

    This study evaluated the immediate bonding effectiveness of five self-etch adhesive systems bonded to dentin. The microtensile bond strength (μTBS) of five self-etch adhesive systems, including one two-step and four one-step self-etch adhesives, to dentin was measured. Human third molars had their superficial dentin surface exposed, after which a standardized smear layer was produced using a medium-grit diamond bur. The selected adhesives were applied according to their respective manufacturers' instructions for μTBS measurement after storage in water at 37°C for 24 h. The μTBS varied from 11.1 to 44.3 MPa; the highest bond strength was obtained with the two-step self-etch adhesive Clearfil SE Bond and the lowest with the one-step self-etch adhesive Adper Prompt L-Pop. Pre-testing failures, mainly occurring during sectioning with the slow-speed diamond saw, were observed only with the one-step self-etch adhesive Adper Prompt L-Pop (4 out of 18). When bonded to dentin, self-etch adhesives with simplified application procedures (one-step self-etch adhesives) still underperform compared with the two-step self-etch adhesive Clearfil SE Bond.

  7. Bonding effectiveness of self-etch adhesives to dentin after 24 h water storage

    PubMed Central

    Sarr, Mouhamed; Benoist, Fatou Leye; Bane, Khaly; Aidara, Adjaratou Wakha; Seck, Anta; Toure, Babacar

    2018-01-01

    Purpose: This study evaluated the immediate bonding effectiveness of five self-etch adhesive systems bonded to dentin. Materials and Methods: The microtensile bond strength (μTBS) of five self-etch adhesive systems, including one two-step and four one-step self-etch adhesives, to dentin was measured. Human third molars had their superficial dentin surface exposed, after which a standardized smear layer was produced using a medium-grit diamond bur. The selected adhesives were applied according to their respective manufacturers' instructions for μTBS measurement after storage in water at 37°C for 24 h. Results: The μTBS varied from 11.1 to 44.3 MPa; the highest bond strength was obtained with the two-step self-etch adhesive Clearfil SE Bond and the lowest with the one-step self-etch adhesive Adper Prompt L-Pop. Pre-testing failures, mainly occurring during sectioning with the slow-speed diamond saw, were observed only with the one-step self-etch adhesive Adper Prompt L-Pop (4 out of 18). Conclusions: When bonded to dentin, self-etch adhesives with simplified application procedures (one-step self-etch adhesives) still underperform compared with the two-step self-etch adhesive Clearfil SE Bond. PMID:29674814

  8. Isolation and purification of all-trans diadinoxanthin and all-trans diatoxanthin from diatom Phaeodactylum tricornutum.

    PubMed

    Kuczynska, Paulina; Jemiola-Rzeminska, Malgorzata

    2017-01-01

    Two diatom-specific carotenoids are engaged in the diadinoxanthin cycle, an important mechanism that protects these organisms against photoinhibition caused by absorption of excessive light energy. A high-performance, economical four-step procedure for the isolation and purification of diadinoxanthin and diatoxanthin from the marine diatom Phaeodactylum tricornutum has been developed. It is based on commonly available materials and does not require advanced technology. Extraction of pigments, saponification, separation by partition, and open column chromatography, which comprise the complete experimental procedure, can be performed within 2 days. This method yields HPLC-grade diadinoxanthin and diatoxanthin with a purity of 99% or more, with efficiencies estimated at 63% for diadinoxanthin and 73% for diatoxanthin. Carefully selected diatom culture conditions, as well as analytical conditions, ensure highly reproducible performance. The protocol can be used to isolate and purify the diadinoxanthin cycle pigments on both analytical and preparative scales.

  9. Remote magnetic navigation for accurate, real-time catheter positioning and ablation in cardiac electrophysiology procedures.

    PubMed

    Filgueiras-Rama, David; Estrada, Alejandro; Shachar, Josh; Castrejón, Sergio; Doiny, David; Ortega, Marta; Gang, Eli; Merino, José L

    2013-04-21

    New remote navigation systems have been developed to improve current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrium flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid changes (msec) in the magnetic field magnitude and a very flexible magnetized catheter allow real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate.

  10. Remote Magnetic Navigation for Accurate, Real-time Catheter Positioning and Ablation in Cardiac Electrophysiology Procedures

    PubMed Central

    Filgueiras-Rama, David; Estrada, Alejandro; Shachar, Josh; Castrejón, Sergio; Doiny, David; Ortega, Marta; Gang, Eli; Merino, José L.

    2013-01-01

    New remote navigation systems have been developed to improve current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrium flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid changes (msec) in the magnetic field magnitude and a very flexible magnetized catheter allow real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate. PMID:23628883

  11. Culture Three Ways: Culture and Subcultures Within Countries.

    PubMed

    Oyserman, Daphna

    2017-01-03

    Culture can be thought of as a set of everyday practices and a core theme (individualism, collectivism, or honor), as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. There are two main methods to do so: the first involves using between-group comparisons to highlight differences, and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) it shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.

  12. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, a rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 in. for the starting configuration to 0.00367 in. by the end of the third run.
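
    A minimal sketch of the first (least-squares) step, with random placeholder basis shapes standing in for the sixteen mode/rigid-body/residual shapes named above:

```python
# Hedged sketch: fit basis-function weights by least squares so the
# combined shape matches a target trim shape. Basis shapes here are
# random placeholders, not the aircraft's mode shapes.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_basis = 200, 16
basis = rng.normal(size=(n_nodes, n_basis))   # columns = basis shapes
target = basis @ rng.normal(size=n_basis)     # synthetic target shape

# Step 1: least-squares starting design variables.
w0, *_ = np.linalg.lstsq(basis, target, rcond=None)

residual = target - basis @ w0
print(f"max trim shape error after LSQ fit: {np.abs(residual).max():.3e}")
# Step 2 (not shown): numerical optimization further tunes w0.
```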

  13. Direct Reconstruction of Two-Dimensional Currents in Thin Films from Magnetic-Field Measurements

    NASA Astrophysics Data System (ADS)

    Meltzer, Alexander Y.; Levin, Eitan; Zeldov, Eli

    2017-12-01

    An accurate determination of microscopic transport and magnetization currents is of central importance for the study of the electric properties of low-dimensional materials and interfaces, of superconducting thin films, and of electronic devices. Current distribution is usually derived from the measurement of the perpendicular component of the magnetic field above the surface of the sample, followed by numerical inversion of the Biot-Savart law. The inversion is commonly obtained by deriving the current stream function g , which is then differentiated in order to obtain the current distribution. However, this two-step procedure requires filtering at each step and, as a result, oversmooths the solution. To avoid this oversmoothing, we develop a direct procedure for inversion of the magnetic field that avoids use of the stream function. This approach provides enhanced accuracy of current reconstruction over a wide range of noise levels. We further introduce a reflection procedure that allows for the reconstruction of currents that cross the boundaries of the measurement window. The effectiveness of our approach is demonstrated by several numerical examples.
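
    For contrast, a sketch of the conventional two-step stream-function inversion that the paper improves upon, using the standard thin-film relation B̂z(k, z) = (μ0/2) k e^(−kz) ĝ(k); the grid, measurement height, and crude low-pass filter are assumptions:

```python
# Hedged sketch of conventional two-step inversion: recover the stream
# function g from a Bz map in Fourier space, then differentiate
# spectrally to get the sheet current J = (dg/dy, -dg/dx).
import numpy as np

MU0 = 4e-7 * np.pi

def invert_bz(bz, dx, z, kmax_frac=0.5):
    ny, nx = bz.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    k[0, 0] = 1.0                                 # placeholder; zeroed below
    g_hat = 2.0 * np.fft.fft2(bz) * np.exp(k * z) / (MU0 * k)
    g_hat[0, 0] = 0.0                             # k = 0 mode unrecoverable
    g_hat[k > kmax_frac * k.max()] = 0.0          # crude regularization
    jx = np.real(np.fft.ifft2(1j * KY * g_hat))   # Jx =  dg/dy
    jy = np.real(np.fft.ifft2(-1j * KX * g_hat))  # Jy = -dg/dx
    return jx, jy

bz = np.random.default_rng(0).normal(size=(64, 64)) * 1e-6   # tesla
jx, jy = invert_bz(bz, dx=1e-6, z=5e-7)
print(jx.shape, jy.shape)
```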

  14. Laser-induced thermal ablation of cancerous cell organelles.

    PubMed

    Letfullin, Renat R; Szatkowski, Scott A

    2017-07-01

    By exploiting the physical changes experienced by cancerous organelles, we investigate the feasibility of destroying cancerous cells by single- and multipulse modes of laser heating. Our procedure consists of two primary steps: determining the optical properties of normal and cancerous organelles, and simulating the heating of all of the major organelles in the cell to find treatment modes for the laser ablation of cancerous organelles without harming healthy cells. Our simulations show that the cancerous nucleus can be selectively heated to damaging temperatures, making the nucleus itself a feasible therapeutic particle and removing the need for nanoparticle injection. Because this extra step is removed, the procedure we propose is simpler and safer for the patient.

  15. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
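
    A generic least-squares iterative phase-extraction sketch in the same spirit (an AIA-style alternation on synthetic single-direction fringes, not the paper's five-frame crossed-fringe formulation):

```python
# Hedged sketch: alternate least-squares solves for the per-pixel phase
# (given the frame phase shifts) and for the per-frame phase shifts
# (given the phase), using I_k = a + b*cos(phi + d_k). Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
h = w = 32
true_phase = rng.uniform(0, 2 * np.pi, size=(h, w))
true_shifts = np.array([0.0, 1.4, 3.2, 4.5, 5.9])       # unknown in practice
frames = np.array([1 + 0.8 * np.cos(true_phase + d) for d in true_shifts])
data = frames.reshape(len(true_shifts), -1)             # (frames, pixels)

shifts = np.linspace(0, 2 * np.pi, 5, endpoint=False)   # initial guess
for _ in range(50):
    # I_k = a + (b cos(phi)) cos(d_k) + (-b sin(phi)) sin(d_k)
    A = np.column_stack([np.ones_like(shifts), np.cos(shifts), np.sin(shifts)])
    c, *_ = np.linalg.lstsq(A, data, rcond=None)        # (3, pixels)
    phase = np.arctan2(-c[2], c[1])
    # Symmetric solve for the shifts, with the roles exchanged.
    B = np.column_stack([np.ones(phase.size), np.cos(phase), np.sin(phase)])
    s, *_ = np.linalg.lstsq(B, data.T, rcond=None)      # (3, frames)
    shifts = np.arctan2(-s[2], s[1])

err = np.angle(np.exp(1j * (phase.reshape(h, w) - true_phase)))
print(f"rms phase error: {np.std(err - err.mean()):.2e} rad")
```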

  16. Improving Preschool Teachers Attitude towards the Persona Doll Approach and Determining the Effectiveness of Persona Doll Training Procedures

    ERIC Educational Resources Information Center

    Acar, Ebru Aktan; Çetin, Hilal

    2017-01-01

    The study features two basic steps. The first step of the research aims to develop a scale to measure the attitude of preschool teachers towards the Persona Dolls Approach and to verify its validity/reliability through a general survey. The cohort employed in the research was drawn from a pool of preschool teachers working in and around the cities…

  17. A pancreas-preserving technique for the management of symptomatic pancreatic anastomotic insufficiency refractory to conservative treatment after pancreas head resection.

    PubMed

    Königsrainer, Ingmar; Zieker, Derek; Beckert, Stefan; Glatzle, Jörg; Schroeder, Torsten H; Heininger, Alexandra; Nadalin, Silvio; Königsrainer, Alfred

    2010-08-01

    Management of symptomatic pancreatic anastomotic insufficiency after pancreas head resection remains controversial. Completion pancreatectomy, one frequently performed option, is associated with poor prognosis. During a 4-year period, a two-step strategy was applied in four consecutive patients suffering from pancreatic anastomotic insufficiency refractory to conservative management after a pancreas head resection. In the first step, sepsis was bridged by meticulous debridement and resection of the pancreaticojejunostomy, leaving the biliary anastomosis untouched, with selective drainage of the pancreatic duct as well as the peripancreatic area. In the second step, after recovery, the procedure was completed with a novel pancreaticojejunostomy. The surgical procedure was completed in three patients after a mean of 164 (range: 112-213) days. One patient died from cardiac arrest 54 days after the reoperation, with resolved abdominal sepsis. No pancreatic anastomotic insufficiency occurred after the new pancreaticojejunostomy had been performed. Three patients are alive and tumor-free with normal exocrine and endocrine pancreatic function after a mean follow-up of 20.3 (3-38) months following the definitive reconstruction. The two-step pancreas-preserving strategy can be used as an alternative to completion pancreatectomy for patients suffering from severe pancreatic anastomotic insufficiency.

  18. Designing Illustrations for CBVE Technical Procedures.

    ERIC Educational Resources Information Center

    Laugen, Ronald C.

    A model was formulated for developing functional illustrations for text-based competency-based vocational education (CBVE) instructional materials. The proposed model contained four prescriptive steps that address the events of instruction to be provided or supported and the locations, content, and learning cues for each illustration. Usefulness…

  19. [The physical and health status of runaway slaves announced in Jornal do Commercio (RJ) in 1850].

    PubMed

    Amantino, Márcia

    2007-01-01

    The article examines the state of health of a population of runaway slaves, based on announcements published in Rio de Janeiro's Jornal do Commercio in 1850. Two strategies were used. The first entailed analysis of the slaves' physical characteristics, as described by their masters. Taking into account the slave's health, the second step was to describe his or her physical problems as viewed by the era's medical or folk knowledge. This evidence can be traced to procedures found in the slave system, which sought to maximize use of captives.

  20. Quantitative metabolomics of the thermophilic methylotroph Bacillus methanolicus.

    PubMed

    Carnicer, Marc; Vieira, Gilles; Brautaset, Trygve; Portais, Jean-Charles; Heux, Stephanie

    2016-06-01

    The gram-positive bacterium Bacillus methanolicus MGA3 is a promising candidate for methanol-based biotechnologies. Accurate determination of intracellular metabolites is crucial for engineering this bacterium into an efficient microbial cell factory. Due to the diversity of chemical and cell properties, an experimental protocol validated on B. methanolicus is needed. Here, a systematic evaluation of different techniques for establishing a reliable basis for metabolome investigations is presented. Metabolome analysis was focused on metabolites closely linked with the central methanol metabolism of B. methanolicus. As an alternative to cold-solvent-based procedures, a solvent-free quenching strategy using stainless steel beads cooled to -20 °C was assessed. The precision, the consistency of the measurements, and the extent of metabolite leakage from quenched cells were evaluated in procedures with and without cell separation. The most accurate and reliable performance was provided by the method without cell separation, as significant metabolite leakage occurred in the procedures based on fast filtration. As a biological test case, the best protocol was used to assess the metabolome of B. methanolicus grown in chemostat on methanol at two different growth rates, and its validity was demonstrated. The presented protocol is a first and helpful step towards reliable metabolomics data for the thermophilic methylotroph B. methanolicus and will help in designing an efficient methylotrophic cell factory.

  1. Isolation of plasmodesmata from Arabidopsis suspension culture cells.

    PubMed

    Grison, Magali S; Fernandez-Calvino, Lourdes; Mongrand, Sébastien; Bayer, Emmanuelle M F

    2015-01-01

    Due to their position firmly anchored within the plant cell wall, plasmodesmata (PD) are notoriously difficult to isolate from plant tissue. Yet, getting access to isolated PD represents the most straightforward strategy for the identification of their molecular components. Proteomic and lipidomic analyses of such PD fractions have provided and will continue to provide critical information on the functional and structural elements that define these membranous nano-pores. Here, we describe a simple two-step purification procedure that allows isolation of pure PD-derived membranes from Arabidopsis suspension cells. The first step of this procedure consists of isolating cell wall fragments containing intact PD while free of contamination from other cellular compartments. The second step relies on an enzymatic degradation of the wall matrix and the subsequent release of "free" PD. Isolated PD membranes provide a suitable starting material for the analysis of PD-associated proteins and lipids.

  2. Microtensile bond strength of eleven contemporary adhesives to enamel.

    PubMed

    Inoue, Satoshi; Vargas, Marcos A; Abe, Yasuhiko; Yoshida, Yasuhiro; Lambrechts, Paul; Vanherle, Guido; Sano, Hidehiko; Van Meerbeek, Bart

    2003-10-01

    To compare the microtensile bond strength (microTBS) to enamel of 10 contemporary adhesives, including three one-step self-etch systems, four two-step self-etch systems and three two-step total-etch systems, with that of a conventional three-step total-etch adhesive. Resin composite (Z100, 3M) was bonded to flat, #600-grit wet-sanded enamel surfaces of 18 extracted human third molars using the adhesives strictly according to the respective manufacturer's instructions. After storage overnight in 37 degrees C water, the bonded specimens were sectioned into 2-4 thin slabs of approximately 1 mm thickness and 2.5 mm width. They were then trimmed into an hourglass shape with an interface area of approximately 1 mm2, and subsequently subjected to microTBS-testing with a cross-head speed of 1 mm/minute. The microTBS to enamel varied from 3.2 MPa for the experimental one-step self-etch adhesive PQ/Universal (self-etch) to 43.9 MPa for the two-step total-etch adhesive Scotchbond 1. When compared with the conventional three-step total-etch adhesive OptiBond FL, the bond strengths of most adhesives with simplified application procedures were not significantly different, except for two one-step self-etch adhesives, experimental PQ/Universal (self-etch) and One-up Bond F, that showed lower bond strengths. Specimen failures during sample preparation were recorded for the latter adhesives as well.

  3. One-step selection of Vaccinia virus-binding DNA aptamers by MonoLEX

    PubMed Central

    Nitsche, Andreas; Kurth, Andreas; Dunkhorst, Anna; Pänke, Oliver; Sielaff, Hendrik; Junge, Wolfgang; Muth, Doreen; Scheller, Frieder; Stöcklein, Walter; Dahmen, Claudia; Pauli, Georg; Kage, Andreas

    2007-01-01

    Background: More than fifteen years ago, RNA and DNA aptamers were identified as a new class of therapeutic and diagnostic reagents that bind numerous small compounds, proteins and, rarely, even complete pathogen particles. Most aptamers were isolated from complex libraries of synthetic nucleic acids by a process termed SELEX, based on several selection and amplification steps. Here we report the application of a new one-step selection method (MonoLEX) to acquire high-affinity DNA aptamers binding Vaccinia virus, used as a model organism for complex target structures. Results: The selection against complete Vaccinia virus particles resulted in a 64-base DNA aptamer specifically binding to orthopoxviruses, as validated by dot blot analysis, Surface Plasmon Resonance, Fluorescence Correlation Spectroscopy and real-time PCR following an aptamer blotting assay. The same oligonucleotide showed the ability to inhibit in vitro infection of Vaccinia virus and other orthopoxviruses in a concentration-dependent manner. Conclusion: The MonoLEX method is a straightforward procedure, as demonstrated here by the identification of a high-affinity DNA aptamer binding Vaccinia virus. MonoLEX comprises a single affinity chromatography step, followed by physical segmentation of the affinity resin and a single final PCR amplification step of bound aptamers. This procedure improves the selection of high-affinity aptamers by reducing the competition between aptamers of different affinities during the PCR step, indicating an advantage for the single-round MonoLEX method. PMID:17697378

  4. An Evaluator's Guide to Using DB MASTER: A Microcomputer Based File Management Program. Research on Evaluation Program, Paper and Report Series No. 91.

    ERIC Educational Resources Information Center

    Gray, Peter J.

    Ways a microcomputer can be used to establish and maintain an evaluation database and types of data management features possible on a microcomputer are described in this report, which contains step-by-step procedures and numerous examples for establishing a database, manipulating data, and designing and printing reports. Following a brief…

  5. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  6. Correction of photoresponse nonuniformity for matrix detectors based on prior compensation for their nonlinear behavior.

    PubMed

    Ferrero, Alejandro; Campos, Joaquin; Pons, Alicia

    2006-04-10

    What we believe to be a novel procedure to correct the nonuniformity that is inherent in all matrix detectors has been developed and experimentally validated. This correction method, unlike other nonuniformity-correction algorithms, consists of two steps that separate two of the usual problems that affect characterization of matrix detectors, i.e., nonlinearity and the relative variation of the pixels' responsivity across the array. The correction of the nonlinear behavior remains valid for any illumination wavelength employed, as long as the nonlinearity is not due to power dependence of the internal quantum efficiency. This method of correction of nonuniformity permits the immediate calculation of the correction factor for any given power level and for any illuminant that has a known spectral content once the nonuniform behavior has been characterized for a sufficient number of wavelengths. This procedure has a significant advantage compared with other traditional calibration-based methods, which require that a full characterization be carried out for each spectral distribution pattern of the incident optical radiation. The experimental application of this novel method has achieved a 20-fold increase in the uniformity of a CCD array for response levels close to saturation.
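    The separation into two steps can be sketched as follows: first a per-pixel polynomial linearization of the response, then a single flat-field gain map that, thanks to the linearization, remains valid at any power level. The quadratic response model and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_linearization(levels, responses):
    """Step 1 (nonlinearity): per-pixel quadratic fit mapping raw counts to
    relative irradiance. `levels`: (K,) known irradiance levels; `responses`:
    (K, H, W) raw frames recorded at those levels. Returns coeffs (3, H, W)."""
    K, H, W = responses.shape
    R = responses.reshape(K, -1)
    C = np.empty((3, H * W))
    for p in range(H * W):                 # plain per-pixel fit (toy version)
        C[:, p] = np.polyfit(R[:, p], levels, deg=2)
    return C.reshape(3, H, W)

def linearize(frame, C):
    """Apply the per-pixel inverse nonlinearity to one raw frame."""
    return C[0] * frame**2 + C[1] * frame + C[2]

def flat_field_gain(flat_frame, C):
    """Step 2 (responsivity): normalized gain map from one linearized
    uniform-illumination frame; reusable at any level after step 1."""
    lin = linearize(flat_frame, C)
    return lin / lin.mean()

# usage: corrected = linearize(raw_frame, C) / gain
```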

  7. Teaching adolescents with severe disabilities to use the public telephone.

    PubMed

    Test, D W; Spooner, F; Keul, P K; Grossi, T

    1990-04-01

    Two adolescents with severe disabilities participated in a study on training the use of a public telephone to call home. Participants were trained to complete a 17-step task analysis using a training package consisting of total task presentation in conjunction with a four-level prompting procedure (i.e., independent, verbal, verbal + gesture, verbal + guidance). All instruction took place in a public setting (e.g., a shopping mall), with generalization probes taken in two alternative settings (e.g., a movie theater and a convenience store). A multiple probe across individuals design demonstrated that the training package was successful in teaching participants to use the telephone to call home. In addition, newly acquired skills generalized to the two untrained settings. Implications for community-based training are discussed.

  8. Novel synthesis of [11C]GVG (Vigabatrin) for pharmacokinetic studies of addiction treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Y.S.; Studenov, A.R.; Zhang, Z.

    2001-06-10

    We report here a novel synthetic route to prepare the precursor and to efficiently label GVG with C-11. 5-Bromo-3-(carbobenzyloxy)amino-1-pentene was synthesized in five steps from homoserine lactone. This was used in a two-step radiosynthesis, displacement with [11C]cyanide followed by acid hydrolysis, to afford [11C]GVG with high radiochemical yields (>35%, not optimized) and high specific activity (2-5 Ci/µmol). The [11C]cyanide trapping was achieved at -5 °C with a mixture of Kryptofix and K2CO3 without using the conventional aqueous trapping procedure [7]. At this temperature, the excess NH3 from the target that may interfere with the synthesis would not be trapped [8]. This procedure would be advantageous for any moisture-sensitive radiosynthetic steps, as was the case for our displacement reaction. When the conventional aqueous trapping procedure was used, any trace amount of water left, even after prolonged heating, resulted in either no reaction or extremely low yields for the displacement reaction. The entire synthetic procedure should be extendible to the labeling of the pharmacologically active S-form of GVG when using S-homoserine lactone.

  9. Public Participation Procedure in Integrated Transport and Green Infrastructure Planning

    NASA Astrophysics Data System (ADS)

    Finka, Maroš; Ondrejička, Vladimír; Jamečný, Ľubomír; Husár, Milan

    2017-10-01

    The dialogue among decision makers and stakeholders is a crucial part of any decision-making process, particularly in the case of integrated transportation planning and planning of green infrastructure, where a multitude of actors is present. Although the theory of public participation is well developed after several decades of research, there is still a lack of practical guidelines due to the specificity of public participation challenges. The paper presents a model of public participation for integrated transport and green infrastructure planning for the international project TRANSGREEN, covering the area of five European countries: Slovakia, the Czech Republic, Austria, Hungary and Romania. The challenge of the project is to coordinate the efforts of public actors and NGOs in an international environment in oftentimes precarious projects of transport infrastructure building and green infrastructure development. The project aims at developing an environmentally friendly and safe international transport network. The proposed public participation procedure consists of five main steps: spread of information (passive), collection of information (consultation), intermediate discussion, engagement, and partnership (empowerment). The initial spread of information is a process of communicating with the stakeholders, informing and educating them, and it is based on their willingness to be informed. The methods used in this stage are public displays, newsletters or press releases. The second step, consultation, is based on conveying the opinions of stakeholders to the decision makers. Polls, surveys, public hearings or written responses are examples of the multitude of ways to achieve this objective, and the main principle is openness of stakeholders. The third step is intermediate discussion, where all sides are invited to a dialogue using tools such as public meetings, workshops or urban walks. The fourth step is engagement, based on humble negotiation, arbitration and mediation; the collaborative skill needed here is dealing with conflicts. The final step in the procedure is partnership and empowerment, employing methods such as multi-actor decision making, voting or referenda. The leading principle is cooperation. In this ultimate step, the stakeholders become decision makers themselves, and the success factor is continuous evaluation.

  10. Model-based surgical planning and simulation of cranial base surgery.

    PubMed

    Abe, M; Tabuchi, K; Goto, M; Uchino, A

    1998-11-01

    Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.

  11. Advanced Environmentally Resistant Lithium Fluoride for Next-Generation Broadband Observatories

    NASA Astrophysics Data System (ADS)

    Fleming, Brian

    2018-06-01

    Recent advances in the physical vapor deposition of protective fluoride films have raised the far ultraviolet (FUV: 912 – 1600 Angstrom) reflectivity of aluminum-based mirrors closer to the theoretical limit. The greatest gains have come for lithium fluoride protected aluminum, which has the shortest wavelength cutoff of any conventional overcoat. Despite the success of the NASA FUSE mission, the use of LiF-based optics is rare as LiF is hygroscopic and requires handling procedures that can drive risk. With NASA now studying two large mission concepts for astronomy (LUVOIR and HabEx) that mandate throughput down to 1000 Angstroms, the development of LiF-based coatings becomes crucial. We discuss the steps that are being taken to qualify these new enhanced LiF protected aluminum (eLiF) mirror coatings for flight. In addition to quantifying the hygroscopic degradation, we have developed a new method of protecting eLiF with an ultrathin capping layer of a non-hygroscopic material to increase durability. We report on the performance of eLiF-based optics and assess the steps that need to be taken to qualify such coatings for LUVOIR, HabEx, and other FUV-sensitive space missions.

  12. Proton irradiation of [18O]O2: production of [18F]F2 and [18F]F2 + [18F] OF2.

    PubMed

    Bishop, A; Satyamurthy, N; Bida, G; Hendry, G; Phelps, M; Barrio, J R

    1996-04-01

    The production of 18F electrophilic reagents via the 18O(p,n)18F reaction has been investigated in small-volume target bodies made of aluminum, copper, gold-plated copper and nickel, having straight or conical bore shapes. Three irradiation protocols (single-step, two-step and modified two-step) were used for the recovery of the 18F activity. The single-step irradiation protocol was tested in all the target bodies. Based on the single-step performance, aluminum targets were utilized extensively in the investigation of the two-step and modified two-step irradiation protocols. With an 11-MeV cyclotron and using the two-step irradiation protocol, > 1 Ci [18F]F2 was recovered reproducibly from an aluminum target body. Probable radical mechanisms for the formation of OF2 and FONO2 (fluorine nitrate) in the single-step and modified two-step targets are proposed based on the amount of ozone generated and the nitrogen impurity present in the target gases, respectively.

  13. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.
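    The fuzzy-landscape idea (membership that is high "behind" a shadow along the illumination direction and decays with distance and angular deviation) can be caricatured as below; the membership function, decay rate and angular width are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def fuzzy_shadow_landscape(shadow_mask, direction_deg, decay=0.02, ang_sigma=30.0):
    """Toy fuzzy landscape: membership in [0, 1] for lying in the given
    bearing relative to some shadow pixel (brute force; illustrative)."""
    H, W = shadow_mask.shape
    gy, gx = np.mgrid[0:H, 0:W]
    d = np.deg2rad(direction_deg)        # assumed shadow-to-building bearing
    landscape = np.zeros((H, W))
    for y0, x0 in zip(*np.nonzero(shadow_mask)):
        dy, dx = gy - y0, gx - x0
        r = np.hypot(dy, dx)
        # wrapped angular deviation from the target bearing, in degrees
        dev = np.abs(np.angle(np.exp(1j * (np.arctan2(dy, dx) - d))))
        mu = np.exp(-decay * r) * np.exp(-(np.rad2deg(dev) / ang_sigma) ** 2)
        landscape = np.maximum(landscape, mu)
    return landscape
```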

  14. An Instruction Support System for Competency-Based Programs.

    ERIC Educational Resources Information Center

    Singh, Jane M.; And Others

    This report discusses the Pennsylvania State University Instruction Support System (ISS) designed to meet the needs of large classes for competency-based teacher education (CBTE) programs. The ISS seven-step hierarchical developmental procedure is reported to free the instructor for specialized instruction and evaluation by utilizing a…

  15. 24 CFR 92.351 - Affirmative marketing; minority outreach program.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... projects containing 5 or more HOME-assisted housing units. Affirmative marketing steps consist of actions... disability. (The affirmative marketing procedures do not apply to families with Section 8 tenant-based rental housing assistance or families with tenant-based rental assistance provided with HOME funds.) (2) The...

  16. 24 CFR 92.351 - Affirmative marketing; minority outreach program.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... projects containing 5 or more HOME-assisted housing units. Affirmative marketing steps consist of actions... disability. (The affirmative marketing procedures do not apply to families with Section 8 tenant-based rental housing assistance or families with tenant-based rental assistance provided with HOME funds.) (2) The...

  17. Green procedure with a green solvent for fats and oils' determination. Microwave-integrated Soxhlet using limonene followed by microwave Clevenger distillation.

    PubMed

    Virot, Matthieu; Tomao, Valérie; Ginies, Christian; Visinoni, Franco; Chemat, Farid

    2008-07-04

    An original green alternative procedure for the determination of fats and oils in oleaginous seeds is described. Extractions were carried out using a by-product of the citrus industry, namely d-limonene, as extraction solvent instead of hazardous petroleum solvents such as n-hexane. The method proceeds in two steps using microwave energy: first, extraction with a microwave-integrated Soxhlet; second, elimination of the solvent from the medium by microwave Clevenger distillation. Oils extracted from olive seeds were compared with both conventional Soxhlet and microwave-integrated Soxhlet extraction procedures performed with n-hexane in terms of qualitative and quantitative determination. No significant difference was observed between the extracts, allowing us to conclude that the proposed method is effective and valuable.

  18. A Delphi Consensus of the Crucial Steps in Gastric Bypass and Sleeve Gastrectomy Procedures in the Netherlands.

    PubMed

    Kaijser, Mirjam A; van Ramshorst, Gabrielle H; Emous, Marloes; Veeger, Nic J G M; van Wagensveld, Bart A; Pierie, Jean-Pierre E N

    2018-04-09

    Bariatric procedures are technically complex and skill-demanding. In order to standardize the procedures for research and training, a Delphi analysis was performed to reach consensus on the practice of the laparoscopic gastric bypass and sleeve gastrectomy in the Netherlands. After a pre-round identifying all possible steps from literature and expert opinion within our study group, questionnaires were sent to 68 registered Dutch bariatric surgeons, with 73 steps for bypass surgery and 51 steps for sleeve gastrectomy. Statistical analysis was performed to identify steps with and without consensus. This process was repeated to reach consensus on all necessary steps. Thirty-eight participants (56%) responded in the first round and 32 participants (47%) in the second round. After the first Delphi round, 19 steps for gastric bypass (26%) and 14 for sleeve gastrectomy (27%) gained full consensus. After the second round, an additional 10 and 12 sub-steps, respectively, were confirmed as key steps. Thirteen steps in the gastric bypass and seven in the gastric sleeve were deemed advisable. Our expert panel showed a high level of consensus, expressed in a Cronbach's alpha of 0.82 for the gastric bypass and 0.87 for the sleeve gastrectomy. The Delphi consensus defined 29 steps for gastric bypass and 26 for sleeve gastrectomy as crucial for correct performance of these procedures to the standards of our expert panel. These results offer a clear framework for the technical execution of these procedures.
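    For reference, the consensus statistic reported above is the standard Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score); a minimal sketch follows (the data shape in the usage line is hypothetical).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# hypothetical demo: 32 respondents rating 29 candidate key steps on a 1-5 scale
print(cronbach_alpha(np.random.randint(1, 6, size=(32, 29))))
```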

  19. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    PubMed

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For two-step transesterification biodiesel production from sunflower oil, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors in the two-step reaction stage are determined quantitatively. Taking the transesterification intermediate product into account, both the traditional distillation separation process and the improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process of the two-step reaction product has a distinct advantage in energy duty and equipment requirements due to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    PubMed Central

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack–Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463

  2. One- and Two-Equation Models to Simulate Ion Transport in Charged Porous Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2018-01-19

    Energy storage in porous capacitor materials, capacitive deionization (CDI) for water desalination, capacitive energy generation, geophysical applications, and removal of heavy ions from wastewater streams are some examples of processes where understanding of ionic transport processes in charged porous media is very important. In this work, one- and two-equation models are derived to simulate ionic transport processes in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations for multi-ionic systems without any further assumptions, such as thin electrical double layers or Donnan equilibrium. A comparison between both models is presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. An approximate analytical procedure is proposed to solve the closure problems. Numerical and theoretical calculations show that the approximate analytical procedure yields adequate solutions. Lastly, a theoretical analysis shows that the value of interphase pseudo-transport coefficients determines which model to use.

  4. Learn, see, practice, prove, do, maintain: an evidence-based pedagogical framework for procedural skill training in medicine.

    PubMed

    Sawyer, Taylor; White, Marjorie; Zaveri, Pavan; Chang, Todd; Ades, Anne; French, Heather; Anderson, JoDee; Auerbach, Marc; Johnston, Lindsay; Kessler, David

    2015-08-01

    Acquisition of competency in procedural skills is a fundamental goal of medical training. In this Perspective, the authors propose an evidence-based pedagogical framework for procedural skill training. The framework was developed based on a review of the literature using a critical synthesis approach and builds on earlier models of procedural skill training in medicine. The authors begin by describing the fundamentals of procedural skill development. Then, a six-step pedagogical framework for procedural skills training is presented: Learn, See, Practice, Prove, Do, and Maintain. In this framework, procedural skill training begins with the learner acquiring requisite cognitive knowledge through didactic education (Learn) and observation of the procedure (See). The learner then progresses to the stage of psychomotor skill acquisition and is allowed to deliberately practice the procedure on a simulator (Practice). Simulation-based mastery learning is employed to allow the trainee to prove competency prior to performing the procedure on a patient (Prove). Once competency is demonstrated on a simulator, the trainee is allowed to perform the procedure on patients with direct supervision, until he or she can be entrusted to perform the procedure independently (Do). Maintenance of the skill is ensured through continued clinical practice, supplemented by simulation-based training as needed (Maintain). Evidence in support of each component of the framework is presented. Implementation of the proposed framework presents a paradigm shift in procedural skill training. However, the authors believe that adoption of the framework will improve procedural skill training and patient safety.

  5. Reliability of sensor-based real-time workflow recognition in laparoscopic cholecystectomy.

    PubMed

    Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Reiser, Silvano; Vogel, Thomas; Wilhelm, Dirk; Feussner, Hubertus

    2014-11-01

    Laparoscopic cholecystectomy is a very common minimally invasive surgical procedure that may be improved by autonomous or cooperative assistance support systems. Model-based surgery with a precise definition of distinct procedural tasks (PT) of the operation was implemented and tested to depict and analyze the process of this procedure. Reliability of real-time workflow recognition in laparoscopic cholecystectomy ([Formula: see text] cases) was evaluated by continuous sensor-based data acquisition. Ten PTs were defined including begin/end preparation of Calot's triangle, clipping/cutting cystic artery and duct, begin/end gallbladder dissection, begin/end hemostasis, gallbladder removal, and end of operation. Data acquisition was achieved with continuous instrument detection, room/table light status, intra-abdominal pressure, table tilt, irrigation/aspiration volume and coagulation/cutting current application. Two independent observers recorded start and endpoint of each step by analysis of the sensor data. The data were cross-checked with laparoscopic video recordings serving as gold standard for PT identification. Bland-Altman analysis revealed for 95% of cases a difference of annotation results within the limits of agreement ranging from -309 s (PT 7) to +368 s (PT 5). Laparoscopic video and sensor data matched to a greater or lesser extent within the different procedural tasks. In the majority of cases, the observer results exceeded those obtained from the laparoscopic video. Empirical knowledge was required to detect phase transit. A set of sensors used to monitor laparoscopic cholecystectomy procedures was sufficient to enable expert observers to reliably identify each PT. In the future, computer systems may automate the task identification process provided a more robust data inflow is available.
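    The Bland-Altman comparison quoted above is the standard bias plus/minus 1.96 SD construction on paired differences; a minimal sketch follows (the inputs, observer vs. video-derived task-boundary times, are hypothetical).

```python
import numpy as np

def bland_altman_limits(t_observer, t_reference):
    """Bias and 95% limits of agreement between two paired time series (s)."""
    d = np.asarray(t_observer, float) - np.asarray(t_reference, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical annotation times (s) for the same task boundaries
bias, lo, hi = bland_altman_limits([12.0, 40.5, 95.0], [10.0, 44.0, 90.0])
print(f"bias {bias:+.1f} s, limits of agreement [{lo:+.1f}, {hi:+.1f}] s")
```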

  6. The selection of adhesive systems for resin-based luting agents.

    PubMed

    Carville, Rebecca; Quinn, Frank

    2008-01-01

    The use of resin-based luting agents is ever expanding with the development of adhesive dentistry. A multitude of different adhesive systems are used with resin-based luting agents, and new products are introduced to the market frequently. Traditional adhesives generally required a multiple step bonding procedure prior to cementing with active resin-based luting materials; however, combined agents offer a simple application procedure. Self-etching 'all-in-one' systems claim that there is no need for the use of a separate adhesive process. The following review addresses the advantages and disadvantages of the available adhesive systems used with resin-based luting agents.

  7. Investigations for Thermal and Electrical Conductivity of ABS-Graphene Blended Prototypes

    PubMed Central

    Singh, Rupinder; Sandhu, Gurleen S.; Penna, Rosa; Farina, Ilenia

    2017-01-01

    Thermoplastic materials such as acrylonitrile-butadiene-styrene (ABS) and Nylon are widely used in three-dimensional printing of functional/non-functional prototypes. Usually these polymer-based prototypes lack thermal and electrical conductivity. Graphene (Gr) has attracted considerable interest in recent years due to its intrinsic mechanical, thermal, and electrical properties. This paper presents a step-by-step procedure (as a case study) for the development of an in-house ABS-Gr blended composite feedstock filament for fused deposition modelling (FDM) applications. The feedstock filament has been prepared by two different methods (mechanical and chemical mixing). For mechanical mixing, a twin screw extrusion (TSE) process has been used; for chemical mixing, the composite of Gr in an ABS matrix has been prepared by chemical dissolution, followed by mechanical blending through TSE. Finally, the electrical and thermal conductivity of functional prototypes prepared from the composite feedstock filaments have been optimized. PMID:28773244

  8. Using cognitive task analysis to create a teaching protocol for bovine dystocia.

    PubMed

    Read, Emma K; Baillie, Sarah

    2013-01-01

    When learning skilled techniques and procedures, students face many challenges. Learning is easier when detailed instructions are available, but experts often find it difficult to articulate all of the steps involved in a task or relate to the learner as a novice. This problem is further compounded when the technique is internal and unsighted (e.g., obstetrical procedures). Using expert bovine practitioners and a life-size model cow and calf, the steps and decision making involved in performing correction of two different dystocia presentations (anterior leg back and breech) were deconstructed using cognitive task analysis (CTA). Video cameras were positioned to capture movement inside and outside the cow model while the experts were asked to first perform the technique as they would in a real situation and then perform the procedure again as if articulating the steps to a novice learner. The audio segments were transcribed and, together with the video components, analyzed to create a list of steps for each expert. Consensus was achieved between experts during individual interviews followed by a group discussion. A "gold standard" list or teaching protocol was created for each malpresentation. CTA was useful in defining the technical and cognitive steps required to both perform and teach the tasks effectively. Differences between experts highlight the need for consensus before teaching the skill. In addition, the study identified several different, yet effective, techniques and provided information that could allow experts to consider other approaches they might use when their own technique fails.

  9. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique equilibrium or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969
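    A toy, hand-sized analogue of such sign-based partitioning (not the paper's Brouwer-degree construction) can be run symbolically: for a scalar system whose steady states are the positive roots of a parameterized cubic, the sign of the discriminant already separates mono- from multistationary parameter regions.

```python
import sympy as sp

# Toy system dx/dt = k1 + k2*x**2/(1 + x**2) - k3*x; steady states are the
# positive roots of p(x) = (k1 - k3*x)*(1 + x**2) + k2*x**2 (illustrative).
x, k1, k2, k3 = sp.symbols('x k1 k2 k3', positive=True)
p = sp.expand((k1 - k3 * x) * (1 + x**2) + k2 * x**2)
disc = sp.discriminant(p, x)          # its sign partitions the parameter space

def positive_equilibria(vals):
    """Count real positive roots of p at a numeric parameter point."""
    roots = sp.Poly(p.subs(vals), x).nroots()
    return sum(1 for r in roots if abs(sp.im(r)) < 1e-9 and sp.re(r) > 0)

for vals in ({k1: 1, k2: 1, k3: 1},                    # assumed sample point
             {k1: sp.Rational(1, 10), k2: 2, k3: 1}):  # assumed sample point
    print(vals, 'sign(disc) =', sp.sign(disc.subs(vals)),
          '| positive equilibria:', positive_equilibria(vals))
```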

  10. Translation and cultural adaptation of the Shame and Stigma Scale (SSS) into Portuguese (Brazil) to evaluate patients with head and neck cancer.

    PubMed

    Pirola, William Eduardo; Paiva, Bianca Sakamoto Ribeiro; Barroso, Eliane Marçon; Kissane, David W; Serrano, Claudia Valéria Maseti Pimenta; Paiva, Carlos Eduardo

    Head and neck cancer (HNC) is the sixth leading cause of death from cancer worldwide, and its treatment may involve surgery, chemotherapy and/or radiation therapy. The surgical procedure may cause mutilating sequelae that can alter patient self-image. Thus, head and neck cancer is often associated with negative stigma and decreased quality of life. Few studies assess the social stigma and shame perceived by patients with head and neck cancer. The objective was to perform the translation and cultural adaptation of the Shame and Stigma Scale (SSS) into Portuguese (Brazil). Two independent translations (English into Portuguese) were carried out by two professionals fluent in the English language. After the synthesis of the translations, two independent back-translations (from Portuguese into English) were performed by two translators whose native language is English. All translations were critically assessed by a committee of experts consisting of five members. A sample of 15 patients answered the Brazilian Portuguese version of the SSS to carry out the pretest. At this step, the patients were able to suggest modifications and evaluate the understanding of the items. There was no need to change the scale after this step. Based on the previous steps, we obtained the Portuguese (Brazil) version of the SSS, which was called "Escala de Vergonha e Estigma". The Portuguese (Brazil) version of the SSS was shown to be adequate for application to the population with HNC; its psychometric properties will be evaluated in subsequent steps. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  11. Bimetallic iron and cobalt incorporated MFI/MCM-41 composite and its catalytic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Baoshan, E-mail: bsli@mail.buct.edu.cn; Xu, Junqing; Li, Xiao

    2012-05-15

    Graphical abstract: The formation of the FeCo-MFI/MCM-41 composite is based on two steps: the first step synthesizes the MFI-type proto-zeolite units under hydrothermal conditions; the second step assembles these zeolite fragments together with new silica and a heteroatom source on the CTAB surfactant micelles to synthesize the mesoporous product with hexagonal structure. Highlights: Bimetallic iron- and cobalt-incorporated MFI/MCM-41 composite was prepared using a templating method. The FeCo-MFI/MCM-41 composite simultaneously possessed two kinds of meso- and micro-porous structures. Iron and cobalt ions incorporated into the silica framework with tetrahedral coordination. -- Abstract: The MFI/MCM-41 composite material with bimetallic Fe and Co incorporation was prepared using a templating method via a two-step hydrothermal crystallization procedure. The obtained products were characterized by a series of techniques including powder X-ray diffraction, N2 sorption, transmission electron microscopy, scanning electron microscopy, H2 temperature-programmed reduction, thermal analyses, and X-ray absorption fine structure spectroscopy at the Fe and Co K-edges. The catalytic properties of the products were investigated by residual oil hydrocracking reactions. Characterization results showed that the FeCo-MFI/MCM-41 composite simultaneously possessed two kinds of stable meso- and micro-porous structures. Iron and cobalt ions were incorporated into the silicon framework, which was confirmed by H2 temperature-programmed reduction and X-ray absorption fine structure spectroscopy. This composite presented excellent activity in hydrocracking of residual oil, superior to the pure silicalite-1/MCM-41 materials.

  12. Investigation of solvent-free MALDI-TOFMS sample preparation methods for the analysis of organometallic and coordination compounds.

    PubMed

    Hughes, Laura; Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2009-01-15

    An investigation of various solvent-free matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) sample preparation methods for the characterization of organometallic and coordination compounds is described. Such methods are desirable for insoluble materials, compounds that are only soluble in disadvantageous solvents, or complexes that dissociate in solution, all of which present a major "difficulty" to most mass spectrometry techniques. First-row transition metal acetylacetonate complexes, which have been characterized previously by solution preparation MALDI-TOFMS, were used to evaluate the various solvent-free procedures. These procedures comprise two distinct steps: the first being the efficient "solids mixing" (the mixing of sample and matrix), and the second being the effective transfer of the sample/matrix mixture to the MALDI target plate. This investigation shows that vortex mixing is the most efficient first step and that smearing using a microspatula is the most effective second step. In addition, the second step is shown to be much more critical than the first step in obtaining high-quality data. Case studies of truly insoluble materials highlight the importance of these techniques for the wider chemistry community.

  13. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features a time-step rejection and quarantine mechanisms, a modified Newton method with a predictor and dense output techniques to compute solution at off-step points.
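    A stripped-down sketch of the h-adaptive part (BDF1 with a step-doubling error estimate and the standard stepsize controller) is given below; the p-adaptive order selection and the dense-output machinery of the paper are omitted, and the controller constants are common textbook choices rather than the authors'.

```python
import numpy as np

def bdf1_step(f, dfdy, t1, y0, h, tol=1e-12, itmax=20):
    """One backward-Euler (BDF1) step, y = y0 + h*f(t1, y), by scalar Newton."""
    y = y0
    for _ in range(itmax):
        dy = -(y - y0 - h * f(t1, y)) / (1.0 - h * dfdy(t1, y))
        y += dy
        if abs(dy) < tol:
            break
    return y

def adaptive_bdf1(f, dfdy, t0, y0, t_end, h=1e-2, rtol=1e-5):
    """h-adaptivity sketch: local error from step doubling; for a method of
    order p = 1 the controller is h_new = h*(0.9*tol/err)**(1/(p+1))."""
    t, y, ts, ys = t0, y0, [t0], [y0]
    while t < t_end:
        h = min(h, t_end - t)
        y_full = bdf1_step(f, dfdy, t + h, y, h)
        y_half = bdf1_step(f, dfdy, t + h / 2, y, h / 2)
        y_two = bdf1_step(f, dfdy, t + h, y_half, h / 2)
        err = abs(y_full - y_two)                  # ~ C*h^2 local error estimate
        scale = rtol * max(1.0, abs(y_two))
        if err <= scale:                           # accept the finer solution
            t, y = t + h, y_two
            ts.append(t); ys.append(y)
        h *= min(5.0, max(0.2, 0.9 * (scale / max(err, 1e-16)) ** 0.5))
    return np.array(ts), np.array(ys)

# mildly stiff test problem: y' = -50*(y - cos(t))
ts, ys = adaptive_bdf1(lambda t, y: -50.0 * (y - np.cos(t)),
                       lambda t, y: -50.0, 0.0, 1.0, 2.0)
print(len(ts), 'steps, y(2) ~', ys[-1])
```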

  14. A study on the effect of varying sequence of lab performance skills on lab performance of high school physics students

    NASA Astrophysics Data System (ADS)

    Bournia-Petrou, Ethel A.

    The main goal of this investigation was to study how student rank in class, student gender and skill sequence affect high school students' performance on the lab skills involved in a laboratory-based inquiry task in physics. The focus of the investigation was the effect of skill sequence as determined by the particular task. The skills considered were: Hypothesis, Procedure, Planning, Data, Graph, Calculations and Conclusion. Three physics lab tasks based on the simple pendulum concept were administered to 282 Regents physics high school students. The reliability of the designed tasks was high. Student performance was evaluated on individual student written responses and a scoring rubric. The tasks had high discrimination power and were of moderate difficulty (65%). It was found that student performance was weak on Conclusion (42%), Hypothesis (48%), and Procedure (51%), where the numbers in parentheses represent the mean as a percentage of the maximum possible score. Student performance was strong on Calculations (91%), Data (82%), Graph (74%) and Plan (68%). Out of all seven skills, Procedure had the strongest correlation (.73) with the overall task performance. Correlation analysis revealed some strong relationships among the seven skills, which were grouped in two distinct clusters: Hypothesis, Procedure and Plan belong to one, and Data, Graph, Calculations, and Conclusion belong to the other. This distinction may indicate different mental processes at play within each skill cluster. The effect of student rank was not statistically significant according to the MANOVA results, due to the large variation of rank levels among the participating schools. The effect of gender was significant on the entire test because of performance differences on Calculations and Graph, where male students performed better than female students. Skill sequence had a significant effect on the skills of Procedure, Plan, Data and Conclusion. Students are rather weak in proposing a sensible, detailed procedure for the inquiry task which involves the "novel" concept. However, they perform better on Procedure and Plan if the "novel" task is not preceded by another which explicitly offers step-by-step procedure instructions. It was concluded that the format of detailed, structured instructions often adopted by many commercial and school-developed lab books and conventional lab practices fails to prepare students to propose a successful, detailed procedure when faced with a slightly "novel", lab-based inquiry task. Student performance on Data collection was higher in the tasks that involved the more familiar experimental arrangement than in the tasks using the slightly "novel" equipment. Student performance on Conclusion was better in tasks where they had to collect the Data themselves than in tasks where all relevant Data information was given to them.

  15. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations, with surface effects treated through diffuse-interface models [27]. Numerical simulation is a practical means of evaluating this model. The present paper investigates the solution of the tumor-growth model with meshless techniques. Meshless methods are applied based on the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to partial differential equations in high dimension, using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To handle the time variable, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is an explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques for solving the two- and three-dimensional tumor-growth equations.
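    As a minimal flavor of the spatial part, a Kansa-type multiquadric collocation solver for the steady toy problem u''(x) = f(x), u(0) = u(1) = 0 is sketched below (node count and shape parameter are arbitrary choices); in the paper, this kind of spatial discretization is coupled with Crank-Nicolson or explicit Runge-Kutta stepping in time.

```python
import numpy as np

def solve_poisson_mq(f, n=40, c=0.1):
    """Multiquadric RBF collocation for u'' = f on [0, 1], u(0) = u(1) = 0."""
    xs = np.linspace(0.0, 1.0, n)
    X, XJ = np.meshgrid(xs, xs, indexing='ij')
    r2 = (X - XJ) ** 2
    phi = np.sqrt(r2 + c ** 2)                 # MQ basis phi(|x - xj|)
    d2phi = c ** 2 / (r2 + c ** 2) ** 1.5      # its second x-derivative
    A = d2phi.copy()
    A[0, :], A[-1, :] = phi[0, :], phi[-1, :]  # boundary rows enforce u = 0
    b = f(xs); b[0] = b[-1] = 0.0
    lam = np.linalg.solve(A, b)                # expansion coefficients
    return xs, phi @ lam                       # u at the collocation nodes

xs, u = solve_poisson_mq(lambda x: -np.pi ** 2 * np.sin(np.pi * x))
print('max error:', np.max(np.abs(u - np.sin(np.pi * xs))))  # exact: sin(pi*x)
```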

  16. Genetic Interaction Mapping in Schizosaccharomyces pombe Using the Pombe Epistasis Mapper (PEM) System and a ROTOR HDA Colony Replicating Robot in a 1536 Array Format.

    PubMed

    Roguev, Assen; Xu, Jiewei; Krogan, Nevan

    2018-02-01

    This protocol describes an optimized high-throughput procedure for generating double deletion mutants in Schizosaccharomyces pombe using the colony replicating robot ROTOR HDA and the PEM (pombe epistasis mapper) system. The method is based on generating high-density colony arrays (1536 colonies per agar plate) and passaging them through a series of antidiploid and mating-type selection (ADS-MTS) and double-mutant selection (DMS) steps. Detailed program parameters for each individual replication step are provided. Using this procedure, batches of 25 or more screens can be routinely performed. © 2018 Cold Spring Harbor Laboratory Press.

  17. Highly Efficient Production of Soluble Proteins from Insoluble Inclusion Bodies by a Two-Step-Denaturing and Refolding Method

    PubMed Central

    Zhang, Yan; Zhang, Ting; Feng, Yanye; Lu, Xiuxiu; Lan, Wenxian; Wang, Jufang; Wu, Houming; Cao, Chunyang; Wang, Xiaoning

    2011-01-01

    The production of recombinant proteins on a large scale is important for protein functional and structural studies, particularly by using Escherichia coli over-expression systems; however, approximately 70% of recombinant proteins are over-expressed as insoluble inclusion bodies. Here we present an efficient method for generating soluble proteins from inclusion bodies by using two steps of denaturation and one step of refolding. We first demonstrated the advantages of this method over a conventional procedure with one denaturation step and one refolding step using three proteins with different folding properties. The refolded proteins were found to be active using in vitro tests and a bioassay. We then tested the general applicability of this method by analyzing 88 proteins from human and other organisms, all of which were expressed as inclusion bodies. We found that about 76% of these proteins were refolded with an average of >75% yield of soluble proteins. This “two-step-denaturing and refolding” (2DR) method is simple, highly efficient and generally applicable; it can be utilized to obtain active recombinant proteins for both basic research and industrial purposes. PMID:21829569

  18. A Cooperative Traffic Control of Vehicle–Intersection (CTCVI) for the Reduction of Traffic Delays and Fuel Consumption

    PubMed Central

    Li, Jinjian; Dridi, Mahjoub; El-Moudni, Abdellah

    2016-01-01

    The problem of reducing traffic delays and decreasing fuel consumption simultaneously in a network of intersections without traffic lights is solved by a cooperative traffic control algorithm, where the cooperation is executed based on Vehicle-to-Infrastructure (V2I) connections. The solution comprises two main steps. The first step concerns the itinerary: which intersections each vehicle passes to reach its destination from its starting point. Based on the principle of minimal travel distance, each vehicle chooses its itinerary dynamically according to the traffic loads at the adjacent intersections. The second step relates to the following proposed cooperative procedures, which allow vehicles to pass through each intersection rapidly and economically: on the one hand, according to the real-time information sent by vehicles via V2I at the edge of the communication zone, each intersection applies Dynamic Programming (DP) to cooperatively optimize the vehicle passing sequence with minimal traffic delays, so that the vehicles may rapidly pass the intersection under the relevant safety constraints; on the other hand, after receiving this sequence, each vehicle finds the speed profile with minimal fuel consumption by exhaustive search. The simulation results reveal that the proposed algorithm can significantly reduce both travel delays and fuel consumption under different traffic volumes, compared with previously reported methods. PMID:27999333
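
    The intersection step lends itself to a small worked example. The sketch below is an assumption-laden simplification, not the paper's algorithm: dynamic programming over the passing order of vehicles from two conflicting approaches, with fixed safety headways standing in for the safety constraints, and the earliest clearing time of the queue as the objective in place of the paper's exact delay criterion and V2I messaging.

        import functools

        H_SAME, H_CROSS = 1.0, 2.5     # assumed safety headways in seconds

        def earliest_clearing(arr_a, arr_b):
            """arr_a, arr_b: sorted arrival times (tuples) at the conflict
            zone for two conflicting approaches A and B."""
            @functools.lru_cache(maxsize=None)
            def dp(i, j, last):
                # earliest instant at which the most recent of the first
                # i+j crossings can occur, the last one from approach 'last'
                arrive = arr_a[i - 1] if last == "A" else arr_b[j - 1]
                pi, pj = (i - 1, j) if last == "A" else (i, j - 1)
                if pi == 0 and pj == 0:
                    return arrive
                prev = min(dp(pi, pj, p) + (H_SAME if p == last else H_CROSS)
                           for p in ("A", "B")
                           if (p == "A" and pi > 0) or (p == "B" and pj > 0))
                return max(arrive, prev)

            nA, nB = len(arr_a), len(arr_b)
            return min(dp(nA, nB, p) for p in ("A", "B")
                       if (p == "A" and nA > 0) or (p == "B" and nB > 0))

        # e.g. earliest_clearing((0.0, 2.0, 3.0), (1.0, 4.0))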

  19. Preparation of visible-light-activated metal complexes and their use in photoredox/nickel dual catalysis.

    PubMed

    Kelly, Christopher B; Patel, Niki R; Primer, David N; Jouffroy, Matthieu; Tellis, John C; Molander, Gary A

    2017-03-01

    Visible-light-activated photoredox catalysts provide synthetic chemists with the unprecedented capability to harness reactive radicals through discrete, single-electron transfer (SET) events. This protocol describes the synthesis of two transition metal complexes, [Ir{dF(CF3)2ppy}2(bpy)]PF6 (1a) and [Ru(bpy)3](PF6)2 (2a), that are activated by visible light. These photoredox catalysts are SET agents that can be used to facilitate transformations ranging from proton-coupled electron-transfer-mediated cyclizations to C-C bond constructions, dehalogenations, and H-atom abstractions. These photocatalysts have been used in the synthesis of medicinally relevant compounds for drug discovery, as well as the degradation of biological polymers to access fine chemicals. These catalysts are prepared from IrCl3 and RuCl3, respectively, in three chemical steps. These steps can be described as a series of two ligand modifications followed by an anion metathesis. Using the cost-effective, scalable procedures described here, the ruthenium-based photocatalyst 2a can be synthesized in a 78% overall yield (∼8.1 g), and the iridium-based photocatalyst 1a can be prepared in a 56% overall yield (∼4.4 g). The total time necessary for the complete protocols ranges from ∼2 d for 2a to 5-7 d for 1a. Procedures for applying each catalyst in representative photoredox/Ni cross-couplings to form Csp3-Csp2 bonds using the appropriate radical precursor (organotrifluoroborates with 1a and bis(catecholato)alkylsilicates with 2a) are described. In addition, more traditional photoredox-mediated transformations are included as diagnostic tests for catalytic activity.

  20. Sliding to predict: vision-based beating heart motion estimation by modeling temporal interactions.

    PubMed

    Aviles-Rivero, Angelica I; Alsaleh, Samar M; Casals, Alicia

    2018-03-01

    Technical advancements have become part of modern medical solutions, as they promote better surgical alternatives that benefit patients. Particularly with cardiovascular surgeries, robotic surgical systems enable surgeons to perform delicate procedures on a beating heart, avoiding the complications of cardiac arrest. This advantage comes at the price of having to deal with a dynamic target, which presents technical challenges for the surgical system. In this work, we propose a solution for cardiac motion estimation. Our estimation approach uses a variational framework that guarantees preservation of the complex anatomy of the heart. An advantage of our approach is that it takes into account different disturbances, such as specular reflections and occlusion events. This is achieved by performing a preprocessing step that eliminates the specular highlights and a prediction step, based on a conditional restricted Boltzmann machine, that recovers missing information caused by partial occlusions. We carried out exhaustive experiments on two datasets, one from a phantom and the other from an in vivo procedure. The results show that our visual approach reaches average minima on the order of magnitude of [Formula: see text] while preserving the heart's anatomical structure and providing stable values for the Jacobian determinant, ranging from 0.917 to 1.015. We also show that our specular elimination approach reaches an accuracy of 99% compared to a ground truth. In terms of prediction, our approach compared favorably against two well-known predictors, NARX and EKF, giving the lowest average RMSE of 0.071. Our approach avoids the risks of using mechanical stabilizers and can also be effective for acquiring the motion of organs other than the heart, such as the lung or other deformable objects.

  1. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
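
    The model structure described above is easy to simulate. The sketch below generates one particle's travel as alternating gamma-distributed step lengths and rest periods, with deposition elevations drawn from a truncated Gaussian; every numerical parameter, including the toy dependence of the gamma laws on bed elevation, is a placeholder rather than a value fitted in the study.

        import numpy as np

        rng = np.random.default_rng(7)

        def truncated_gaussian(mu, sigma, lo, hi):
            while True:                               # plain rejection sampling
                z = rng.normal(mu, sigma)
                if lo <= z <= hi:
                    return z

        T, x, t = 3600.0, 0.0, 0.0                    # horizon (s), travel (m), clock (s)
        while t < T:
            elev = truncated_gaussian(0.0, 0.3, -1.0, 1.0)   # deposition elevation (m)
            # In the study the gamma parameters vary with bed elevation; a toy
            # linear dependence on elev stands in for those fitted functions.
            x += rng.gamma(2.0, 0.15 * (1.2 + elev))         # step length (m)
            t += rng.gamma(1.5, 60.0 * (1.2 - elev))         # rest period (s)

        print(f"displacement after {T:.0f} s: {x:.2f} m")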

  2. Direct optical detection of protein-ligand interactions.

    PubMed

    Gesellchen, Frank; Zimmermann, Bastian; Herberg, Friedrich W

    2005-01-01

    Direct optical detection provides an excellent means to investigate interactions of molecules in biological systems. The dynamic equilibria inherent to these systems can be described in greater detail by recording the kinetics of a biomolecular interaction. Optical biosensors allow direct detection of interaction patterns without the need for labeling. An overview covering several commercially available biosensors is given, with a focus on instruments based on surface plasmon resonance (SPR) and reflectometric interference spectroscopy (RIFS). Potential assay formats and experimental design, appropriate controls, and calibration procedures, especially when handling low molecular weight substances, are discussed. The single steps of an interaction analysis combined with practical tips for evaluation, data processing, and interpretation of kinetic data are described in detail. In a practical example, a step-by-step procedure for the analysis of a low molecular weight compound interaction with serum protein, determined on a commercial SPR sensor, is presented.

  3. European type-approval test procedure for evaporative emissions from passenger cars against real-world mobility data from two Italian provinces.

    PubMed

    Martini, Giorgio; Paffumi, Elena; De Gennaro, Michele; Mellios, Giorgos

    2014-07-15

    This paper presents an evaluation of the European type-approval test procedure for evaporative emissions from passenger cars based on real-world mobility data. The study relies on two large databases of driving patterns from conventional fuel vehicles collected by means of on-board GPS systems in the Italian provinces of Modena and Firenze. Approximately 28,000 vehicles were monitored, corresponding to approximately 36 million kilometres over a period of one month. The driving pattern of each vehicle was processed to derive the relation between trip length and parking duration, and the rate of occurrence of parking events against multiple evaporative cycles, defined on the basis of the type-approval test procedure as 12-hour diurnal time windows. These results are used as input for an emission simulation model, which calculates the total evaporative emissions given the characteristics of the evaporative emission control system of the vehicle and the ambient temperature conditions. The results suggest that the evaporative emission control system, fitted to the vehicles from Euro 3 step and optimised for the current type-approval test procedure, could not efficiently work under real-world conditions, resulting in evaporative emissions well above the type-approval limit, especially for small size vehicles and warm climate conditions. This calls for a revision of the type-approval test procedure in order to address real-world evaporative emissions. Copyright © 2014. Published by Elsevier B.V.

  4. Short bowel mucosal morphology, proliferation and inflammation at first and repeat STEP procedures.

    PubMed

    Mutanen, Annika; Barrett, Meredith; Feng, Yongjia; Lohi, Jouko; Rabah, Raja; Teitelbaum, Daniel H; Pakarinen, Mikko P

    2018-04-17

    Although serial transverse enteroplasty (STEP) improves the function of dilated short bowel, a significant proportion of patients require repeat surgery. To address the underlying reasons for unsuccessful STEP, we compared small intestinal mucosal characteristics between initial and repeat STEP procedures in children with short bowel syndrome (SBS). Fifteen SBS children, who underwent 13 first and 7 repeat STEP procedures with full-thickness small bowel samples at median age 1.5 years (IQR 0.7-3.7), were included. The specimens were analyzed histologically for mucosal morphology, inflammation and muscular thickness. Mucosal proliferation and apoptosis were analyzed with MIB1 and TUNEL immunohistochemistry. Median small bowel length increased 42% by initial STEP and 13% by repeat STEP (p=0.05), while enteral caloric intake increased from 6% to 36% (p=0.07) during the 14 (12-42) months between the procedures. Abnormal mucosal inflammation was frequently observed both at initial (69%) and at additional STEP (86%, p=0.52) surgery. Villus height, crypt depth, enterocyte proliferation and apoptosis as well as muscular thickness were comparable at first and repeat STEP (p>0.05 for all). Patients who required repeat STEP tended to be younger (p=0.057), with fewer apoptotic crypt cells (p=0.031) at first STEP. Absence of the ileocecal valve was associated with an increased intraepithelial leukocyte count and a reduced crypt cell proliferation index (p<0.05 for both). No adaptive mucosal hyperplasia or muscular alterations occurred between first and repeat STEP. Persistent inflammation and lack of mucosal growth may contribute to continuing bowel dysfunction in SBS children who require a repeat STEP procedure, especially after removal of the ileocecal valve. Level IV, retrospective study. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Rapid in vitro labeling procedures for two-dimensional gel fingerprinting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y.F.; Fowlks, E.R.

    1982-01-15

    Improvements of existing in vitro procedures for labeling RNA radioactively, and modifications of the two-dimensional polyacrylamide gel electrophoresis system for making RNA fingerprints, are described. The improvements are: (a) inactivation of phosphatase with nitric acid at pH 2.0 eliminates the phenol-chloroform extraction step during 5'-end labeling with polynucleotide kinase and (γ-32P)ATP; (b) ZnSO4 inactivation of RNase T1 results in a highly efficient procedure for 3'-end labeling with T4 ligase and (5'-32P)pCp; and (c) a rapid 4-min procedure for variable quantities of 125I and RNA yields qualitative and quantitative samples for high-molecular-weight RNA fingerprinting. These in vitro procedures thus become rapid and reproducible when combined with two-dimensional gel electrophoresis, which simultaneously eliminates labeled impurities. Each labeling procedure is compared using tobacco mosaic virus, Brome mosaic virus, and polio RNA. A series of Ap-rich oligonucleotides was discovered in the inner genome of Brome mosaic virus RNA-3.

  6. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    PubMed Central

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step for mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity and can interact with phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich for different types of phosphopeptides. The peptide-to-beads ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides that are unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides that are unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enriching with two rounds of TiO2 from the supernatant after IMAC enrichment, or further enriching with two rounds of IMAC from the supernatant TiO2 enrichment does not fully recover the phosphopeptides that are not identified with the corresponding multi-step enrichment. PMID:26237447

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melo, Ronaldo P. de; Colégio Militar do Recife, Exército Brasileiro, Recife PE 50730-120; Oliveira, Nathalia Talita C.

    A novel procedure based on a two-step method was developed to obtain β-Ga2O3 nanowires by the chemical vapor deposition (CVD) method. The first step consists of growing gallium micro-spheres inside a metal-organic chemical vapor deposition environment, using an organometallic precursor; nanoscale spheres covering the microspheres were obtained. The second step involves the CVD oxidization of the gallium micro-spheres, which allows the formation of β-Ga2O3 nanowires on the micro-sphere surface, the final result being a nanostructure mimicking nature's sea urchin morphology. The grown nanomaterial is characterized by several techniques, including X-ray diffraction, scanning electron microscopy, energy-dispersive X-ray, transmission electron microscopy, and photoluminescence. A discussion of the growth mechanism and the optical properties of the β-Ga2O3 material is presented, considering its unknown true bandgap value (reported values extend from 4.4 to 5.68 eV). As an application, the scattering properties of the nanomaterial are exploited to demonstrate random laser emission (around 570 nm) when it is permeated with a laser dye liquid solution.

  8. Teaching People and Machines to Enhance Images

    NASA Astrophysics Data System (ADS)

    Berthouzoz, Floraine Sara Martianne

    Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. 
They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious, especially for procedures that involve many steps. While image manipulation programs provide basic macro authoring tools that allow users to record and then replay a sequence of operations, these macros are very brittle and cannot adapt to new images. We present a more comprehensive approach for generating content-adaptive macros that can automatically transfer operations to new target images. To create these macros, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to adapt the parameters of each operation to the new image content. We show that our framework is able to learn a large class of the most commonly-used manipulations using as few as 20 training demonstrations. Our content-adaptive macros allow users to transfer photo manipulation procedures with a single button click and thereby significantly simplify repetitive procedures.

  9. Prevalidation in pharmaceutical analysis. Part I. Fundamentals and critical discussion.

    PubMed

    Grdinić, Vladimir; Vuković, Jadranka

    2004-05-28

    A complete prevalidation, as a basic prevalidation strategy for the quality control and standardization of analytical procedures, was inaugurated. A fast and simple prevalidation methodology, based on the mathematical/statistical evaluation of a reduced number of experiments (N ≤ 24), was elaborated, and guidelines as well as algorithms are given in detail. The strategy was produced for pharmaceutical applications and is dedicated to the preliminary evaluation of analytical methods for which a linear calibration model, which occurs very often in practice, is the most appropriate fit to the experimental data. The requirements presented in this paper should therefore help the analyst to design and perform the minimum number of prevalidation experiments needed to obtain all the information required to evaluate and demonstrate the reliability of the analytical procedure. The complete prevalidation process includes characterization of analytical groups, checking of two limiting groups, testing of data homogeneity, establishment of analytical functions, recognition of outliers, evaluation of limiting values and extraction of prevalidation parameters. Moreover, a system of diagnosis for each particular prevalidation step is suggested. As an illustrative example demonstrating the feasibility of the prevalidation methodology, the Vis-spectrophotometric procedure for the determination of tannins with Folin-Ciocalteu's phenol reagent was selected from among a great number of analytical procedures. The favourable metrological characteristics of this analytical procedure, as prevalidation figures of merit, confirmed the prevalidation procedure as a valuable concept for the preliminary evaluation of the quality of analytical procedures.

  10. Procedures for the GMP-Compliant Production and Quality Control of [18F]PSMA-1007: A Next Generation Radiofluorinated Tracer for the Detection of Prostate Cancer

    PubMed Central

    Cardinale, Jens; Martin, René; Remde, Yvonne; Schäfer, Martin; Hienzsch, Antje; Hübner, Sandra; Zerges, Anna-Maria; Marx, Heike; Hesse, Ronny; Weber, Klaus; Smits, Rene; Hoepping, Alexander; Müller, Marco; Neels, Oliver C.; Kopka, Klaus

    2017-01-01

    Radiolabeled tracers targeting the prostate-specific membrane antigen (PSMA) have become important radiopharmaceuticals for the PET imaging of prostate cancer. In this connection, we recently developed the fluorine-18-labelled PSMA ligand [18F]PSMA-1007 as the next-generation radiofluorinated Glu-ureido PSMA inhibitor after [18F]DCFPyL and [18F]DCFBC. Since the radiosynthesis had so far suffered from rather poor yields, novel procedures for the automated radiosynthesis of [18F]PSMA-1007 have been developed. We herein report on both the two-step and the novel one-step procedures, which have been performed on different commonly used radiosynthesisers. Using the novel one-step procedure, [18F]PSMA-1007 was produced in good radiochemical yields ranging from 25 to 80% with synthesis times of less than 55 min. Furthermore, upscaling to product activities of up to 50 GBq per batch was successfully conducted. All batches passed quality control according to European Pharmacopoeia standards. We were therefore able to disclose a new, simple and, at the same time, high-yielding production pathway for the next-generation PSMA radioligand [18F]PSMA-1007. Indeed, it turned out that the radiosynthesis is as easily realised as the well-known [18F]FDG synthesis and is thus transferable to all currently available radiosynthesisers. Using the new procedures, the clinical daily routine can be sustainably supported in-house, even in larger hospitals, by a single production batch. PMID:28953234

  11. Investigations into Alternative Desorption Agents for Amidoxime-Based Polymeric Uranium Adsorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, Gary A.; Kuo, Li-Jung; Strivens, Jonathan E.

    2015-06-01

    Amidoxime-based polymeric braid adsorbents that can extract uranium (U) from seawater are being developed to provide a sustainable supply of fuel for nuclear reactors. A critical step in the development of the technology is to devise elution procedures that selectively remove U from the adsorbents and do so in a manner that allows the adsorbent material to be reused. This study investigates the use of high concentrations of bicarbonate along with targeted chelating agents as an alternative to the mild-acid elution procedures currently in use for selectively eluting uranium from amidoxime-based polymeric adsorbents.

  12. Segmentation of bone and soft tissue regions in digital radiographic images of extremities

    NASA Astrophysics Data System (ADS)

    Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.

    2001-07-01

    This paper presents an algorithm for the segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: the regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed, the purpose of which is to label each region as either bone or soft tissue. This binary classification goal is achieved by using a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to the strong exposure variations seen on the imaging plate. Also, the existence of regions large enough that exposure variations can be observed across them makes it necessary to use overlapping blocks during classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a 2nd-order surface to each tissue and re-evaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm is tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
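
    As a sketch of the first stage only, the following region grower admits a neighbouring pixel while it passes a single statistical test against the running region statistics; the paper's second test, the voting-based bone/soft-tissue classification and the surface-fit refinement are omitted, and the tolerance values are assumptions.

        import numpy as np
        from collections import deque

        def grow(img, seed, k=2.5, sd_floor=5.0):
            """Grow a region from 'seed' while 4-neighbours stay within
            k standard deviations of the running region mean."""
            h, w = img.shape
            mask = np.zeros((h, w), bool)
            mask[seed] = True
            vals = [float(img[seed])]
            queue = deque([seed])
            while queue:
                y, x = queue.popleft()
                mu = np.mean(vals)                 # running sums would avoid
                sd = max(np.std(vals), sd_floor)   # this quadratic recomputation;
                                                   # the floor keeps the test
                                                   # permissive early on
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                            and abs(float(img[ny, nx]) - mu) <= k * sd:
                        mask[ny, nx] = True
                        vals.append(float(img[ny, nx]))
                        queue.append((ny, nx))
            return mask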

  13. A LiDAR based analysis of hydraulic hazard mapping

    NASA Astrophysics Data System (ADS)

    Cazorzi, F.; De Luca, A.; Checchinato, A.; Segna, F.; Dalla Fontana, G.

    2012-04-01

    Mapping hydraulic hazard is a delicate procedure as it involves technical and socio-economic aspects. On the one hand no dangerous areas should be excluded; on the other hand it is important not to extend, beyond what is necessary, the surface subject to use limitations. The availability of a high-resolution topographic survey nowadays allows this task to be faced with innovative procedures, both in the planning (mapping) phase and in the map validation phase. The latter is the object of the present work. It should be stressed that the described procedure is proposed purely as a preliminary analysis based on topography only, and therefore does not intend in any way to replace more sophisticated analysis methods based on hydraulic modelling. The reference elevation model is a combination of the digital terrain model and the digital building model (DTM+DBM). The option of using the standard surface model (DSM) is not viable, as the DSM represents the vegetation canopy as a solid volume. This would have the consequence of unrealistically treating the vegetation as a geometric obstacle to water flow. In some cases the construction of the topographic model requires the identification and digitization of the principal breaklines, such as river banks, ditches and similar natural or artificial structures. The geometrical and topological procedure for the validation of the hydraulic hazard maps consists of two steps. In the first step the whole area is subdivided into fluvial segments, with length chosen as a reasonable trade-off between the need to keep the hydrographical unit as complete as possible and the need to separate sections of the river bed with significantly different morphology. Each of these segments is a single elongated polygon, whose shape can be quite complex, especially for meandering river sections, where the flow direction (i.e. the potential energy gradient associated with the talweg) is often inverted. In the second step the segments are analysed one by one. Each segment is split into many reaches, so that within any of them the slope of the piezometric line can be approximated as zero. As a consequence, the hydraulic profile (open channel flow) in every reach is assumed horizontal, both downslope and on the cross-section. Each reach can be seen as a polygon, delimited laterally by the hazard mapping boundaries and longitudinally by two successive cross-sections, usually orthogonal to the talweg line. Simulating a progressive increase of the river stage, with a horizontal piezometric line, allows the definition of the stage-area and stage-volume relationships. These relationships are obtained exclusively from the geometric information provided by the high-resolution elevation model. The maximum flooded area resulting from the simulation is finally compared to the potentially floodable area described by the hazard maps, to give a flooding index for every reach. Index values lower than 100% show that the mapped hazard area exceeds the maximum floodable area. Very low index values identify spots where there is a significant incongruity between the hazard map and the topography, and where a specific verification is probably needed. The procedure was successfully used for the validation of many hazard maps across Italy.
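
    The per-reach computation reduces to simple geometry once the reach's elevations are extracted. The sketch below, a simplification under assumed inputs (a flat array of DTM+DBM cell elevations for one reach and a uniform cell area), raises a horizontal water level over the ground and reads off the stage-area and stage-volume relationships from which the flooding index follows.

        import numpy as np

        def stage_curves(ground_elev, stages, cell_area=1.0):
            """ground_elev: elevations (m) of the cells inside one reach
            polygon; stages: water levels (m) to simulate."""
            ground = np.asarray(ground_elev, float)
            areas, volumes = [], []
            for s in stages:
                wet = ground < s                          # cells below the level
                areas.append(wet.sum() * cell_area)       # flooded area
                volumes.append(((s - ground)[wet]).sum() * cell_area)  # stored volume
            return np.array(areas), np.array(volumes)

        # flooding index of a reach (as a fraction): maximum simulated
        # flooded area divided by the mapped hazard area, e.g.
        # idx = areas.max() / mapped_hazard_area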

  14. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
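
    The second CTS step is essentially many single-SNP regressions of the estimated random slopes, which is where the semi-parallel trick pays off. The sketch below is a minimal reading of that idea, not the authors' implementation: all per-SNP slopes and t statistics are computed with vectorized arithmetic, and the step-1 LMM fit that produces y is assumed to have been done elsewhere.

        import numpy as np

        def fast_simple_regressions(G, y):
            """G: n x m genotype matrix; y: length-n vector of random
            slopes estimated in the step-1 reduced LMM (not shown here).
            Returns per-SNP OLS slopes and t statistics."""
            n = len(y)
            Gc = G - G.mean(axis=0)               # center each SNP column
            yc = y - y.mean()
            sxx = (Gc ** 2).sum(axis=0)           # per-SNP sum of squares
            beta = Gc.T @ yc / sxx                # all slopes in one product
            resid_ss = yc @ yc - beta ** 2 * sxx  # per-SNP residual sums
            se = np.sqrt(resid_ss / (n - 2) / sxx)
            return beta, beta / se

        rng = np.random.default_rng(1)
        G = rng.integers(0, 3, size=(500, 2000)).astype(float)   # toy genotypes
        y = rng.normal(size=500)                                 # toy slopes
        beta, tstat = fast_simple_regressions(G, y)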

  15. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. Under the assumption that the variational parameters are small compared to unity, the problem can then be solved more readily and with relatively little computational effort.
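
    For a simplified stand-in for the robot-sensor problem (paired 3-D points observed in both frames), the two-step idea can be sketched as follows: a closed-form SVD (Kabsch) estimate provides the nominal solution, and one linearized Gauss-Newton correction plays the role of the variational step with parameters small compared to unity. The pairing assumption and the single-iteration refinement are simplifications, not the paper's formulation.

        import numpy as np

        def kabsch(P, Q):                         # step 1: nominal Q ~ R P + t
            cp, cq = P.mean(0), Q.mean(0)
            U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:              # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cq - R @ cp

        def refine(P, Q, R, t):                   # step 2: small-parameter correction
            r = Q - (P @ R.T + t)                 # residuals under the nominal fit
            A = np.zeros((3 * len(P), 6))         # Jacobian wrt (dw, dt)
            for i, p in enumerate(P @ R.T):
                A[3*i:3*i+3, :3] = -np.array([[0, -p[2], p[1]],
                                              [p[2], 0, -p[0]],
                                              [-p[1], p[0], 0]])
                A[3*i:3*i+3, 3:] = np.eye(3)
            dx, *_ = np.linalg.lstsq(A, r.ravel(), rcond=None)
            dw, dt = dx[:3], dx[3:]
            dR = np.eye(3) + np.array([[0, -dw[2], dw[1]],   # first-order rotation
                                       [dw[2], 0, -dw[0]],
                                       [-dw[1], dw[0], 0]])
            return dR @ R, t + dt

        # usage: R0, t0 = kabsch(P, Q); R1, t1 = refine(P, Q, R0, t0)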

  16. Investigation of the dependence of joint contact forces on musculotendon parameters using a codified workflow for image-based modelling.

    PubMed

    Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia

    2018-05-17

    The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited as far as the estimation of muscle and joint contact forces is concerned. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, the muscles' architecture needs to be estimated. Four methods to estimate the muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. The reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated, and these forces were not sensitive to the method used to compute the muscle maximum isometric force. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  18. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    PubMed

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, which would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  19. Intermuscular pterygoid-temporal abscess following inferior alveolar nerve block anesthesia–A computer tomography based navigated surgical intervention: Case report and review

    PubMed Central

    Wallner, Jürgen; Reinbacher, Knut Ernst; Pau, Mauro; Feichtinger, Matthias

    2014-01-01

    Inferior alveolar nerve block (IANB) anesthesia is a common local anesthetic procedure. Although IANB anesthesia is known for its safety, complications can still occur. Today, both immediate and delayed disorders following IANB anesthesia, and their treatment, are well recognized. We present a case of a patient who developed a symptomatic abscess in the pterygoid region as a result of several inferior alveolar nerve injections. Clinical symptoms included diffuse pain, reduced mouth opening and hypomobility of the jaw, and were persistent under a first-step conservative treatment. Since image-based navigated interventions have gained in importance and are used for various procedures, a navigated surgical intervention was initiated as a second-step therapy. A precise, atraumatic surgical intervention was thus performed using an optical tracking system in a difficult anatomical region. The symptomatic abscess was treated by a computed tomography-based navigated surgical intervention at our department. Advantages and disadvantages of this treatment strategy are evaluated. PMID:24987612

  20. Intermuscular pterygoid-temporal abscess following inferior alveolar nerve block anesthesia-A computer tomography based navigated surgical intervention: Case report and review.

    PubMed

    Wallner, Jürgen; Reinbacher, Knut Ernst; Pau, Mauro; Feichtinger, Matthias

    2014-01-01

    Inferior alveolar nerve block (IANB) anesthesia is a common local anesthetic procedure. Although IANB anesthesia is known for its safety, complications can still occur. Today, both immediate and delayed disorders following IANB anesthesia, and their treatment, are well recognized. We present a case of a patient who developed a symptomatic abscess in the pterygoid region as a result of several inferior alveolar nerve injections. Clinical symptoms included diffuse pain, reduced mouth opening and hypomobility of the jaw, and were persistent under a first-step conservative treatment. Since image-based navigated interventions have gained in importance and are used for various procedures, a navigated surgical intervention was initiated as a second-step therapy. A precise, atraumatic surgical intervention was thus performed using an optical tracking system in a difficult anatomical region. The symptomatic abscess was treated by a computed tomography-based navigated surgical intervention at our department. Advantages and disadvantages of this treatment strategy are evaluated.

  1. Users guide for the Water Resources Division bibliographic retrieval and report generation system

    USGS Publications Warehouse

    Tamberg, Nora

    1983-01-01

    The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files, and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: Authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a 'Glossary of Selected Terms'; and a complete 'Retrieval Session' with step-by-step outlines. (USGS)

  2. Characterization of methyltrimethoxysilane sol-gel polymerization and the resulting aerogels

    NASA Astrophysics Data System (ADS)

    Dong, Hanjiang

    Methyl-functionalized porous silica is of considerable interest as a low-dielectric-constant film for semiconductor devices. The structural development of these materials appears to affect their gelation behavior and to impact their mechanical properties and shrinkage during processing. 29Si solution NMR was used to follow the structural evolution of MTMS (methyltrimethoxysilane) polymerization to gelation or precipitation, and thus to better understand the species that affect these properties and gelation behaviors. The effects of pH, water concentration, type of solvent, and synthesis procedure (single-step acid catalysis and two-step acid/base catalysis) on MTMS polymerization are discussed. The reactivity of silicon species with different connectivity and the extent of cyclization were found to depend appreciably on the pH value of the sol. A kinetic model is presented that treats the reactivity of the silicon species involved in condensation separately, based on the inductive and steric effects of these species. Extensive cyclization in the presence of acid, attributed here for the first time to steric effects among the numerous reaction pathways, prevents MTMS gelation, whereas gels were obtained from the two-step method with nearly random condensation. The experimental degree of condensation (DC) at the gel point using the two-step procedure was determined to be 0.86, which is considerably higher than that predicted by the currently accepted theories. Both chemical and physical origins of this high value are suggested. Aerogels dried by supercritical CO2 extraction were characterized by FTIR, 13C and 29Si solid-state NMR, and nitrogen sorption. The existence of three residual groups (Si-OH, Si-OCH3, and Si-OC2H5) was confirmed, but their concentrations are very low compared to silica aerogels. The low concentrations of the residual groups, along with the presence of Si-CH3, make MTMS aerogels permanently hydrophobic. To enhance applicability, MTMS aerogels were successfully prepared that exhibited shrinkage of less than 10% after supercritical drying, showing that the rigidity of the gel network is not, as suggested in the literature, the sole cause of the large shrinkage reported for many hybrid aerogels. An important finding of this work is that MTMS aerogels can be prepared without tedious solvent exchange and surface modification if the molar ratio of water/MTMS is increased to 8, substantially reducing the cost of aerogel production. This result was attributed to the full condensation of MTMS and the low concentrations of ring species.

  3. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  4. Room temperature rubbing for few-layer two-dimensional thin flakes directly on flexible polymer substrates

    PubMed Central

    Yu, Yan; Jiang, Shenglin; Zhou, Wenli; Miao, Xiangshui; Zeng, Yike; Zhang, Guangzu; Liu, Sisi

    2013-01-01

    The functional layers of few-layer two-dimensional (2-D) thin flakes on flexible polymers for stretchable applications have attracted much interest. However, most fabrication methods are “indirect” processes that require transfer steps. Moreover, previously reported “transfer-free” methods are only suitable for graphene and not for other few-layer 2-D thin flakes. Here, a friction-based, room-temperature rubbing method is proposed for fabricating different types of few-layer 2-D thin flakes (graphene, hexagonal boron nitride (h-BN), molybdenum disulphide (MoS2), and tungsten disulphide (WS2)) on flexible polymer substrates. Commercial 2-D raw materials (graphite, h-BN, MoS2, and WS2) that contain thousands of atomic layers were used. After several minutes of rubbing at room temperature, the different types of few-layer 2-D thin flakes were fabricated directly on the flexible polymer substrates without any transfer step. These few-layer 2-D thin flakes adhere strongly to the flexible polymer substrates, an adhesion that is beneficial for future applications. PMID:24045289

  5. A two-step framework for reconstructing remotely sensed land surface temperatures contaminated by cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Chao; Long, Di; Shen, Huanfeng; Wu, Penghai; Cui, Yaokui; Hong, Yang

    2018-07-01

    Land surface temperature (LST) is one of the most important parameters in land surface processes. Although satellite-derived LST can provide valuable information, its value is often limited by cloud contamination. In this paper, a two-step satellite-derived LST reconstruction framework is proposed. First, a multi-temporal reconstruction algorithm is introduced to recover invalid LST values using multiple LST images, with reference to the corresponding remotely sensed vegetation index; all cloud-contaminated areas are thereby temporally filled with hypothetical clear-sky LST values. Second, a procedure based on the surface energy balance equation is used to correct the filled values: with shortwave irradiation data, the clear-sky LST is corrected to the real LST under cloudy conditions. A series of experiments has been performed to demonstrate the effectiveness of the developed approach. Quantitative evaluation indicates that the proposed method can recover LST over different surface types with mean errors of 3-6 K. The experiments also indicate that the time interval between the multi-temporal LST images has a greater impact on the results than the size of the contaminated area.
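
    On toy arrays, the two steps might be sketched as below; this is a loose simplification, not the paper's algorithm. Cloudy pixels are first filled from a temporally close clear image shifted by the mean offset over commonly clear pixels (standing in for the vegetation-index-guided multi-temporal step), and the filled values are then nudged by a shortwave-irradiation term whose coefficient is a placeholder for the energy-balance correction.

        import numpy as np

        def reconstruct(lst, lst_ref, cloud, sw_anom, k=0.02):
            """lst, lst_ref: 2-D LST arrays (K); cloud: bool mask of invalid
            pixels in lst; sw_anom: shortwave irradiation anomaly under the
            cloud (W m-2); k: placeholder correction coefficient."""
            clear_both = ~cloud & np.isfinite(lst_ref)
            offset = np.nanmean(lst[clear_both] - lst_ref[clear_both])
            filled = np.where(cloud, lst_ref + offset, lst)       # step 1: clear-sky fill
            return np.where(cloud, filled + k * sw_anom, filled)  # step 2: correction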

  6. Management of fibromyalgia syndrome – an interdisciplinary evidence-based guideline

    PubMed Central

    Häuser, Winfried; Arnold, Bernhard; Eich, Wolfgang; Felde, Eva; Flügge, Christl; Henningsen, Peter; Herrmann, Markus; Köllner, Volker; Kühn, Edeltraud; Nutzinger, Detlev; Offenbächer, Martin; Schiltenwolf, Marcus; Sommer, Claudia; Thieme, Kati; Kopp, Ina

    2008-01-01

    A prevalence of fibromyalgia syndrome (FMS) of 1–2% in the general population, the associated high disease-related costs and the conflicting data on treatment effectiveness have led to the development of evidence-based guidelines designed to give patients and physicians guidance in selecting among the alternatives. Until now, no evidence-based interdisciplinary guideline (including patients) for the management of FMS was available in Europe. Therefore a guideline for the management of fibromyalgia syndrome (FMS) was developed by 13 German medical and psychological associations and two patient self-help organisations. The task was coordinated by two German scientific umbrella organisations, the Association of the Scientific Medical Societies in Germany AWMF and the German Interdisciplinary Association of Pain Therapy DIVS. A systematic search of the literature, including all controlled studies, systematic reviews and meta-analyses of pharmacological and non-pharmacological treatments of FMS, was performed in the Cochrane Library (1993–12/2006), Medline (1980–12/2006), PsychInfo (1966–12/2006) and Scopus (1980–12/2006). Levels of evidence were assigned according to the classification system of the Oxford Centre for Evidence-Based Medicine. Grading of the strength of recommendations was done according to the German program for disease management guidelines. Standardized procedures were used to reach a consensus on recommendations. The guideline was reviewed and finally approved by the boards of the societies involved and published online by the AWMF on April 25, 2008: http://www.uni-duesseldorf.de/AWMF/ll/041-004.htm. A short version of the guideline for patients is available as well: http://www.uni-duesseldorf.de/AWMF/ll/041-004p.htm. The following procedures in the management of FMS were strongly recommended: information on diagnosis and therapeutic options and patient-centered communication, aerobic exercise, cognitive and operant behavioural therapy, multicomponent treatment and amitriptyline. Based on expert opinion, a stepwise FMS management was proposed. Step 1 comprises confirmation of the diagnosis and patient education, plus treatment of physical or mental comorbidities, aerobic exercise, cognitive behavioural therapy or amitriptyline. Step 2 includes multicomponent treatment. Step 3 comprises either no further treatment or self-management (aerobic exercise, stress management) and/or booster multicomponent therapy and/or pharmacological therapy (duloxetine, fluoxetine, paroxetine, pregabalin or tramadol/acetaminophen) and/or psychotherapy (hypnotherapy or written emotional disclosure) and/or physical therapy (balneotherapy or whole-body heat therapy) and/or complementary therapies (homoeopathy or vegetarian diet). The choice of treatment options should be based on informed decision-making and respect for the patients’ preferences. PMID:19675740

  7. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    PubMed

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improved patient outcomes and lower costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised; currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia, with the ewe in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe was compared side by side with the corresponding step performed on a woman, and all surgical steps proved closely similar. The main limitations of this model are cost ($500/procedure), logistics (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training of vaginal hysterectomy.

  8. A near-optimum procedure for selecting stations in a streamgaging network

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    2005-01-01

    Two questions are fundamental to Federal government goals for the network of streamgages operated by the U.S. Geological Survey: (1) how well does the present network of streamgaging stations meet defined Federal goals, and (2) what is the optimum set of stations to add or reactivate to support the remaining goals? The solution involves an incremental-stepping procedure based on Basic Feasible Incremental Solutions (BFISs), where each BFIS satisfies at least one Federal streamgaging goal. A set of minimum Federal goals for streamgaging is defined to include water measurements for legal compacts and decrees, flooding, water budgets, regionalization of streamflow characteristics, and water quality. Fully satisfying all these goals under the assumptions outlined in this paper would require adding 887 new streamgaging stations to the U.S. Geological Survey network and reactivating an additional 857 stations that are currently inactive.
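
    The BFIS-based incremental-stepping procedure itself is not detailed in the abstract; as a generic intuition for how stations might be chosen to cover remaining goals, the greedy set-cover sketch below (not the USGS algorithm) picks, at each step, the station that satisfies the most unmet goals.

    ```python
    def greedy_station_selection(candidates, goals):
        """Greedy sketch: pick stations by marginal coverage of unmet goals.

        candidates: dict mapping station id -> set of goal ids it satisfies
        goals:      set of goal ids not met by the currently active network
        """
        remaining, chosen = set(goals), []
        while remaining:
            best = max((s for s in candidates if s not in chosen),
                       key=lambda s: len(candidates[s] & remaining),
                       default=None)
            if best is None or not candidates[best] & remaining:
                break  # no remaining candidate satisfies any unmet goal
            chosen.append(best)
            remaining -= candidates[best]
        return chosen

    # Toy example: three candidate stations covering five goals
    print(greedy_station_selection(
        {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}}, {1, 2, 3, 4, 5}))
    # -> ['B', 'A', 'C'] (order reflects marginal goal coverage)
    ```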

  9. Mapping sea ice leads with a coupled numeric/symbolic system

    NASA Technical Reports Server (NTRS)

    Key, J.; Schweiger, A. J.; Maslanik, J. A.

    1990-01-01

    A method is presented which facilitates the detection and delineation of leads in single-channel Landsat data by coupling numeric and symbolic procedures. The procedure consists of three steps: (1) using a dynamic threshold method, the image is mapped to a lead/no-lead binary image; (2) the likelihood that fragments are real leads is examined with a set of numeric rules; and (3) pairs of objects are examined geometrically and merged where possible. Processing ends when all fragments have been merged and their statistical characteristics determined, leaving a map of valid lead objects that summarizes useful physical information about the lead complexes. Direct implementation of domain knowledge and rapid prototyping are two benefits of the rule-based system. The approach is found to be most successfully applied to mid- and high-level processing, and the system can retrieve statistics about sea-ice leads as well as detect the leads.
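
    Step (1), the dynamic threshold, can be pictured as a locally adaptive threshold against the neighborhood mean. The sketch below is a generic illustration with an assumed window size and offset, not the paper's implementation; leads are taken to be darker than the surrounding ice.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, label

    def lead_binary_map(image, window=31, offset=-0.05):
        """Step (1) sketch: locally adaptive threshold to a lead/no-lead map.

        Leads appear darker than surrounding ice in visible-band imagery, so a
        pixel is flagged when it falls below the local mean plus an assumed
        offset. Window size and offset are illustrative only.
        """
        local_mean = uniform_filter(image.astype(float), size=window)
        binary = image < (local_mean + offset)
        # Label connected fragments for the rule-based screening of step (2)
        fragments, n_fragments = label(binary)
        return binary, fragments, n_fragments
    ```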

  10. Mutual information, neural networks and the renormalization group

    NASA Astrophysics Data System (ADS)

    Koch-Janusz, Maciej; Ringel, Zohar

    2018-06-01

    Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.

  11. Development and Application of a Two-Tier Diagnostic Test for High School Students' Understanding of Flowering Plant Growth and Development

    ERIC Educational Resources Information Center

    Lin, Sheau-Wen

    2004-01-01

    This study involved the development and application of a two-tier diagnostic test measuring students' understanding of flowering plant growth and development. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development.…

  12. Development and Application of a Two-Tier Multiple-Choice Diagnostic Test for High School Students' Understanding of Cell Division and Reproduction

    ERIC Educational Resources Information Center

    Sesli, Ertugrul; Kara, Yilmaz

    2012-01-01

    This study involved the development and application of a two-tier diagnostic test for measuring students' understanding of cell division and reproduction. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development.…

  13. Urban Groundwater Mapping - Bucharest City Area Case Study

    NASA Astrophysics Data System (ADS)

    Gaitanaru, Dragos; Radu Gogu, Constantin; Bica, Ioan; Anghel, Leonard; Amine Boukhemacha, Mohamed; Ionita, Angela

    2013-04-01

    Urban Groundwater Mapping (UGM) is a generic term for a collection of procedures and techniques used to create targeted cartographic representations of groundwater-related aspects of urban areas. The urban environment alters the physical and chemical characteristics of the underlying aquifers, and the scale of this pressure is controlled by urban development in time and space. A set of thematic maps is needed to obtain a clear picture of the spatial and temporal distribution of the interactions between groundwater and urban structures. The present study describes the methodological approach used to obtain a reliable cartographic product for the Bucharest City area. The first step was to identify the groundwater-related problems and aspects (changes in the groundwater table, infiltration and seepage from and to the city sewer network, contamination spreading to all three aquifer systems located in Quaternary sedimentary formations, the impact of dewatering for large underground structures, and management and policy drawbacks). The second step was data collection and validation. In urban areas there is a broad spectrum of groundwater-related data providers; because the data are produced and distributed by different types of organizations (national agencies, private companies, the municipal water regulator, etc.), validation and cross-checking are mandatory. The data are stored and managed in a geospatial database whose design follows an object-oriented paradigm and is easily extensible. The third step consists of a set of procedures based on a multi-criteria assessment that creates the specific setup for the thematic maps. The assessment is based on the following criteria: (1) scale effect, (2) time, (3) vertical distribution, and (4) type of groundwater-related problem. The final step is the cartographic representation, in which the urban groundwater maps are created. All the methodological steps are mirrored by programmed procedures developed in a groundwater management platform for urban areas; the core of these procedures is a well-defined set of hydrogeological geospatial queries. The cartographic products (urban groundwater maps) can be used by different types of users: civil engineers, urban planners, and scientists, as well as decision- and policy-makers.

  14. Developing Cognitive Task Analysis-based Educational Videos for Basic Surgical Skills in Plastic Surgery.

    PubMed

    Yeung, Celine; McMillan, Catherine; Saun, Tomas J; Sun, Kimberly; D'hondt, Veerle; von Schroeder, Herbert P; Martou, Glykeria; Lee, Matthew; Liao, Elizabeth; Binhammer, Paul

    To describe the development of cognitive task analysis (CTA)-based multimedia educational videos for surgical trainees in plastic surgery. A needs assessment survey was used to identify 5 plastic surgery skills on which to focus the educational videos. Three plastic surgeons were video-recorded performing each skill while describing the procedure, and were interviewed with probing questions. Three medical student reviewers coded transcripts, categorized each step as an "action," "decision," or "assessment," and created a cognitive demands table (CDT) for each skill. The CDTs were combined into 1 table that was reviewed by the surgeons performing each skill to ensure accuracy. The final CDTs were compared against each surgeon's original transcripts. The total number of steps identified, the percentage of steps shared, and the average percentage of steps omitted were calculated. Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada, an urban tertiary care teaching center. Canadian junior plastic surgery residents (n = 78) were sent a needs assessment survey. Four plastic surgeons and 1 orthopedic surgeon performed the skills. Twenty-eight residents responded to the survey (36%). Subcuticular suturing, horizontal and vertical mattress suturing, hand splinting, digital nerve block, and excisional biopsy were the skills most often ranked by residents (>80%) as ones that students should be able to perform before entering residency. The number of steps identified through CTA ranged from 12 to 29. The percentage of steps shared by all 3 surgeons for each skill ranged from 30% to 48%, while the average percentage of steps omitted by each surgeon ranged from 27% to 40%. Instructional videos for basic surgical skills may be generated using CTA to help experts provide comprehensive descriptions of a procedure. A CTA-based educational tool may give trainees access to a broader, objective body of knowledge, allowing them to learn decision-making processes before entering the operating room. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Technology of welding aluminum alloys-II

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Step-by-step procedures were developed for high-integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step, with tables and graphs to specify materials and dimensions. Throughout the work sequence, the processing procedure designates manufacturing verification points and inspection points.

  16. Cleaning and activation of beryllium-copper electron multiplier dynodes.

    NASA Technical Reports Server (NTRS)

    Pongratz, M. B.

    1972-01-01

    Description of a cleaning and activation procedure followed in preparing beryllium-copper dynodes for electron multipliers used in sounding-rocket experiments to detect auroral electrons. The initial degreasing step involved a 5-min bath in trichloroethylene in an ultrasonic cleaner. This was followed by an ultrasonic rinse in methanol and by a two-step acid pickling treatment to remove the oxides. Additional rinsing in water and methanol was followed by activation in a stainless-steel RF induction oven.

  17. A Guide for Developing Standard Operating Job Procedures for the Pump Station Process Wastewater Treatment Facility. SOJP No. 3.

    ERIC Educational Resources Information Center

    Perley, Gordon F.

    This is a guide for standard operating job procedures for the pump station process of wastewater treatment plants. Step-by-step instructions are given for pre-start up inspection, start-up procedures, continuous routine operation procedures, and shut-down procedures. A general description of the equipment used in the process is given. Two…

  18. Electrostatic design of protein-protein association rates.

    PubMed

    Schreiber, Gideon; Shaul, Yossi; Gottschalk, Kay E

    2006-01-01

    De novo design and redesign of proteins and protein complexes have made promising progress in recent years. Here, we give an overview of how to use available computer-based tools to design proteins to bind faster and tighter to their protein-complex partner through electrostatic optimization between the two proteins. Electrostatic optimization is possible because of the simple relation between the Debye-Hückel energy of interaction between a pair of proteins and their rate of association. This allows rapid, structure-based calculations of the electrostatic attraction between the two proteins in the complex. Using these principles, we developed two computer programs that predict the change in k(on), and hence the affinity, upon introducing charged mutations. The two programs have a web interface that is available at www.weizmann.ac.il/home/bcges/PARE.html and http://bip.weizmann.ac.il/hypare. When mutations leading to charge optimization are introduced outside the physical binding site, the rate of dissociation is unchanged and therefore the change in k(on) parallels that of the affinity. This design method was evaluated on a number of different protein complexes, resulting in binding rates and affinities hundreds of fold faster and tighter than wild type. In this chapter, we demonstrate the procedure and go step by step through the methodology of using these programs for protein-association design. Finally, a way to easily implement the principle of electrostatic design for any protein complex of choice is shown.
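
    The rate relation referred to above is often written in this line of work as ln k(on) = ln k(on,0) - (ΔU/RT)·1/(1+κa), with ΔU the electrostatic interaction energy, κ the inverse Debye length, and a an interaction radius. The sketch below assumes that form and illustrative values only; it is not the PARE/HyPare implementation.

    ```python
    import math

    def predicted_kon(kon_basal, dU_el, kappa_a, RT=0.593):
        """Hedged Debye-Hueckel-style association-rate estimate.

        kon_basal: basal rate without electrostatic steering
        dU_el:     electrostatic interaction energy, kcal/mol (negative = attractive)
        kappa_a:   product of inverse Debye length and interaction radius
        RT:        ~0.593 kcal/mol at 298 K
        """
        return kon_basal * math.exp(-(dU_el / RT) / (1.0 + kappa_a))

    # A mutation adding -1.5 kcal/mol of attraction at kappa*a = 1
    print(predicted_kon(1.0, -1.5, 1.0))  # ~3.5-fold faster association
    ```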

  19. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment at a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing a step-by-step update of the Jacobian matrix. The theoretical basis of the work is followed by an application of the developed method to a two-exciter, two-axis system and by performance comparisons with standard methods.
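
    A generic flavor of such a gradient-based update over the complex space, for one frequency line of a MIMO random test, might look like the Wirtinger-style step below. The error metric, step size, and normalization are assumptions for illustration, not the authors' algorithm.

    ```python
    import numpy as np

    def drive_update(H, S_target, D, mu=0.5):
        """One Wirtinger-gradient correction of the drive CSD matrix (sketch).

        H:        (m, n) FRF matrix at one frequency line (m controls, n drives)
        S_target: (m, m) target spectral density matrix at that line
        D:        (n, n) current drive spectral density matrix
        Minimizes ||S_target - H D H^H||_F^2 by stepping D along the complex
        (Wirtinger) gradient, normalized by a crude Lipschitz estimate.
        """
        S = H @ D @ H.conj().T                  # achieved response CSD
        E = S_target - S                        # control error at this line
        grad = H.conj().T @ E @ H               # descent direction in D
        D_new = D + mu * grad / max(np.linalg.norm(H, 2) ** 4, 1e-12)
        return 0.5 * (D_new + D_new.conj().T)   # keep D Hermitian
    ```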

  20. Two-step protocol for isolation and culture of cardiospheres.

    PubMed

    Chen, Lijuan; Pan, Yaohua; Zhang, Lan; Wang, Yingjie; Weintraub, Neal; Tang, Yaoliang

    2013-01-01

    Cardiac progenitor cells (CPCs) are a unique pool of progenitor cells residing in the heart that play an important role in cardiac homeostasis and physiological cardiovascular cell turnover during acute myocardial infarction (MI). Transplanting CPCs into the heart has shown promise in two recent clinical trials of cardiac repair (SCIPIO and CADUCEUS). CPCs were originally isolated directly from enzymatically digested hearts followed by cell sorting using stem cell markers. However, long exposure to enzymatic digestion can affect the integrity of stem cell markers on the cell surface and also compromise stem cell function. Here, we describe a two-step procedure in which a large number of intact cardiac progenitor cells can be purified from a small amount of heart tissue.

  1. Local Education Agency Planning Analyst's Procedures. A Vocational Education Planning System for Local School Districts. Volume III.

    ERIC Educational Resources Information Center

    Goldman, Charles I.

    The manual is part of a series to assist in planning procedures for local and State vocational agencies. It details steps required to process a local education agency's data after the data have been coded onto keypunch forms. Program, course, and overhead data are input into a computer data base and error checks are performed. A computer model is…

  2. System Design Considerations for Microcomputer Based Instructional Laboratories.

    DTIC Science & Technology

    1986-04-01

    when wrong procedures are tried as well as correct procedures. This is sometimes called "free play" simulation. While this form of simulation... steps are performed correctly. Unlike "free play" system simulations, the student must perform the operation in an approved manner. ... Supports free play exercises; typically does not tutor a student; used for skill development and performance measurement. Task Simulation: Computer

  3. Students' perceived experience of university admission based on tests and interviews.

    PubMed

    Röding, K; Nordenram, G

    2005-11-01

    The aim of the study was to generate an impression, from the perspective of graduating dental students, of the individualised admissions procedures, which they had undergone 5 years before. The subjects comprised 10 randomly selected students, five male and five female, from two different admission rounds. Qualitative research was used and data were collected by means of semi-structured interviews. The results show that even 5 years later, the students remember clearly the different steps in the selection procedure and they found the procedure relevant. In particular, the admission interviews made a lasting impression. The students consider that being interviewed by one admissions committee member at a time reduces the applicant's apprehension and allows a more personal interview. Several believe that the admissions procedure influences academic achievement or improves self-confidence: implicit in their selection by a committee of experienced professionals is affirmation that they have the potential to become good dentists. The students therefore feel encouraged to aspire to higher achievement. All students believe that motivation is an important non-cognitive attribute for success and that students selected through this mode are not only highly motivated but also well informed, with realistic expectations of the undergraduate programme and their future professional career.

  4. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    PubMed Central

    Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas

    2009-01-01

    Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as the specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influences between them, makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: for first attempts the settings-free Tribes algorithm yields valuable results; particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
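
    As a toy illustration of step (3), the snippet below calibrates the two parameters of a single Michaelis-Menten rate law against synthetic noisy measurements using differential evolution, one of the algorithm families the study found effective. It is a minimal sketch, not the valine/leucine network model.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Synthetic "measurements" of a single Michaelis-Menten reaction rate
    S = np.linspace(0.1, 10.0, 20)                 # substrate concentrations
    true_vmax, true_km = 2.0, 1.5
    rng = np.random.default_rng(0)
    v_obs = true_vmax * S / (true_km + S) + rng.normal(0, 0.05, S.size)

    def sse(params):
        """Sum of squared errors between model and measured rates."""
        vmax, km = params
        return np.sum((v_obs - vmax * S / (km + S)) ** 2)

    result = differential_evolution(sse, bounds=[(0.01, 10.0), (0.01, 10.0)],
                                    seed=1)
    print("estimated (vmax, km):", result.x)  # close to (2.0, 1.5)
    ```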

  5. A Finite Element Procedure for Calculating Fluid-Structure Interaction Using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen; Gartmeier, Otto

    1990-01-01

    This report is intended to serve two purposes. The first is to present a survey of the theoretical background of the dynamic interaction between an inviscid, compressible fluid and an elastic structure. Section one presents a short survey of the application of the finite element method (FEM) to the area of fluid-structure interaction (FSI). Section two describes the mathematical foundation of the structure and fluid, with special emphasis on the fluid. The main steps in establishing the finite element (FE) equations for the fluid-structure coupling are discussed in section three. The second purpose is to demonstrate the application of MSC/NASTRAN to the solution of FSI problems. Some specific topics, such as the fluid-structure analogy, acoustic absorption, and acoustic contribution analysis, are described in section four. Section five deals with the organization of the acoustic procedure flowchart. Section six includes the most important information that a user needs for applying the acoustic procedure to practical FSI problems. Beginning with some rules concerning the FE modeling of the coupled system, the NASTRAN USER DECKs for the different steps are described. The goal of section seven is to demonstrate the use of the acoustic procedure with some examples. This demonstration includes an analytic verification of selected FE results. The analytical description considers only some aspects of FSI and is not intended to be mathematically complete. Finally, section eight presents an application of the acoustic procedure to vehicle interior acoustic analysis with selected results.

  6. Quantitative phase imaging using four interferograms with special phase shifts by dual-wavelength in-line phase-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoqing; Wang, Yawei; Ji, Ying; Xu, Yuanyuan; Xie, Ming; Han, Hao

    2018-05-01

    A new approach to quantitative phase imaging using four interferograms with special phase shifts in dual-wavelength in-line phase-shifting interferometry is presented. In this method, positive and negative 2π phase shifts are employed to easily separate the incoherent addition of two single-wavelength interferograms by combining the phase-shifting technique with a subtraction procedure; the quantitative phase at one of the two wavelengths can then be obtained from two intensities without the corresponding dc terms by exploiting the properties of the trigonometric functions. The quantitative phase at the other wavelength can be retrieved from two dc-term-suppressed intensities obtained by employing the two-step phase-shifting technique or a filtering technique in the frequency domain. The proposed method is illustrated with theory, and its effectiveness is demonstrated by simulation experiments on a spherical cap and a HeLa cell, respectively.
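
    Once the dc terms have been removed, phase recovery from two quadrature intensities reduces to an arctangent, as in the minimal sketch below. The 0 and π/2 shifts and the toy phase profile are assumptions for illustration, not the paper's exact reconstruction chain.

    ```python
    import numpy as np

    def phase_from_quadrature(I0, I90):
        """Hedged two-step phase-shifting sketch.

        Assumes two dc-suppressed interference terms proportional to cos(phi)
        and sin(phi), e.g. from 0 and pi/2 phase shifts after the dc terms have
        been removed by subtraction. Returns the wrapped phase; unwrapping is
        left to standard routines.
        """
        return np.arctan2(I90, I0)

    # Simulated check: a spherical-cap-like phase profile (kept below pi)
    x = np.linspace(-1, 1, 256)
    phi = 2.0 * np.sqrt(np.clip(1 - x**2, 0, None))
    I0, I90 = np.cos(phi), np.sin(phi)
    assert np.allclose(phase_from_quadrature(I0, I90), phi, atol=1e-9)
    ```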

  7. Comparison of the DNA extraction methods for polymerase chain reaction amplification from formalin-fixed and paraffin-embedded tissues.

    PubMed

    Sato, Y; Sugie, R; Tsuchiya, B; Kameya, T; Natori, M; Mukai, K

    2001-12-01

    To obtain an adequate quality and quantity of DNA from formalin-fixed and paraffin-embedded tissue, six different DNA extraction methods were compared. Four methods used deparaffinization by xylene followed by proteinase K digestion and phenol-chloroform extraction. The temperature of the different steps was changed to obtain higher yields and improved quality of extracted DNA. The remaining two methods used microwave heating for deparaffinization. The best DNA extraction method consisted of deparaffinization by microwave irradiation, protein digestion with proteinase K at 48 degrees C overnight, and no further purification steps. By this method, the highest DNA yield was obtained and the amplification of a 989-base pair beta-globin gene fragment was achieved. Furthermore, DNA extracted by means of this procedure from five gastric carcinomas was successfully used for single strand conformation polymorphism and direct sequencing assays of the beta-catenin gene. Because the microwave-based DNA extraction method presented here is simple, has a lower contamination risk, and results in a higher yield of DNA compared with the ordinary organic chemical reagent-based extraction method, it is considered applicable to various clinical and basic fields.

  8. Improved Cryopreservation of Human Umbilical Vein Endothelial Cells: A Systematic Approach

    NASA Astrophysics Data System (ADS)

    Sultani, A. Billal; Marquez-Curtis, Leah A.; Elliott, Janet A. W.; McGann, Locksley E.

    2016-10-01

    Cryopreservation of human umbilical vein endothelial cells (HUVECs) facilitated their commercial availability for use in vascular biology, tissue engineering and drug delivery research; however, the key variables in HUVEC cryopreservation have not been comprehensively studied. HUVECs are typically cryopreserved by cooling at 1 °C/min in the presence of 10% dimethyl sulfoxide (DMSO). We applied interrupted slow cooling (graded freezing) and interrupted rapid cooling with a hold time (two-step freezing) to identify where in the cooling process cryoinjury to HUVECs occurs. We found that linear cooling at 1 °C/min resulted in higher membrane integrities than linear cooling at 0.2 °C/min or nonlinear two-step freezing. DMSO addition procedures and compositions were also investigated. By combining hydroxyethyl starch with DMSO, HUVEC viability after cryopreservation was improved compared to measured viabilities of commercially available cryopreserved HUVECs and viabilities for HUVEC cryopreservation studies reported in the literature. Furthermore, HUVECs cryopreserved using our improved procedure showed high tube forming capability in a post-thaw angiogenesis assay, a standard indicator of endothelial cell function. As well as presenting superior cryopreservation procedures for HUVECs, the methods developed here can serve as a model to optimize the cryopreservation of other cells.

  9. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of the potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and, subsequently, the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by continuous rainfall-runoff simulation in combination with a coupled weather generator and a temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach: a synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss-probability relation for each community in the study area. The loss-probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or the losses associated with a certain probability of occurrence can be estimated for the entire study area.
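
    The k-Nearest-Neighbor disaggregation step can be sketched as follows: find the k observed days with the most similar daily total, sample one, and rescale its diurnal pattern to the synthetic daily amount. The similarity metric and k are assumptions; the study's scheme is multivariate and multisite, which this toy ignores.

    ```python
    import numpy as np

    def knn_disaggregate(daily_value, obs_daily, obs_hourly, k=3, rng=None):
        """k-NN disaggregation sketch: borrow a similar day's diurnal pattern.

        daily_value: synthetic daily total to disaggregate
        obs_daily:   (n,) observed daily totals
        obs_hourly:  (n, 24) corresponding observed hourly values
        One of the k most similar observed days is sampled at random and its
        pattern rescaled so the 24 hourly values sum to the synthetic total.
        """
        rng = rng or np.random.default_rng()
        nearest = np.argsort(np.abs(obs_daily - daily_value))[:k]
        pattern = obs_hourly[rng.choice(nearest)]
        total = pattern.sum()
        return np.zeros(24) if total <= 0 else daily_value * pattern / total
    ```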

  10. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a microprogrammed control unit. The compaction procedure includes two basic steps. The first step determines the compatibility classes, and the second step selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, and then define an 'energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.

  11. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  12. Transparent ZnO-based ohmic contact to p-GaN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaminska, E.; Piotrowska, A.; Golaszewska, K.

    2002-04-09

    Highly conductive ZnO films were fabricated on p-GaN in a two-step process. First, zinc was thermally evaporated on p-GaN. Next, the zinc film was oxidized in flowing oxygen. To increase the conductivity of the ZnO, nitrogen was introduced into the zinc during its deposition. The above procedure proved successful in fabricating ZnO with a resistivity of ≈1 × 10^-3 Ω cm and resulted in ohmic contacts to low-doped p-GaN with a resistivity of ≈1 × 10^-2 Ω cm^2 and a light transmittance of ≈75% in the wavelength range of 400-700 nm.

  13. An improved method to unravel phosphoacceptors in Ser/Thr protein kinase-phosphorylated substrates.

    PubMed

    Molle, Virginie; Leiba, Jade; Zanella-Cléon, Isabelle; Becchi, Michel; Kremer, Laurent

    2010-11-01

    Identification of the phosphorylated residues of bacterial Ser/Thr protein kinase (STPK) substrates still represents a challenging task. Herein, we present a new strategy allowing the rapid determination of phosphoacceptors in kinase substrates, essentially based on the dual expression of the kinase with its substrate in the surrogate host E. coli, followed by MS analysis in a single-step procedure. The performance of this strategy is illustrated using two distinct proteins from Mycobacterium tuberculosis as model substrates, the GroEL2 and HspX chaperones. A comparative analysis with a standard method that includes mass spectrometry analysis of in vitro phosphorylated substrates is also addressed.

  14. Analytical solutions for systems of partial differential-algebraic equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2014-01-01

    This work presents the application of the power series method (PSM) to find solutions of partial differential-algebraic equations (PDAEs). Two systems, of index one and index three, are solved to show that PSM can provide analytical solutions of PDAEs in convergent series form. What is more, we present the post-treatment of the power series solutions with the Laplace-Padé (LP) resummation method as a useful strategy to find exact solutions. The main advantage of the proposed methodology is that the procedure is based on a few straightforward steps and does not generate secular terms or depend on a perturbation parameter.

  15. Advanced asymmetric synthesis of (1R,2S)-1-amino-2-vinylcyclopropanecarboxylic acid by alkylation/cyclization of newly designed axially chiral Ni(II) complex of glycine Schiff base.

    PubMed

    Kawashima, Aki; Shu, Shuangjie; Takeda, Ryosuke; Kawamura, Akie; Sato, Tatsunori; Moriwaki, Hiroki; Wang, Jiang; Izawa, Kunisuke; Aceña, José Luis; Soloshonok, Vadim A; Liu, Hong

    2016-04-01

    Asymmetric synthesis of (1R,2S)-1-amino-2-vinylcyclopropanecarboxylic acid (vinyl-ACCA) is in extremely high demand due to the pharmaceutical importance of this tailor-made, sterically constrained α-amino acid. Here we report the development of an advanced procedure for preparation of the target amino acid via two-step SN2 and SN2' alkylation of a novel axially chiral nucleophilic glycine equivalent. Excellent yields and diastereoselectivity, coupled with reliable and easy scalability, render this method of immediate use for the practical synthesis of (1R,2S)-vinyl-ACCA.

  16. In vitro biofilm formation on resin-based composites after different finishing and polishing procedures.

    PubMed

    Cazzaniga, Gloria; Ottobelli, Marco; Ionescu, Andrei C; Paolone, Gaetano; Gherlone, Enrico; Ferracane, Jack L; Brambilla, Eugenio

    2017-12-01

    To evaluate the influence of surface treatments of different resin-based composites (RBCs) on S. mutans biofilm formation. 4 RBCs (microhybrid, nanohybrid, nanofilled, bulk-filled) and 6 finishing-polishing (F/P) procedures (open-air light-curing, light-curing against Mylar strip, aluminum oxide discs, one-step rubber point, diamond bur, multi-blade carbide bur) were evaluated. Surface roughness (SR) (n=5/group), gloss (n=5/group), scanning electron microscopy morphological analysis (SEM), energy-dispersive X-ray spectrometry (EDS) (n=3/group), and S. mutans biofilm formation (n=16/group) were assessed. EDS analysis was repeated after the biofilm assay. A morphological evaluation of S. mutans biofilm was also performed using confocal laser-scanning microscopy (CLSM) (n=2/group). The data were analyzed using Wilcoxon (SR, gloss) and two-way ANOVA with Tukey as post-hoc tests (EDS, biofilm formation). F/P procedures as well as RBCs significantly influenced SR and gloss. While F/P procedures did not significantly influence S. mutans biofilm formation, a significant influence of RBCs on the same parameter was found. Different RBCs showed different surface elemental composition. Both F/P procedures and S. mutans biofilm formation significantly modified this parameter. The tested F/P procedures significantly influenced RBCs surface properties but did not significantly affect S. mutans biofilm formation. The significant influence of the different RBCs tested on S. mutans biofilm formation suggests that material characteristics and composition play a greater role than SR. F/P procedures of RBCs may unexpectedly play a minor role compared to that of the restoration material itself in bacterial colonization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Development of a modularized two-step (M2S) chromosome integration technique for integration of multiple transcription units in Saccharomyces cerevisiae.

    PubMed

    Li, Siwei; Ding, Wentao; Zhang, Xueli; Jiang, Huifeng; Bi, Changhao

    2016-01-01

    Saccharomyces cerevisiae has already been used for heterologous production of fuel chemicals and valuable natural products, and the establishment of complicated heterologous biosynthetic pathways in S. cerevisiae has become a research focus of synthetic biology and metabolic engineering. Thus, simple and efficient techniques for the genomic integration of large numbers of transcription units are urgently needed. An efficient DNA assembly and chromosomal integration method was created by combining homologous recombination (HR) in S. cerevisiae with the Golden Gate DNA assembly method, designated the modularized two-step (M2S) technique. Two major assembly steps are performed consecutively to integrate multiple transcription units simultaneously. In Step 1, a modularized scaffold containing a head-to-head promoter module and a pair of terminators was assembled with two genes; thus, two transcription units were assembled into one scaffold in a single Golden Gate reaction. In Step 2, the two transcription units were mixed with modules of selective markers and integration sites and transformed into S. cerevisiae for assembly and integration. In both steps, universal primers were designed for identification of correct clones. Establishment of a functional β-carotene biosynthetic pathway in S. cerevisiae within 5 days demonstrated the high efficiency of this method, and a 10-transcription-unit pathway integration illustrated its capacity. Modular design of transcription units and integration elements simplified the assembly and integration procedure, and eliminated the frequent designing and synthesis of DNA fragments required in previous methods. Also, by assembling most parts in Step 1 in vitro, the number of DNA cassettes for homologous integration in Step 2 was significantly reduced. Thus, high assembly efficiency, high integration capacity, and a low error rate were achieved.

  18. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  19. Literature Reference for Influenza H5N1 (Emerging Infectious Diseases. 2005. 11(8): 1303–1305)

    EPA Pesticide Factsheets

    Procedures are described for analysis of clinical samples and may be adapted for assessment of solid, particulate, aerosol, liquid and water samples. This is a two-step, real-time reverse transcriptase-PCR multiplex assay.

  20. Shear bond strength of one-step self-etch adhesives to enamel: effect of acid pretreatment.

    PubMed

    Poggio, Claudio; Scribante, Andrea; Della Zoppa, Federica; Colombo, Marco; Beltrami, Riccardo; Chiesa, Marco

    2014-02-01

    The purpose of this study was to evaluate the effect of surface pretreatment with phosphoric acid on the enamel bond strength of four one-step self-etch adhesives with different pH values. One hundred bovine permanent mandibular incisors were used. The materials used in this study included four one-step self-etch adhesives with different pH values: Adper(™) Easy Bond Self-Etch Adhesive (pH = 0.8–1), Futurabond NR (pH = 1.4), G-aenial Bond (pH = 1.5), and Clearfil S3 Bond (pH = 2.7). One two-step self-etch adhesive (Clearfil SE Bond, pH = 0.8–1) was used as a control. The teeth were assigned to two subgroups according to bonding procedure. In the first subgroup (n = 50), no pretreatment agent was applied. In the second subgroup (n = 50), etching was performed using 37% phosphoric acid for 30 s. After application of the adhesive systems, a nanohybrid composite resin was placed on the enamel surface. The specimens were tested in a universal testing machine (Model 3343, Instron Corp., Canton, Mass., USA). After the testing procedure, the fractured surfaces were examined with an optical microscope at a magnification of 10× to determine failure modes. The adhesive remnant index (ARI) was used to assess the amount of adhesive left on the enamel surface. Descriptive statistics of the shear bond strength and the frequency distribution of ARI scores were calculated. Enamel pretreatment with phosphoric acid significantly increased the bond strength values of all the adhesives tested. No significant differences in bond strength were detected among the four one-step self-etch adhesives with different pH values. The two-step self-etch adhesive showed the highest bond strength. © 2013 John Wiley & Sons A/S.

  1. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for the simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite-volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of a Riemann problem for each face of the control volume, and evolution of the solution by one time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low-Mach-number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). The speedup of the solution on GPUs with respect to the solution on central processing units (CPUs) is compared for different meshes and different methods of distributing the input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
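
    The three-step Runge-Kutta time marching mentioned above has the generic shape shown below, here applied to a first-order upwind finite-volume discretization of 1D linear advection on a periodic mesh. The SSP-RK3 coefficients are textbook values, and the whole example is an illustration under those assumptions, not the solver's code.

    ```python
    import numpy as np

    def rk3_step(u, dt, dx, a=1.0):
        """One explicit three-step (SSP) Runge-Kutta step for 1D advection.

        Finite-volume first-order upwind flux on a periodic mesh; the scheme
        only illustrates the time-marching structure of a three-step RK method.
        """
        def rhs(v):
            flux = a * v                           # upwind flux for a > 0
            return -(flux - np.roll(flux, 1)) / dx

        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

    # March a smooth profile once around the periodic domain (CFL = 0.4)
    n = 200; dx = 1.0 / n; dt = 0.4 * dx
    u = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))
    for _ in range(int(1.0 / dt)):
        u = rk3_step(u, dt, dx)
    # u returns near its initial shape, smeared by first-order upwind diffusion
    ```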

  2. Data Cleaning in Mathematics Education Research: The Overlooked Methodological Step

    ERIC Educational Resources Information Center

    Hubbard, Aleata

    2017-01-01

    The results of educational research studies are only as accurate as the data used to produce them. Drawing on experiences conducting large-scale efficacy studies of classroom-based algebra interventions for community college and middle school students, I am developing practice-based data cleaning procedures to support scholars in conducting…

  3. Development of a Computer-Based Measure of Listening Comprehension of Science Talk

    ERIC Educational Resources Information Center

    Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien

    2015-01-01

    The purpose of this study was to develop a computer-based assessment for elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had 3 steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…

  4. On contact modelling in isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Cardoso, R. P. R.; Adetoro, O. B.

    2017-11-01

    IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are essentially due to: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates possible to achieve; (iii) the possibility to operate directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory characteristic of the basis functions, which adds a difficulty in handling essential boundary conditions and makes it particularly challenging for contact analysis. In this work, the IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are going to be used for the modelling of rigid tools as well as for the modelling of the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme, where during the first step a global search algorithm is used for the allocation of contact knots into potential contact faces and a second (local) contact search scheme where point inversion techniques are used for the calculation of the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.
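
    The non-interpolatory character of the bases can be seen directly from the Cox-de Boor recursion underlying B-spline/NURBS bases: in the toy evaluation below, the interior quadratic bases reach only 0.5 at the interior knot, which is why imposing essential boundary conditions and contact constraints is harder than with Lagrange elements. This is a textbook recursion, not the paper's code.

    ```python
    def bspline_basis(i, p, u, knots):
        """Cox-de Boor recursion for the i-th B-spline basis of degree p."""
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left_den = knots[i + p] - knots[i]
        right_den = knots[i + p + 1] - knots[i + 1]
        left = 0.0 if left_den == 0 else \
            (u - knots[i]) / left_den * bspline_basis(i, p - 1, u, knots)
        right = 0.0 if right_den == 0 else \
            (knots[i + p + 1] - u) / right_den * bspline_basis(i + 1, p - 1, u, knots)
        return left + right

    # Quadratic bases on an open knot vector: at the interior knot u = 0.5
    # no basis equals 1, i.e. the bases are non-interpolatory there.
    knots = [0, 0, 0, 0.5, 1, 1, 1]
    print([round(bspline_basis(i, 2, 0.5, knots), 3) for i in range(4)])
    # -> [0.0, 0.5, 0.5, 0.0]
    ```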

  5. The NASA Constellation Program Procedure System

    NASA Technical Reports Server (NTRS)

    Phillips, Robert G.; Wang, Lui

    2010-01-01

    NASA has used procedures to describe activities to be performed onboard vehicles by astronaut crews and on the ground by flight controllers since Apollo. Starting with later Space Shuttle missions and the International Space Station, NASA moved forward to electronic presentation of procedures. For the Constellation Program, another large step forward is being taken: to make procedures more interactive with the vehicle and to assist the crew in controlling the vehicle more efficiently and with less error. The overall name for the project is the Constellation Procedure Applications Software System (CxPASS). This paper describes some of the history behind this effort, the key concepts and operational paradigms that the work is based upon, and the actual products being developed to implement procedures for Constellation.

  6. Interdisciplinary cognitive task analysis: a strategy to develop a comprehensive endoscopic retrograde cholangiopancreatography protocol for use in fellowship training.

    PubMed

    Canopy, Erin; Evans, Matt; Boehler, Margaret; Roberts, Nicole; Sanfey, Hilary; Mellinger, John

    2015-10-01

    Endoscopic retrograde cholangiopancreatography is a challenging procedure performed by surgeons and gastroenterologists. We employed cognitive task analysis to identify steps and decision points for this procedure. Standardized interviews were conducted with expert gastroenterologists (7) and surgeons (4) from 4 institutions. A procedural step and cognitive decision point protocol was created from audio-taped transcriptions and was refined by 5 additional surgeons. Conceptual elements, sequential actions, and decision points were iterated for 5 tasks: patient preparation, duodenal intubation, selective cannulation, imaging interpretation with related therapeutic intervention, and complication management. A total of 180 steps were identified. Gastroenterologists identified 34 steps not identified by surgeons, and surgeons identified 20 steps not identified by gastroenterologists. The findings suggest that for complex procedures performed by diverse practitioners, more experts may help delineate distinctive emphases differentiated by training background and type of practice. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
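
    The local linear-regression adjustment at the heart of this approach can be sketched as below (unweighted and untransformed for brevity; ABCreg itself supports transformations and follows the weighted regression of the literature). The toy model and all names are illustrative assumptions.

    ```python
    import numpy as np

    def abc_reg(sim_params, sim_stats, obs_stats, accept_frac=0.1):
        """Minimal sketch of ABC with local linear-regression adjustment.

        sim_params: (n,) parameter draws from the prior
        sim_stats:  (n, d) summary statistics of the corresponding simulations
        obs_stats:  (d,) observed summary statistics
        Accepts the closest simulations, regresses parameters on statistics,
        and shifts the accepted draws to the observed point.
        """
        d = np.linalg.norm(sim_stats - obs_stats, axis=1)
        keep = d <= np.quantile(d, accept_frac)
        X = np.column_stack([np.ones(keep.sum()), sim_stats[keep] - obs_stats])
        beta, *_ = np.linalg.lstsq(X, sim_params[keep], rcond=None)
        # Adjusted draws: residuals translated to the observed statistics
        return sim_params[keep] - (sim_stats[keep] - obs_stats) @ beta[1:]

    # Toy check: estimate a normal mean using the sample mean as the summary
    rng = np.random.default_rng(2)
    theta = rng.uniform(-5, 5, 20000)
    stats = (theta[:, None] + rng.normal(0, 1, (20000, 50))).mean(
        axis=1, keepdims=True)
    posterior = abc_reg(theta, stats, np.array([1.3]))
    print(posterior.mean())  # should be near 1.3
    ```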

  8. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    PubMed Central

    Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is appropriate for handling geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further. PMID:29186922

  9. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    PubMed

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is appropriate for handling geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  10. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on the optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on optimizing the convergence of the iterations to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
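
    For intuition, the classical Chebyshev iteration in its stable three-term-recurrence form is sketched below on a small SPD system with known spectral bounds; the paper's scheme applies the same polynomial device to the time integration of parabolic problems, which this toy does not reproduce.

    ```python
    import numpy as np

    def chebyshev_solve(A, b, lmin, lmax, n_steps=200):
        """Chebyshev iteration sketch (three-term recurrence form).

        Exploits the optimal properties of Chebyshev polynomials on the
        spectral interval [lmin, lmax] of the SPD matrix A.
        """
        theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
        sigma = theta / delta
        rho = 1.0 / sigma
        x = np.zeros_like(b)
        r = b - A @ x
        d = r / theta
        for _ in range(n_steps):
            x = x + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return x

    # 1D Laplacian test problem with analytically known spectral bounds
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lmin = 2 - 2 * np.cos(np.pi / (n + 1))
    lmax = 2 - 2 * np.cos(np.pi * n / (n + 1))
    b = np.ones(n)
    x = chebyshev_solve(A, b, lmin, lmax)
    print(np.linalg.norm(A @ x - b))  # residual should be small
    ```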

  11. A step-up test procedure to find the minimum effective dose.

    PubMed

    Wang, Weizhen; Peng, Jianan

    2015-01-01

    It is of great interest to find the minimum effective dose (MED) in dose-response studies. A sequence of decreasing null hypotheses for finding the MED is formulated under the assumption of nondecreasing dose-response means. A step-up multiple test procedure that controls the familywise error rate (FWER) is constructed based on the maximum likelihood estimators for the monotone normal means. When the MED is equal to one, the proposed test is uniformly more powerful than Hsu and Berger's test (1999). A simulation study also shows a substantial power improvement for the proposed test over four competitors. Three R codes are provided in the Supplemental Materials for this article; go to the publisher's online edition of the Journal of Biopharmaceutical Statistics to view the files.

  12. Protocol for Detection of Yersinia pestis in Environmental ...

    EPA Pesticide Factsheets

    Methods Report. This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, for detecting Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a sample-processing procedure for each sample type in a step-by-step manner, and the protocol covers real-time PCR, traditional microbiological culture, and Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because of this non-restrictive availability to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze a large number of samples during a wide-area plague incident.

  13. On the wing behaviour of the overtones of self-localized modes

    NASA Astrophysics Data System (ADS)

    Dusi, R.; Wagner, M.

    1998-08-01

    In this paper the solutions for self-localized modes in a nonlinear chain are investigated. We present a converging iteration procedure, which is based on analytical information about the wings and which takes into account higher overtones of the solitonic oscillations. The accuracy is controlled step by step by means of a Gaussian error analysis. Our numerical procedure allows for highly accurate solutions in all anharmonicity regimes and beyond the rotating-wave approximation (RWA). It is found that the overtone wings change their analytical behaviour at certain critical values of the energy of the self-localized mode: there is a turnover in the exponent of descent. Results are shown for a Fermi-Pasta-Ulam (FPU) chain with quartic anharmonicity.
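
    The paper's iteration carries higher overtones and goes beyond the RWA; as a runnable baseline only, the sketch below computes a self-localized mode profile for an FPU chain with quartic anharmonicity within the plain RWA, solving the resulting algebraic system by Newton-type iteration (scipy.optimize.fsolve) from a localized, staggered seed. The chain length, frequency, and seed shape are assumed values.

        import numpy as np
        from scipy.optimize import fsolve

        N, beta = 41, 1.0    # chain length and quartic coupling (assumed)
        omega = 2.2          # mode frequency above the phonon band edge (= 2)

        def rwa_residual(A):
            # RWA ansatz u_n(t) = A_n cos(omega t) in the FPU-beta chain with
            # fixed ends reduces the equations of motion to
            # omega^2 A_n + (A_{n+1} - 2 A_n + A_{n-1})
            #   + (3 beta / 4) [(A_{n+1}-A_n)^3 - (A_n-A_{n-1})^3] = 0.
            Ap = np.concatenate(([0.0], A, [0.0]))   # fixed boundary amplitudes
            lin = Ap[2:] - 2.0*Ap[1:-1] + Ap[:-2]
            dr = np.diff(Ap)                          # bond stretches
            nl = 0.75*beta*(dr[1:]**3 - dr[:-1]**3)
            return omega**2 * A + lin + nl

        n = np.arange(N)
        seed = 0.5*(-1.0)**n * np.exp(-0.5*np.abs(n - N//2))  # staggered, localized
        A = fsolve(rwa_residual, seed)
        print(A[N//2], np.abs(rwa_residual(A)).max())  # core amplitude, residual

    The wing behaviour studied in the paper concerns precisely what this baseline misses: how the overtone components of A decay far from the core.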

  14. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life-cycle) are provided, showing how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  15. Development of quality indicators for physiotherapy for patients with PAOD in the Netherlands: a Delphi study.

    PubMed

    Gijsbers, H J H; Lauret, G J; van Hofwegen, A; van Dockum, T A; Teijink, J A W; Hendriks, H J M

    2016-06-01

    The aim of the study was to develop quality indicators (QIs) for physiotherapy management of patients with intermittent claudication (IC) in the Netherlands. As part of an international six-step method for developing QIs, an online Delphi survey procedure was completed. After two Delphi rounds, a validation round was performed. Twenty-six experts were recruited to participate in this study, of whom twenty-four completed both Delphi rounds. In a third round, 1200 qualified and registered physiotherapists of the Dutch integrated care network 'Claudicationet' were invited to validate the draft set of quality indicators. Out of 83 potential QIs in the Dutch physiotherapy guideline on intermittent claudication, the experts selected nine indicators by consensus, and all nine were validated by 300 physiotherapists. The final set of nine indicators was thus derived from (1) a Dutch evidence-based physiotherapy guideline, (2) an expert Delphi procedure, and (3) a validation by 300 physiotherapists. This set of indicators should now be validated in clinical practice. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  16. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first-order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite-difference method for the viscous part. Both procedures integrate in time in a step-by-step fashion, with each step involving the solution of the elliptic Laplace equation and of the parabolic boundary-layer equations. The Reynolds shear stress term of the boundary-layer equations is modeled by an algebraic eddy-viscosity closure, and the location of transition is predicted by an empirical data correlation due to Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.
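
    To illustrate the viscous half of such a procedure, here is a minimal sketch of one implicit finite-difference step: backward-Euler diffusion across the layer, solved with the Thomas (tridiagonal) algorithm. In the full scheme the edge velocity at each step would come from the panel-method solve; here it is an assumed constant, and the streamwise convection terms of the boundary-layer equations are omitted.

        import numpy as np

        def thomas(a, b, c, d):
            # Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal.
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0]/b[0], d[0]/b[0]
            for i in range(1, n):
                m = b[i] - a[i]*cp[i-1]
                cp[i] = c[i]/m
                dp[i] = (d[i] - a[i]*dp[i-1])/m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i]*x[i+1]
            return x

        def viscous_step(u, nu, dt, dy, u_e):
            # One backward-Euler step of du/dt = nu * d2u/dy2 across the layer.
            n = len(u)
            r = nu*dt/dy**2
            a = np.full(n, -r); b = np.full(n, 1.0 + 2.0*r); c = np.full(n, -r)
            d = u.copy()
            a[0], b[0], c[0], d[0] = 0.0, 1.0, 0.0, 0.0      # wall: no slip
            a[-1], b[-1], c[-1], d[-1] = 0.0, 1.0, 0.0, u_e  # edge velocity
            return thomas(a, b, c, d)

        # Time loop; in the full method u_e would be updated from the panel solve.
        nu, dt, dy = 1.5e-5, 1.0e-3, 1.0e-4
        u = np.zeros(101)
        for _ in range(200):
            u = viscous_step(u, nu, dt, dy, u_e=10.0)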

  17. Computer aided planning of orthopaedic surgeries: the definition of generic planning steps for bone removal procedures.

    PubMed

    Putzer, David; Moctezuma, Jose Luis; Nogler, Michael

    2017-11-01

    An increasing number of orthopaedic surgeons are using computer-aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for 3D computer-assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed, and the result was verified against 243 surgical procedures from a surgical atlas. Fourteen generic functions for use in 3D computer-assisted planning and simulation were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps, drawing on ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited set of 14 planning functions it is possible to plan 243 surgical procedures from Campbell's Operative Orthopedics atlas. The results may serve as a basis for versatile generic intraoperative planning software.

  18. Evaluation of atomic pressure in the multiple time-step integration algorithm.

    PubMed

    Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu

    2017-04-15

    In molecular dynamics (MD) calculations, reducing the calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), reduces calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints: it is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value obtained with the STS, whereas pressures calculated using the conventional ad hoc equations deviate from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
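
    For orientation, the sketch below shows the standard r-RESPA velocity-Verlet structure the paper builds on: slow forces applied as half-kicks at the outer level, fast forces integrated with several inner substeps. It does not implement the paper's pressure equations; the article's contribution is precisely which intermediate and constraint forces enter the virial, and that bookkeeping is omitted here. The force split and parameters are assumed toy values.

        import numpy as np

        def respa_step(x, v, m, fast_force, slow_force, dt, n_inner):
            # One r-RESPA step: outer half-kick with the slow force, n_inner
            # velocity-Verlet substeps with the fast force, outer half-kick.
            dt_in = dt/n_inner
            v = v + 0.5*dt*slow_force(x)/m
            f = fast_force(x)
            for _ in range(n_inner):
                v = v + 0.5*dt_in*f/m
                x = x + dt_in*v
                f = fast_force(x)
                v = v + 0.5*dt_in*f/m
            v = v + 0.5*dt*slow_force(x)/m
            return x, v

        # Toy split: stiff + soft harmonic springs acting on one particle.
        m, k_fast, k_slow = 1.0, 100.0, 1.0
        fast = lambda x: -k_fast*x
        slow = lambda x: -k_slow*x
        x, v = 1.0, 0.0
        for _ in range(1000):
            x, v = respa_step(x, v, m, fast, slow, dt=0.05, n_inner=10)
        print(0.5*m*v**2 + 0.5*(k_fast + k_slow)*x**2)  # stays near 50.5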

  19. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure for improving the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics of precipitation estimates and thus fail to adequately inform either data producers or users. The proposed validation procedure has two steps: 1) the total retrieval error is decomposed into three independent components: hit error, false precipitation, and missed precipitation; and 2) the hit error is further analyzed with a multiplicative error model whose three parameters capture the error features. In this way, the model separates systematic from random errors, leading to more accurate quantification of the uncertainties. The procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-satellite Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) over seven years (2005-2011) across the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west-coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest; the summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows clear improvements in reducing random errors in both winter and summer, compared with its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all metrics over CONUS. Based on these results, we recommend that the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect it to work effectively for the higher-resolution data to be produced in the Global Precipitation Measurement (GPM) era.
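
    A minimal sketch of the two-step procedure, under the common assumption that the multiplicative model takes the form est = alpha * ref**beta * eps (the abstract specifies three model parameters but not the exact functional form): errors are first decomposed via hit/false/missed masks, then the hit pairs are fitted by linear regression in log space. The rain/no-rain threshold is an assumed value.

        import numpy as np

        def decompose_errors(est, ref, thresh=0.1):
            # Split the total error into hit / false / missed components.
            hit    = (est >= thresh) & (ref >= thresh)
            false_ = (est >= thresh) & (ref <  thresh)
            missed = (est <  thresh) & (ref >= thresh)
            return ((est - ref)[hit].sum(),   # hit error
                    est[false_].sum(),        # false precipitation
                    -ref[missed].sum())       # missed precipitation (negative bias)

        def fit_multiplicative(est, ref, thresh=0.1):
            # Fit est = alpha * ref**beta * eps over hits; (alpha, beta) capture
            # the systematic error, the std of log-residuals the random error.
            hit = (est >= thresh) & (ref >= thresh)
            X, Y = np.log(ref[hit]), np.log(est[hit])
            beta, log_alpha = np.polyfit(X, Y, 1)
            resid = Y - (log_alpha + beta*X)
            return np.exp(log_alpha), beta, resid.std()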

  20. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has primarily been solved using a 3 × 3 matrix obtained via least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case we minimize the CIE ΔE error, while for the spatially based cases we minimize both the S-CIELAB error and the CID error measure. Our results show an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error, and 13% for the CID error measure. PMID:25490586
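
    For contrast with the proposed perceptual optimization, here is the least-squares baseline the paper improves on: the 3 × 3 matrix minimizing the squared XYZ error over training patches. The chart data in the usage sketch are synthetic placeholders; the paper instead searches candidate matrices via spherical sampling to minimize ΔE, S-CIELAB, or CID.

        import numpy as np

        def fit_color_matrix(rgb, xyz):
            # Baseline characterization: the 3x3 matrix M minimizing
            # ||rgb @ M.T - xyz||^2 over training patches (least squares).
            # rgb, xyz: (N, 3) arrays of camera responses and reference XYZ.
            M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
            return M.T

        # Usage sketch with synthetic (hypothetical) chart data:
        rng = np.random.default_rng(0)
        rgb = rng.random((24, 3))               # e.g. a 24-patch chart
        true_M = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.0, 0.2, 0.8]])
        xyz = rgb @ true_M.T + 0.01*rng.standard_normal((24, 3))
        M = fit_color_matrix(rgb, xyz)
        xyz_hat = rgb @ M.T                     # mapped camera colors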
