Sample records for efficient procedure based

  1. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    NASA Astrophysics Data System (ADS)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
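
    For orientation, the classical SCS-CN runoff equation that the paper re-examines is compact enough to sketch directly. The sketch below uses the textbook form with the conventional initial-abstraction ratio Ia = 0.2S; it reflects the original method, not the renewed parameterization the authors propose.

    ```python
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """Direct runoff Q (mm) from storm rainfall P (mm), classical SCS-CN.

        S is the potential maximum retention derived from the curve number CN;
        Ia is the initial abstraction, conventionally 0.2 * S.
        """
        s = 25400.0 / cn - 254.0            # retention parameter S (mm)
        ia = ia_ratio * s                   # initial abstraction (mm)
        if p_mm <= ia:
            return 0.0                      # all rainfall abstracted: no runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # Example: an 80 mm storm on a catchment with CN = 75 yields ~27 mm of runoff
    print(round(scs_cn_runoff(80.0, 75), 1))
    ```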

  2. Evaluation of modal pushover-based scaling of one component of ground motion: Tall buildings

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2012-01-01

Nonlinear response history analysis (RHA) is now increasingly used for performance-based seismic design of tall buildings. Nonlinear RHA requires a set of ground motions selected and scaled appropriately so that analysis results are accurate (unbiased) and efficient (having relatively small dispersion). This paper evaluates the accuracy and efficiency of the recently developed modal-pushover-based scaling (MPS) method for scaling ground motions for tall buildings. The procedure presented explicitly considers structural strength and is based on the standard intensity measure (IM) of spectral acceleration in a form convenient for evaluating existing structures or proposed designs for new structures. Based on results presented for two actual buildings (19 and 52 stories, respectively), it is demonstrated that the MPS procedure provided a highly accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. In addition, the MPS procedure is shown to be superior to the scaling procedure specified in the ASCE/SEI 7-05 document.

  3. Assessment of the application of an ecotoxicological procedure to screen illicit toxic discharges in domestic septic tank sludge.

    PubMed

    López-Gastey, J; Choucri, A; Robidoux, P Y; Sunahara, G I

    2000-06-01

An innovative screening procedure has been developed to detect illicit toxic discharges in domestic septic tank sludge hauled to the Montreal Urban Community wastewater treatment plant. This new means of control is based on an integrative approach, using bioassays and chemical analyses. Conservative criteria are applied to detect abnormal toxicity with great reliability while avoiding false positive results. The complementary data obtained from toxicity tests and chemical analyses support the use of this efficient and easy-to-apply procedure. This study assesses the control procedure, in which 231 samples were analyzed over a 30-month period. The data clearly demonstrate the deterrent power of an efficient control procedure combined with a public awareness campaign among the carriers. In the first 15 months of application, between January 1996 and March 1997, approximately 30% of the 123 samples analyzed showed abnormal toxicity. Between April 1997 and June 1998, that is, after a public hearing presentation of this procedure, this proportion dropped significantly to approximately 9% of 108 analyzed samples. The results of the 30-month application of this new control procedure show the superior efficiency of the ecotoxicological approach compared with the previously used chemical control procedure. So that the procedure can be applied effectively and, if necessary, the appropriate coercive measures taken, ecotoxicological criteria should be included in regulatory guidelines.

  4. Minimization of a Class of Matrix Trace Functions by Means of Refined Majorization.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; ten Berge, Jos M. F.

    1992-01-01

    A procedure is described for minimizing a class of matrix trace functions, which is a refinement of an earlier procedure for minimizing the class of matrix trace functions using majorization. Several trial analyses demonstrate that the revised procedure is more efficient than the earlier majorization-based procedure. (SLD)

  5. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  6. ELISA AND SOL-GEL BASED IMMUNOAFFINITY PURIFICATION OF THE PYRETHROID BIOALLETHRIN IN FOOD AND ENVIRONMENTAL SAMPLES

    EPA Science Inventory

    The peer-reviewed article describes the development of a new sol-gel based immunoaffinity purification procedure and an immunoassay for the pyrethroid bioallethrin. The immunoaffinity chromatography procedure was applied to food samples providing an efficient cleanup prior to im...

  7. Using Decision Procedures to Build Domain-Specific Deductive Synthesis Systems

    NASA Technical Reports Server (NTRS)

    VanBaalen, Jeffrey; Roach, Steven; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes a class of decision procedures that we have found useful for efficient, domain-specific deductive synthesis. These procedures are called closure-based ground literal satisfiability procedures. We argue that this is a large and interesting class of procedures and show how to interface these procedures to a theorem prover for efficient deductive synthesis. Finally, we describe some results we have observed from our implementation. Amphion/NAIF is a domain-specific, high-assurance software synthesis system. It takes an abstract specification of a problem in solar system mechanics, such as 'when will a signal sent from the Cassini spacecraft to Earth be blocked by the planet Saturn?', and automatically synthesizes a FORTRAN program to solve it.

  8. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the chosen method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute solutions at off-step points.
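
    As a minimal illustration of the h-adaptivity described above, the sketch below implements an elementary error-based stepsize controller for a method of order p. The safety factor and clipping bounds are illustrative assumptions; the authors' actual controller and the p-adaptivity stability test are more elaborate.

    ```python
    def next_stepsize(h, err, tol, p, safety=0.9, fmin=0.2, fmax=5.0):
        """Propose the next stepsize so the local error estimate tracks tol.

        Standard controller for a method of order p: the growth/shrink factor
        is (tol/err)^(1/(p+1)), damped by a safety factor and clipped.
        """
        if err == 0.0:
            return h * fmax                  # error negligible: grow at the cap
        factor = safety * (tol / err) ** (1.0 / (p + 1))
        return h * min(fmax, max(fmin, factor))

    # A step is accepted when err <= tol; otherwise it is rejected and retried
    # with the smaller stepsize proposed above.
    print(next_stepsize(h=0.01, err=2e-6, tol=1e-6, p=2))
    ```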

  9. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.

    PubMed

    Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y

    2016-11-01

Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five-procedure-room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and of recovery rooms is nine for a five-procedure-room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
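
    A toy version of such a DES model can be written with the SimPy library; a minimal sketch is given below. Only the room counts (eight preparation, five procedure, nine recovery) come from the abstract; the arrival rate and the triangular service-time distributions are invented placeholders.

    ```python
    import random
    import simpy

    PREP, PROC, RECOV = 8, 5, 9            # room counts from the abstract

    def patient(env, prep, proc, recov, cycle_times):
        """One patient flowing through preparation, procedure, and recovery."""
        arrive = env.now
        with prep.request() as r:
            yield r
            yield env.timeout(random.triangular(15, 40, 25))   # prep (min)
        with proc.request() as r:
            yield r
            yield env.timeout(random.triangular(20, 60, 30))   # procedure
        with recov.request() as r:
            yield r
            yield env.timeout(random.triangular(30, 90, 50))   # recovery
        cycle_times.append(env.now - arrive)

    def arrivals(env, prep, proc, recov, cycle_times):
        while True:
            yield env.timeout(random.expovariate(1 / 12.0))    # ~5 patients/hour
            env.process(patient(env, prep, proc, recov, cycle_times))

    random.seed(1)
    env = simpy.Environment()
    rooms = [simpy.Resource(env, n) for n in (PREP, PROC, RECOV)]
    cycle_times = []
    env.process(arrivals(env, *rooms, cycle_times))
    env.run(until=8 * 60)                  # one 8-hour day, in minutes
    print(len(cycle_times), "patients, mean cycle",
          round(sum(cycle_times) / len(cycle_times)), "min")
    ```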

  10. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling

    PubMed Central

    Sauer, Bryan G.; Singh, Kanwar P.; Wagner, Barry L.; Vanden Hoek, Matthew S.; Twilley, Katherine; Cohn, Steven M.; Shami, Vanessa M.; Wang, Andrew Y.

    2016-01-01

Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five-procedure-room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and of recovery rooms is nine for a five-procedure-room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience. PMID:27853739

  11. Intranet-based quality improvement documentation at the Veterans Affairs Maryland Health Care System.

    PubMed

    Borkowski, A; Lee, D H; Sydnor, D L; Johnson, R J; Rabinovitch, A; Moore, G W

    2001-01-01

    The Pathology and Laboratory Medicine Service of the Veterans Affairs Maryland Health Care System is inspected biannually by the College of American Pathologists (CAP). As of the year 2000, all documentation in the Anatomic Pathology Section is available to all staff through the VA Intranet. Signed, supporting paper documents are on file in the office of the department chair. For the year 2000 CAP inspection, inspectors conducted their document review by use of these Web-based documents, in which each CAP question had a hyperlink to the corresponding section of the procedure manual. Thus inspectors were able to locate the documents relevant to each question quickly and efficiently. The procedure manuals consist of 87 procedures for surgical pathology, 52 procedures for cytopathology, and 25 procedures for autopsy pathology. Each CAP question requiring documentation had from one to three hyperlinks to the corresponding section of the procedure manual. Intranet documentation allows for easier sharing among decentralized institutions and for centralized updates of the laboratory documentation. These documents can be upgraded to allow for multimedia presentations, including text search for key words, hyperlinks to other documents, and images, audio, and video. Use of Web-based documents can improve the efficiency of the inspection process.

  12. How many records should be used in ASCE/SEI-7 ground motion scaling procedure?

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2012-01-01

U.S. national building codes refer to the ASCE/SEI-7 provisions for selecting and scaling ground motions for use in nonlinear response history analysis of structures. Because the limiting values for the number of records in the ASCE/SEI-7 are based on engineering experience, this study examines the required number of records statistically, such that the scaled records provide accurate, efficient, and consistent estimates of “true” structural responses. Based on elastic–perfectly plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI-7 scaling procedure is applied to 480 sets of ground motions; the number of records in these sets varies from three to ten. As compared to benchmark responses, it is demonstrated that the ASCE/SEI-7 scaling procedure is conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the responses. Selecting records based on their spectral shape and design spectral acceleration increases the accuracy and efficiency of the procedure.
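
    The statistical logic behind "more records, less dispersion" can be illustrated with a small Monte Carlo sketch. The lognormal spread of the synthetic record-to-record responses below is an assumption for illustration, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic peak responses for a pool of 480 scaled records
    edp_pool = rng.lognormal(mean=0.0, sigma=0.4, size=480)

    for n in (3, 5, 7, 10):
        # estimate of the median response from many random n-record sets
        estimates = [np.median(rng.choice(edp_pool, n, replace=False))
                     for _ in range(2000)]
        print(f"n={n:2d} records: std of median estimate = {np.std(estimates):.3f}")
    ```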

  13. CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas

    2015-09-01

Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.

  14. Line pilot perspectives on complexity of terminal instrument flight procedures

    DOT National Transportation Integrated Search

    2016-09-01

    Many new Performance Based Navigation (PBN) Instrument Flight Procedures (IFPs) are being developed as the United States transforms its airspace to improve safety and efficiency. Despite significant efforts to prepare for operational implementation o...

  15. The prediction of acoustical particle motion using an efficient polynomial curve fit procedure

    NASA Technical Reports Server (NTRS)

    Marshall, S. E.; Bernhard, R.

    1984-01-01

A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.

  16. Identifying Nonprovider Factors Affecting Pediatric Emergency Medicine Provider Efficiency.

    PubMed

    Saleh, Fareed; Breslin, Kristen; Mullan, Paul C; Tillett, Zachary; Chamberlain, James M

    2017-10-31

The aim of this study was to create a multivariable model of standardized relative value units per hour by adjusting for nonprovider factors that influence efficiency. We obtained productivity data based on billing records measured in emergency relative value units for (1) evaluation and management visits and (2) procedures for 16 pediatric emergency medicine providers with more than 750 hours worked per year. Eligible shifts were in an urban, academic pediatric emergency department (ED) with 2 sites: a tertiary care main campus and a satellite community site. We used multivariable linear regression to adjust for the impact of shift and pediatric ED characteristics on individual-provider efficiency and then removed variables from the model with minimal effect on productivity. There were 2998 eligible shifts for the 16 providers during a 3-year period. The resulting model included 4 variables when looking at both ED sites combined. These variables include the following: (1) number of procedures billed by provider, (2) season of the year, (3) shift start time, and (4) day of week. Results were improved when we separately modeled each ED location. A 3-variable model using procedures billed by provider, shift start time, and season explained 23% of the variation in provider efficiency at the academic ED site. A 3-variable model using procedures billed by provider, patient arrivals per hour, and shift start time explained 45% of the variation in provider efficiency at the satellite ED site. Several nonprovider factors affect provider efficiency. These factors should be considered when designing productivity-based incentives.
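
    The kind of adjustment model described here can be sketched with the statsmodels formula API, treating season, shift start, and so on as categorical predictors. The data frame below is a fabricated illustration (day of week omitted for brevity); the column names are not the study's actual variables.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # one row per shift: standardized RVUs/hour plus candidate nonprovider factors
    shifts = pd.DataFrame({
        "rvu_per_hr":  [6.1, 5.4, 7.0, 4.8, 6.6, 5.9, 7.2, 5.1, 6.3, 5.7, 6.9, 5.0],
        "procedures":  [3, 1, 4, 0, 3, 2, 5, 1, 2, 1, 4, 0],
        "season":      ["winter", "spring", "summer", "fall"] * 3,
        "shift_start": ["day", "evening", "night"] * 4,
    })

    # multivariable linear model: efficiency adjusted for nonprovider factors
    model = smf.ols("rvu_per_hr ~ procedures + C(season) + C(shift_start)",
                    data=shifts).fit()
    print(model.params)      # adjusted effect of each factor on RVUs/hour
    ```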

  17. Fast Fragmentation of Networks Using Module-Based Attacks

    PubMed Central

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities through which the network can be represented, using a well-established heuristic algorithm for community finding. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610
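
    The three steps of the attack translate almost directly into networkx, as in the sketch below. One simplification to note: betweenness is ranked once on the intact graph here, whereas recomputation after each removal is another plausible variant.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()                 # stand-in for a real network

    # 1) find communities with a standard modularity heuristic
    comms = community.greedy_modularity_communities(G)
    label = {v: i for i, c in enumerate(comms) for v in c}

    # 2) candidate targets: only nodes incident to inter-community links
    bridges = {v for u, w in G.edges if label[u] != label[w] for v in (u, w)}

    # 3) remove candidates in descending betweenness centrality
    rank = nx.betweenness_centrality(G)
    for node in sorted(bridges, key=rank.get, reverse=True):
        G.remove_node(node)
        giant = max(len(c) for c in nx.connected_components(G))
        print(f"removed node {node:2d}: largest fragment now {giant} nodes")
    ```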

  18. Energy-efficiency program for clothes washers, clothes dryers, and dishwashers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-11-01

The objectives of this study of dishwashers, clothes washers, and clothes dryers are: to evaluate existing energy efficiency test procedures and recommend the use of specific test procedures for each appliance group and to establish the maximum economically and technologically feasible energy-efficiency improvement goals for each appliance group. Specifically, the program requirements were to determine the energy efficiency of the 1972 models, to evaluate the feasibility improvements that could be implemented by 1980 to maximize energy efficiency, and to calculate the percentage efficiency improvement based on the 1972 baseline and the recommended 1980 targets. The test program was conducted using 5 dishwashers, 4 top-loading clothes washers, one front-loading clothes washer, 4 electric clothes dryers, and 4 gas clothes dryers. (MCW)

  19. Co-Creation Learning Procedures: Comparing Interactive Language Lessons for Deaf and Hearing Students.

    PubMed

    Hosono, Naotsune; Inoue, Hiromitsu; Tomita, Yutaka

    2017-01-01

This paper discusses co-creation learning procedures for second-language lessons for deaf students, and for sign language lessons given by a deaf lecturer. The analyses focus on the learning procedure and the resulting assessment, considering the disability. Questionnaire results indicate that ICT-based co-creative learning technologies are effective and efficient and promote spontaneous learning motivation.

  20. Evaluating the efficiency of a zakat institution over a period of time using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin

    2017-08-01

It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past, so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process, as it involves the presence of multiple inputs or/and outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from 2007 to 2015, treating each year as a decision making unit. Two inputs (i.e. number of staff and number of branches) and two outputs (i.e. total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement were discussed based on the results.
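
    The dual (envelopment) form of the Charnes-Cooper-Rhodes model is a small linear program per year, so the whole analysis fits in a short scipy script. The input/output figures below are invented placeholders, and the slack-based model used in the paper for the second stage is not shown.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # columns = years (DMUs); rows = inputs (staff, branches) / outputs
    X = np.array([[30., 32, 35, 38, 40, 42, 45, 47, 50],      # staff
                  [ 5.,  5,  6,  6,  7,  7,  8,  8,  9]])     # branches
    Y = np.array([[10., 12, 15, 16, 20, 22, 25, 28, 30],      # total collection
                  [ 9., 11, 13, 15, 18, 20, 23, 26, 28]])     # total distribution
    n = X.shape[1]

    def ccr_efficiency(k):
        """Input-oriented CCR efficiency of DMU k: min theta s.t.
        X@lam <= theta*x_k, Y@lam >= y_k, lam >= 0 (variables: theta, lam)."""
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.vstack([np.c_[-X[:, [k]], X],                   # input side
                          np.c_[np.zeros((Y.shape[0], 1)), -Y]])  # output side
        b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return res.fun

    for k in range(n):
        print(f"year {2007 + k}: CCR efficiency = {ccr_efficiency(k):.3f}")
    ```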

  1. An efficient numerical procedure for thermohydrodynamic analysis of cavitating bearings

    NASA Technical Reports Server (NTRS)

    Vijayaraghavan, D.

    1995-01-01

An efficient and accurate numerical procedure to determine the thermohydrodynamic performance of cavitating bearings is described. This procedure is based on the earlier development of Elrod for lubricating films, in which the properties across the film thickness are determined at Lobatto points and their distributions are expressed by collocated polynomials. The cavitated regions and their boundaries are rigorously treated. Thermal boundary conditions at the surfaces, including heat dissipation through the metal to the ambient, are incorporated. Numerical examples are presented comparing the predictions of this procedure with earlier theoretical predictions and experimental data. With only a few points across the film thickness, and across the journal and the bearing in the radial direction, the temperature profile is very well predicted.

  2. Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.

    PubMed

    Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R

    2014-01-01

After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we have developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0% to 60% and were significantly higher from nonporous surfaces compared to porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, both of which are easy to perform in a laboratory with basic facilities. The data indicate that different swabbing procedures were optimal for each of the tested swab types, and the particular swab preference depends on the surface material to be swabbed.

  3. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from interferometric fringes that the iterated unscented Kalman filtering phase unwrapping model requires. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along a path from the high-quality to the low-quality areas of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption with respect to some of the most used algorithms.
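
    The heap-based quality-guided strategy is the most self-contained piece of the pipeline, and a simplified flood-fill version is sketched below (the iterated unscented Kalman filter and the matrix-pencil gradient estimator are not reproduced; plain nearest-neighbor unwrapping stands in for them).

    ```python
    import heapq
    import numpy as np

    def quality_guided_unwrap(wrapped, quality):
        """Unwrap a 2-D phase map, always growing from the best-quality
        frontier pixel (max-heap on quality)."""
        h, w = wrapped.shape
        out = wrapped.copy()
        done = np.zeros((h, w), dtype=bool)
        i0, j0 = np.unravel_index(np.argmax(quality), (h, w))
        done[i0, j0] = True
        heap = []

        def push_neighbors(i, j):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                    heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

        push_neighbors(i0, j0)
        while heap:
            _, i, j, pi, pj = heapq.heappop(heap)
            if done[i, j]:
                continue
            # add the 2*pi multiple that best matches the unwrapped neighbor
            d = wrapped[i, j] - out[pi, pj]
            out[i, j] = wrapped[i, j] - 2 * np.pi * np.round(d / (2 * np.pi))
            done[i, j] = True
            push_neighbors(i, j)
        return out

    # smoke test on a smooth synthetic ramp (uniform quality)
    truth = np.linspace(0, 12, 64)[None, :] * np.ones((64, 1))
    rec = quality_guided_unwrap(np.angle(np.exp(1j * truth)), np.ones((64, 64)))
    print(np.allclose(rec - rec[0, 0], truth - truth[0, 0], atol=1e-9))
    ```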

  4. Clinical-scale validation of a new efficient procedure for cryopreservation of ex vivo expanded cord blood hematopoietic stem and progenitor cells.

    PubMed

    Duchez, Pascale; Rodriguez, Laura; Chevaleyre, Jean; De La Grange, Philippe Brunet; Ivanovic, Zoran

    2016-12-01

    Survival of ex vivo expanded hematopoietic stem cells (HSC) and progenitor cells is low with the standard cryopreservation procedure. We recently showed that the efficiency of cryopreservation of these cells may be greatly enhanced by adding a serum-free xeno-free culture medium (HP01 Macopharma), which improves the antioxidant and biochemical properties of the cryopreservation solution. Here we present the clinical-scale validation of this cryopreservation procedure. The hematopoietic cells expanded in clinical-scale cultures were cryopreserved applying the new HP01-based procedure. The viability, apoptosis rate and number of functional committed progenitors (methyl-cellulose colony forming cell test), short-term repopulating HSCs (primary recipient NSG mice) and long-term HSCs (secondary recipient NSG mice) were tested before and after thawing. The efficiency of clinical-scale procedure reproduced the efficiency of cryopreservation obtained earlier in miniature sample experiments. Furthermore, the full preservation of short- and long-term HSCs was obtained in clinical scale conditions. Because the results obtained in clinical-scale volume are comparable to our earlier results in miniature-scale cultures, the clinical-scale procedure should be considered validated. It allows cryopreservation of the whole ex vivo expanded culture content, conserving full short- and long-term HSC activity. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  5. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
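
    The response-surface step reduces to fitting a full quadratic model to the CCD runs and then maximizing it, which the sketch below illustrates on synthetic data (the design points and the efficiency surface are placeholders, not the paper's CFD results).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def quad_features(X):
        """Full quadratic RSM terms: 1, x_i, x_i^2, x_i*x_j (coded units)."""
        n, d = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(d)] \
             + [X[:, i] ** 2 for i in range(d)] \
             + [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
        return np.column_stack(cols)

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(25, 4))       # stand-in CCD points, 4 factors
    y = 0.90 - 0.05 * (X ** 2).sum(axis=1) + 0.02 * X[:, 0] * X[:, 1]

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

    def neg_efficiency(x):
        return -float(quad_features(x.reshape(1, -1)) @ beta)

    res = minimize(neg_efficiency, x0=np.zeros(4), bounds=[(-1, 1)] * 4)
    print(f"predicted optimum: {-res.fun:.3f} at x = {np.round(res.x, 2)}")
    ```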

  6. An efficient cardiac mapping strategy for radiofrequency catheter ablation with active learning.

    PubMed

    Feng, Yingjing; Guo, Ziyan; Dong, Ziyang; Zhou, Xiao-Yun; Kwok, Ka-Wai; Ernst, Sabine; Lee, Su-Lin

    2017-07-01

A major challenge in radiofrequency catheter ablation procedures is the voltage and activation mapping of the endocardium, given a limited mapping time. By learning from expert interventional electrophysiologists (operators), while also making use of an active-learning framework, guidance on performing cardiac voltage mapping can be provided to novice operators or even directly to catheter robots. A learning from demonstration (LfD) framework, based upon previous cardiac mapping procedures performed by an expert operator, in conjunction with Gaussian process (GP) model-based active learning, was developed to efficiently perform voltage mapping over right ventricles (RV). The GP model was used to output the next best mapping point, while getting updated towards the underlying voltage data pattern as more mapping points are taken. A regularized particle filter was used to keep track of the kernel hyperparameter used by GP. The travel cost of the catheter tip was incorporated to produce time-efficient mapping sequences. The proposed strategy was validated on a simulated 2D grid mapping task, with leave-one-out experiments on 25 retrospective datasets, in an RV phantom using the Stereotaxis Niobe® remote magnetic navigation system, and on a tele-operated catheter robot. In comparison with an existing geometry-based method, regression error was reduced and was minimized at a faster rate over retrospective procedure data. A new method of catheter mapping guidance has been proposed based on LfD and active learning. The proposed method provides real-time guidance for the procedure, as well as a live evaluation of mapping sufficiency.
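
    A toy version of the uncertainty-plus-travel-cost acquisition can be written with scikit-learn's GP regressor, as below. The 2-D grid, the surrogate voltage field, and the travel-cost weight are all invented; the paper's particle-filtered kernel hyperparameters and LfD prior are not reproduced.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    grid = np.array([[i, j] for i in range(20) for j in range(20)], dtype=float)
    true_v = np.sin(grid[:, 0] / 4.0) + np.cos(grid[:, 1] / 5.0)  # toy voltage map

    visited = [0]                                  # indices already mapped
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-3)

    for _ in range(30):
        gp.fit(grid[visited], true_v[visited])
        _, std = gp.predict(grid, return_std=True)
        travel = np.linalg.norm(grid - grid[visited[-1]], axis=1)
        score = std - 0.02 * travel                # uncertainty minus travel cost
        score[visited] = -np.inf                   # never revisit a point
        visited.append(int(np.argmax(score)))     # next best mapping point

    mu = gp.predict(grid)
    rmse = np.sqrt(np.mean((mu - true_v) ** 2))
    print(f"map RMSE after {len(visited)} points: {rmse:.3f}")
    ```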

  7. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

A model and computational procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
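
    The underlying correction is Spearman's classical disattenuation formula, worth stating concretely: the observed correlation is divided by the geometric mean of the two reliabilities.

    ```python
    from math import sqrt

    def disattenuated_r(r_xy, r_xx, r_yy):
        """Correlation corrected for attenuation due to unreliability:
        r_xy / sqrt(r_xx * r_yy), with r_xx, r_yy the score reliabilities."""
        return r_xy / sqrt(r_xx * r_yy)

    # e.g. observed r = .42 with reliabilities .80 and .70 -> corrected r = .56
    print(round(disattenuated_r(0.42, 0.80, 0.70), 2))
    ```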

  8. 40 CFR 265.1084 - Waste determination procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... biodegradation efficiency (Rbio) for a treated hazardous waste. (i) The fraction of organics biodegraded (Fbio... biodegradation efficiency, percent. Fbio = Fraction of organic biodegraded as determined in accordance with the... biodegradation rate (MRbio) for a treated hazardous waste. (i) The MRbio shall be determined based on results for...

  9. 40 CFR 265.1084 - Waste determination procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... biodegradation efficiency (Rbio) for a treated hazardous waste. (i) The fraction of organics biodegraded (Fbio... biodegradation efficiency, percent. Fbio = Fraction of organic biodegraded as determined in accordance with the... biodegradation rate (MRbio) for a treated hazardous waste. (i) The MRbio shall be determined based on results for...

  10. 40 CFR 265.1084 - Waste determination procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... biodegradation efficiency (Rbio) for a treated hazardous waste. (i) The fraction of organics biodegraded (Fbio... biodegradation efficiency, percent. Fbio = Fraction of organic biodegraded as determined in accordance with the... biodegradation rate (MRbio) for a treated hazardous waste. (i) The MRbio shall be determined based on results for...

  11. 40 CFR 265.1084 - Waste determination procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... biodegradation efficiency (Rbio) for a treated hazardous waste. (i) The fraction of organics biodegraded (Fbio... biodegradation efficiency, percent. Fbio = Fraction of organic biodegraded as determined in accordance with the... biodegradation rate (MRbio) for a treated hazardous waste. (i) The MRbio shall be determined based on results for...

  12. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for primitive variables. This method combines the best features of the data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency better by at least one order of magnitude than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  13. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
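
    A crude flavor of budget allocation for subset selection is sketched below: extra replications go to designs that sit close to the top-m boundary relative to their sampling uncertainty. This is a simplified heuristic in the spirit of the framework, not the asymptotically optimal OCBA-m rule derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    k, m = 10, 3                                  # select the top 3 of 10 designs
    true_means = np.linspace(0.0, 1.8, k)         # unknown to the procedure

    n = np.full(k, 5)                             # n0 = 5 initial replications each
    sums = np.array([rng.normal(true_means[i], 1.0, 5).sum() for i in range(k)])

    for _ in range(300):                          # 300 additional samples of budget
        means = sums / n
        order = np.argsort(means)[::-1]
        boundary = 0.5 * (means[order[m - 1]] + means[order[m]])
        # sample the design whose distance to the boundary is smallest
        # in units of its standard error ~ 1/sqrt(n)
        i = int(np.argmin(np.abs(means - boundary) * np.sqrt(n)))
        sums[i] += rng.normal(true_means[i], 1.0)
        n[i] += 1

    selected = sorted(np.argsort(sums / n)[::-1][:m].tolist())
    print("selected designs:", selected, "replications per design:", n)
    ```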

  14. Development of an efficient procedure for calculating the aerodynamic effects of planform variation

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Geller, E. W.

    1981-01-01

Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed that saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth the time of that for the base solution.

  15. g-force induced giant efficiency of nanoparticles internalization into living cells

    PubMed Central

Ocampo, Sandra M.; Rodriguez, Vanessa; de la Cueva, Leonor; Salas, Gorka; Carrascosa, Jose L.; Rodríguez, María Josefa; García-Romero, Noemí; Cuñado, Jose Luis F.; Camarero, Julio; Miranda, Rodolfo; Belda-Iniesta, Cristobal; Ayuso-Sacido, Angel

    2015-01-01

Nanotechnology plays an increasingly important role in the biomedical arena. Labelling cells with iron oxide nanoparticles (IONPs) is one of the most promising approaches for fast and reliable evaluation of grafted cells in both preclinical studies and clinical trials. Current procedures to label living cells with IONPs are based on direct incubation or on physical approaches using magnetic or electrical fields, all of which display very low cellular uptake efficiencies. Here we show that centrifugation-mediated internalization (CMI) promotes high uptake of IONPs in glioblastoma tumour cells, in just a few minutes, via a clathrin-independent endocytosis pathway. CMI results in controllable cellular uptake efficiencies at least three orders of magnitude larger than current procedures. Similar trends are found in human mesenchymal stem cells, thereby demonstrating the general feasibility of the methodology, which is easily transferable to any laboratory and has great potential for the development of improved biomedical applications. PMID:26477718

  16. An efficient matrix-matrix multiplication based antisymmetric tensor contraction engine for general order coupled cluster.

    PubMed

    Hanrath, Michael; Engels-Putzka, Anna

    2010-08-14

In this paper, we present an efficient implementation of general tensor contractions, which is part of a new coupled-cluster program. The tensor contractions, used to evaluate the residuals in each coupled-cluster iteration, are particularly important for the performance of the program. We developed a generic procedure, which carries out contractions of two tensors irrespective of their explicit structure. It can handle coupled-cluster-type expressions of arbitrary excitation level. To make the contraction efficient without losing flexibility, we use a three-step procedure. First, the data contained in the tensors are rearranged into matrices, then a matrix-matrix multiplication is performed, and finally the result is backtransformed to a tensor. The current implementation is significantly more efficient than previous ones capable of treating arbitrarily high excitations.
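
    The three-step contraction maps cleanly onto numpy, and a generic sketch with an einsum cross-check is given below (the actual implementation handles antisymmetry and spin symmetry, which this toy version ignores).

    ```python
    import numpy as np

    def contract(T, A, t_axes, a_axes):
        """Contract T and A over the paired axes by (1) permuting/reshaping
        both into matrices, (2) a single GEMM, (3) reshaping the result back."""
        t_keep = [i for i in range(T.ndim) if i not in t_axes]
        a_keep = [i for i in range(A.ndim) if i not in a_axes]
        m = int(np.prod([T.shape[i] for i in t_keep] or [1]))
        kk = int(np.prod([T.shape[i] for i in t_axes]))
        n = int(np.prod([A.shape[i] for i in a_keep] or [1]))
        Tm = T.transpose(t_keep + list(t_axes)).reshape(m, kk)   # step 1
        Am = A.transpose(list(a_axes) + a_keep).reshape(kk, n)
        Rm = Tm @ Am                                             # step 2: GEMM
        out_shape = [T.shape[i] for i in t_keep] + [A.shape[i] for i in a_keep]
        return Rm.reshape(out_shape)                             # step 3

    T = np.random.rand(4, 5, 6)
    A = np.random.rand(6, 5, 7)
    R = contract(T, A, t_axes=[1, 2], a_axes=[1, 0])  # sum over the 5- and 6-dims
    print(np.allclose(R, np.einsum('abc,cbd->ad', T, A)))        # True
    ```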

  17. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  18. The efficiency of therapeutic erythrocytapheresis compared to phlebotomy: a mathematical tool for predicting response in hereditary hemochromatosis, polycythemia vera, and secondary erythrocytosis.

    PubMed

    Evers, Dorothea; Kerkhoffs, Jean-Louis; Van Egmond, Liane; Schipperus, Martin R; Wijermans, Pierre W

    2014-06-01

Recently, therapeutic erythrocytapheresis (TE) was suggested to be more efficient in depletion of red blood cells (RBC) than manual phlebotomy in the treatment of hereditary hemochromatosis (HH), polycythemia vera (PV), and secondary erythrocytosis (SE). The efficiency rate (ER) of TE, that is, the increase in RBC depletion achieved with one TE cycle compared to one phlebotomy procedure, can be calculated based on the estimated blood volume (BV), preprocedural hematocrit (Hct(B)), and delta-hematocrit (ΔHct). In a retrospective evaluation of 843 TE procedures (in 45 HH, 33 PV, and 40 SE patients), the mean ER was 1.86 ± 0.62, with the highest rates achieved in HH patients. An ER of 1.5 was not reached in 37.9% of all procedures, mainly concerning patients with a BV below 4,500 ml. In 12 newly diagnosed homozygous HH patients, the induction phase lasted a median of 38.4 weeks (a median of 10.5 procedures). During the maintenance treatment of HH, PV, and SE, the median interval between TE procedures was 13.4 weeks. This mathematical model can help select the proper treatment modality for the individual patient. Especially for patients with a large BV and a high achievable ΔHct, TE appears to be more efficient than manual phlebotomy in RBC depletion, thereby potentially reducing the number of procedures and expanding the interprocedural time period for HH, PV, and SE. © 2013 Wiley Periodicals, Inc.
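
    The abstract does not spell out the ER formula, but a plausible reconstruction (labeled hypothetical here) is the ratio of red-cell volume removed per session: one TE cycle depletes roughly BV x ΔHct, while one phlebotomy of volume V removes roughly V x Hct(B).

    ```python
    def te_efficiency_ratio(bv_ml, hct_pre, delta_hct, phleb_ml=500.0):
        """Hypothetical reconstruction of the ER: RBC volume depleted by one
        TE cycle (BV * dHct) over one phlebotomy (V_phleb * Hct_pre).
        Hematocrits as fractions, volumes in mL; a 500 mL phlebotomy is assumed."""
        return (bv_ml * delta_hct) / (phleb_ml * hct_pre)

    # e.g. BV 5500 mL, Hct 0.48 before TE, Hct lowered by 0.08 in one cycle
    print(round(te_efficiency_ratio(5500, 0.48, 0.08), 2))   # ~1.83
    ```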

  19. High efficiency endocrine operation protocol: From design to implementation.

    PubMed

    Mascarella, Marco A; Lahrichi, Nadia; Cloutier, Fabienne; Kleiman, Simcha; Payne, Richard J; Rosenberg, Lawrence

    2016-10-01

We developed a high-efficiency endocrine operative protocol based on a mathematical programming approach, process reengineering, and value-stream mapping, to increase the number of operations completed per day without increasing operating room time at a tertiary-care, academic center. Using this protocol, in a case-control study, 72 patients undergoing endocrine operation during high-efficiency days were age-, sex-, and procedure-matched to 72 patients undergoing operation during standard days. The demographic profiles, operative times, and perioperative complications were noted. The average numbers of cases per 8-hour workday in the high-efficiency and standard operating rooms were 7 and 5, respectively. Mean procedure times in both groups were similar. The turnaround time (mean ± standard deviation) in the high-efficiency group was 8.5 (±2.7) minutes, compared with 15.4 (±4.9) minutes in the standard group (P < .001). Transient postoperative hypocalcemia occurred in 6.9% (5/72) and 8.3% (6/72) of the high-efficiency and standard groups, respectively (P = .99). In this study, patients undergoing high-efficiency endocrine operation had procedure times and perioperative complications similar to those of the standard group. The proposed high-efficiency protocol seems to better utilize operative time and decrease the backlog of patients waiting for endocrine operation in a country with a universal national health care program. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A spin column-free approach to sodium hydroxide-based glycan permethylation.

    PubMed

    Hu, Yueming; Borges, Chad R

    2017-07-24

    Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream to intact glycan analysis by MALDI-MS, ESI-MS, or glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues-yields which were comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin-column based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream to hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that from previous spin column-based "glycan node" analysis results. When applied to blood plasma samples from stage III-IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens.

  1. A spin column-free approach to sodium hydroxide-based glycan permethylation†

    PubMed Central

    Hu, Yueming; Borges, Chad R.

    2018-01-01

    Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream to intact glycan analysis by MALDI-MS, ESI-MS, or glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues—yields which were comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin-column based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream to hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that from previous spin column-based “glycan node” analysis results. When applied to blood plasma samples from stage III–IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens. PMID:28635997

  2. Applied Comparative Effectiveness Researchers Must Measure Learning Rates: A Commentary on Efficiency Articles

    ERIC Educational Resources Information Center

    Skinner, Christopher H.

    2010-01-01

Almost all academic skills deficits can be conceptualized as learning-rate problems: students are not failing to learn, but are not learning rapidly enough. Thus, when selecting among various possible remedial procedures, educators need an evidence base that indicates which procedure results in the greatest increases in learning rates. Previous…

  3. An experimental strategy validated to design cost-effective culture media based on response surface methodology.

    PubMed

    Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H

    2017-07-03

For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained from a screening analysis of different culture media reported in the literature for growing the same strain. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients that did not limit process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered-trademark medium at a 95% lower cost, and it reduced the number of ingredients in the base culture medium by 60% without limiting process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Further study of the process variables for the optimized culture medium, and scale-up of production at the optimal values, is desirable.

  4. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

The aim was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an attractive technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
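
    Both ingredients have direct scipy counterparts: damped LSQR solves the Tikhonov-regularized system, and Nelder-Mead (a simplex method) searches over the regularization parameter. The sketch below uses a discrepancy-principle objective on synthetic data as a stand-in for the paper's actual selection criterion.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(5)
    A = rng.normal(size=(120, 200))            # stand-in ill-posed forward model
    x_true = np.zeros(200); x_true[40:60] = 1.0
    noise = 0.05 * rng.normal(size=120)
    b = A @ x_true + noise
    noise_norm = np.linalg.norm(noise)         # assumed known for this sketch

    def objective(log_lam):
        lam = 10.0 ** log_lam[0]
        x = lsqr(A, b, damp=lam)[0]            # damped LSQR = Tikhonov solution
        # discrepancy principle: residual should match the noise level
        return abs(np.linalg.norm(A @ x - b) - noise_norm)

    res = minimize(objective, x0=[-2.0], method="Nelder-Mead")
    print(f"selected regularization parameter ~ {10.0 ** res.x[0]:.4g}")
    ```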

  5. Evaluation of All-Day-Efficiency for selected flat plate and evacuated tube collectors

    NASA Technical Reports Server (NTRS)

    1981-01-01

An evaluation of all-day efficiency for selected flat plate and evacuated tube collectors is presented. Computations are based on a modified version of the NBSIR 78-1305A procedure for all-day efficiency. The ASHMET and NOAA data bases for solar insolation are discussed. Details of the algorithm used to convert total (global) horizontal radiation to the collector tilt plane at the selected sites are given, along with tables and graphs showing the results of the tests performed during this evaluation.

  6. PSA discriminator influence on (222)Rn efficiency detection in waters by liquid scintillation counting.

    PubMed

    Stojković, Ivana; Todorović, Nataša; Nikolov, Jovana; Tenjović, Branislava

    2016-06-01

A procedure for (222)Rn determination in aqueous samples using liquid scintillation counting (LSC) was evaluated and optimized. Measurements were performed with the ultra-low-background spectrometer Quantulus 1220™ equipped with a PSA (Pulse Shape Analysis) circuit that discriminates alpha from beta spectra. Since the calibration procedure is carried out with a (226)Ra standard, which has both alpha and beta progenies, the PSA discriminator is of vital importance for precise spectrum separation. The calibration procedure was improved by investigating how the PSA discriminator level and, consequently, the activity of the (226)Ra calibration standard influence the (222)Rn detection efficiency. Quench effects on the generated spectra, i.e., on the determination of the radon detection efficiency, were also investigated, and a quench calibration curve was obtained. Radon determination in waters, based on the procedure as modified according to the activity of the (226)Ra standard used and the PSA setting, was evaluated with prepared (226)Ra solution samples and drinking water samples, including an assessment of the variation in measurement uncertainty. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and on a population-based optimization algorithm. The main objective is to determine the best design in terms of magnet volume in order to transmit a torque between two movers while ensuring a low slip speed and a good efficiency. The torque formula is very accurate, computationally efficient, and valid for any slip speed. Nevertheless, in order to solve more realistic problems and take thermal effects on the torque into account, a thermal model based on convection heat-transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
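
    A schematic sketch of the optimization loop only: a population algorithm (SciPy's differential evolution) minimizing magnet volume under a torque requirement. The torque_model function is an invented placeholder, not the paper's 3D analytical formula, and the bounds and target are arbitrary.

        import numpy as np
        from scipy.optimize import differential_evolution

        def torque_model(r_mag, h_mag, slip):                # invented stand-in formula
            return 8.0e7 * r_mag**3 * h_mag * slip / (1.0 + slip**2)

        def cost(x):
            r_mag, h_mag = x
            volume = np.pi * r_mag**2 * h_mag                # objective: magnet volume
            torque = torque_model(r_mag, h_mag, slip=0.05)
            penalty = max(0.0, 50.0 - torque) ** 2           # require >= 50 N*m at 5% slip
            return volume + 1e-3 * penalty

        res = differential_evolution(cost, bounds=[(0.02, 0.10), (0.005, 0.03)], seed=1)
        print(res.x, res.fun)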

  8. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and expediently. Several clinical datasets are used to evaluate its accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumetry across a variety of liver shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  9. Optimization of a gene electrotransfer procedure for efficient intradermal immunization with an hTERT-based DNA vaccine in mice

    PubMed Central

    Calvet, Christophe Y; Thalmensi, Jessie; Liard, Christelle; Pliquet, Elodie; Bestetti, Thomas; Huet, Thierry; Langlade-Demoyen, Pierre; Mir, Lluis M

    2014-01-01

    DNA vaccination consists of administering an antigen-encoding plasmid in order to trigger a specific immune response. This vaccine strategy is of particular interest in the fight against various infectious diseases and cancer. Gene electrotransfer is the most efficient and safest non-viral gene transfer procedure, and specific electrical parameters have been developed for several target tissues. Here, a gene electrotransfer protocol into the skin was optimized in mice for efficient intradermal immunization against the well-known telomerase tumor antigen. First, the luciferase reporter gene was used to evaluate gene electrotransfer efficiency into the skin as a function of the electrical parameters and electrodes, either non-invasive or invasive. These parameters were then tested for their potency to generate specific cellular CD8 immune responses against telomerase epitopes. The induced CD8 T-cells were fully functional, as they secreted IFNγ and were endowed with specific cytotoxic activity towards target cells. This simple and optimized procedure for efficient gene electrotransfer into the skin using the telomerase antigen is to be used in cancer patients for the phase 1 clinical evaluation of a therapeutic cancer DNA vaccine called INVAC-1. PMID:26015983

  10. Documentation for assessment of modal pushover-based scaling procedure for nonlinear response history analysis of "ordinary standard" bridges

    USGS Publications Warehouse

    Kalkan, Erol; Kwong, Neal S.

    2010-01-01

    The earthquake engineering profession increasingly utilizes nonlinear response history analysis (RHA) to evaluate the seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground-motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case for the central United States), or when high-intensity records are needed (as is the case for San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of 'true' median structural responses. The adjective 'accurate' refers to the discrepancy between the benchmark responses and those computed from the MPS procedure; 'efficient' refers to the record-to-record variability of responses. Herein, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing 'ordinary standard' bridges typical of reinforced-concrete bridge construction in California: the single-bent overpass, the multi-span bridge, the curved bridge, and the skew bridge. As compared to benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provides an accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. Thus, the MPS procedure is a useful tool for scaling ground motions as input to nonlinear RHAs of 'ordinary standard' bridges.

  11. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options for creating relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with up to 30 000 genotyped animals. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was performed by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can thus be computed efficiently. Optimizations can be achieved either by modifying existing code or by using the efficient automatic optimizations provided by open-source or third-party libraries. © 2011 Blackwell Verlag GmbH.
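
    For orientation, a minimal NumPy sketch of the widely used VanRaden (2008) genomic relationship matrix, G = ZZ'/(2*sum(p_i(1-p_i))), one of the matrices whose construction such studies benchmark; the matrix product delegates to optimized BLAS, mirroring the paper's point that library routines outperform naive loops. The simulated genotypes are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        n_animals, n_snps = 500, 4000                        # scaled-down stand-in
        M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 genotype codes

        p = M.mean(axis=0) / 2.0                             # allele frequencies
        Z = M - 2.0 * p                                      # centered genotypes
        G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))        # BLAS-backed matrix product
        print(G.shape, float(G.diagonal().mean()))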

  12. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of the system matrix consumes a large amount of computation time, comparable to that required by the solution process. If the process for computing non-linear internal force matrices is replaced with an effective equivalent model that bypasses the numerical integration and assembly used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of the displacements. To identify an equivalent model efficiently, the method has evolved to operate on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by the element size, and since the computations are mutually independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
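
    A single-coordinate caricature of the stiffness-evaluation idea (an assumed form, not the authors' code): recover polynomial stiffness coefficients of a restoring force by least squares on sampled force evaluations. The real procedure does this element-by-element over combinations of modal coordinates.

        import numpy as np

        k1, k2, k3 = 4.0, -1.5, 0.8                          # 'true' stiffness coefficients
        q = np.linspace(-1.0, 1.0, 9)                        # prescribed displacements
        f = k1 * q + k2 * q**2 + k3 * q**3                   # sampled restoring forces

        V = np.column_stack([q, q**2, q**3])                 # polynomial design matrix
        coeffs, *_ = np.linalg.lstsq(V, f, rcond=None)       # least-squares identification
        print(coeffs)                                        # ~ [ 4.  -1.5  0.8]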

  13. Using Discrete Event Simulation to Model the Economic Value of Shorter Procedure Times on EP Lab Efficiency in the VALUE PVI Study.

    PubMed

    Kowalski, Marcin; DeVille, J Brian; Svinarich, J Thomas; Dan, Dan; Wickliffe, Andrew; Kantipudi, Charan; Foell, Jason D; Filardo, Giovanni; Holbrook, Reece; Baker, James; Baydoun, Hassan; Jenkins, Mark; Chang-Sing, Peter

    2016-05-01

    The VALUE PVI study demonstrated that atrial fibrillation (AF) ablation procedure and electrophysiology laboratory (EP lab) occupancy times were reduced for the cryoballoon compared with focal radiofrequency (RF) ablation. However, the economic impact of the cryoballoon procedure for hospitals has not been determined. The objective of this study was to assess the economic value associated with shorter AF ablation procedure times based on VALUE PVI data. A model was formulated from data from the VALUE PVI study. This model used a discrete event simulation to translate procedural efficiencies into metrics utilized by hospital administrators. A 1000-day period was simulated to determine the accrued impact of procedure time on an institution's EP lab when considering staff and hospital resources. The simulation demonstrated that procedures performed with the cryoballoon catheter resulted in several efficiencies, including: (1) a reduction of 36.2% in days with overtime (422 days RF vs 60 days cryoballoon); (2) 92.7% less cumulative overtime (370 hours RF vs 27 hours cryoballoon); and (3) an increase of 46.7% in days with time for additional EP lab usage (186 days RF vs 653 days cryoballoon). Importantly, the added EP lab utilization could not support the time required for an additional AF ablation procedure. The discrete event simulation of the VALUE PVI data demonstrates the potential positive economic value of AF ablation procedures using the cryoballoon: more days where overtime is avoided, fewer cumulative overtime hours, and more days with time left for additional use of EP lab resources.
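
    A toy discrete-event-style simulation in the spirit of the study, comparing overtime days under two procedure-time distributions; the case load, shift length, and time distributions are invented, not the VALUE PVI data.

        import random

        def simulate(mean_min, sd_min, days=1000, cases_per_day=3, shift_min=480, seed=7):
            rng = random.Random(seed)
            overtime_days = slack_days = 0
            for _ in range(days):
                total = sum(max(30.0, rng.gauss(mean_min, sd_min)) for _ in range(cases_per_day))
                if total > shift_min:
                    overtime_days += 1                       # day runs past the shift
                elif shift_min - total >= 90:
                    slack_days += 1                          # room for extra lab use
            return overtime_days, slack_days

        print("longer procedures: ", simulate(150, 35))
        print("shorter procedures:", simulate(120, 25))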

  14. 10 CFR 431.16 - Test procedures for the measurement of energy efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Test procedures for the measurement of energy efficiency. 431.16 Section 431.16 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... Methods of Determining Efficiency § 431.16 Test procedures for the measurement of energy efficiency. For...

  15. 10 CFR 431.16 - Test procedures for the measurement of energy efficiency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Test procedures for the measurement of energy efficiency. 431.16 Section 431.16 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... Methods of Determining Efficiency § 431.16 Test procedures for the measurement of energy efficiency. For...

  16. Value-based Proposition for a Dedicated Interventional Pulmonology Suite: an Adaptable Business Model.

    PubMed

    Desai, Neeraj R; French, Kim D; Diamond, Edward; Kovitz, Kevin L

    2018-05-31

    Value-based care is evolving with a focus on improving efficiency, reducing cost, and enhancing the patient experience. Interventional pulmonology has the opportunity to lead an effective value-based care model, supported by the relatively low cost of pulmonary procedures and the potential to improve efficiencies in thoracic care. We discuss key strategies to evaluate and improve efficiency in interventional pulmonology practice and describe our experience in developing an interventional pulmonology suite. Such a model can be adapted to other specialty areas and may encourage a more coordinated approach to specialty care. Copyright © 2018. Published by Elsevier Inc.

  17. Constructing Self-Modeling Videos: Procedures and Technology

    ERIC Educational Resources Information Center

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  18. 76 FR 47178 - Energy Efficiency Program: Test Procedure for Lighting Systems (Luminaires)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ...: Test Procedure for Lighting Systems (Luminaires) AGENCY: Office of Energy Efficiency and Renewable... (``DOE'' or the ``Department'') is currently evaluating energy efficiency test procedures for luminaires... products. DOE recognizes that well-designed test procedures are important to produce reliable, repeatable...

  19. Multistate Evaluation of an Ultrafiltration-Based Procedure for Simultaneous Recovery of Enteric Microbes in 100-Liter Tap Water Samples

    PubMed Central

    Hill, Vincent R.; Kahler, Amy M.; Jothikumar, Narayanan; Johnson, Trisha B.; Hahn, Donghyun; Cromeans, Theresa L.

    2007-01-01

    Ultrafiltration (UF) is increasingly being recognized as a potentially effective procedure for concentrating and recovering microbes from large volumes of water and treated wastewater. Because of their very small pore sizes, UF membranes are capable of simultaneously concentrating viruses, bacteria, and parasites based on size exclusion. In this study, a UF-based water sampling procedure was used to simultaneously recover representatives of these three microbial classes seeded into 100-liter samples of tap water collected from eight cities covering six hydrologic areas of the United States. The UF-based procedure included hollow-fiber UF as the primary step for concentrating microbes and then used membrane filtration for bacterial culture assays, immunomagnetic separation for parasite recovery and quantification, and centrifugal UF for secondary concentration of viruses. Water samples were tested for nine water quality parameters to investigate whether water quality data correlated with measured recovery efficiencies and molecular detection levels. Average total method recovery efficiencies were 71, 97, 120, 110, and 91% for φX174 bacteriophage, MS2 bacteriophage, Enterococcus faecalis, Clostridium perfringens spores, and Cryptosporidium parvum oocysts, respectively. Real-time PCR and reverse transcription-PCR (RT-PCR) for seeded microbes and controls indicated that tap water quality could affect the analytical performance of molecular amplification assays, although no specific water quality parameter was found to correlate with reduced PCR or RT-PCR performance. PMID:17483281

  20. Beyond Compliance: How Do Your School Business Operations Measure Up?

    ERIC Educational Resources Information Center

    Rowman & Littlefield Education, 2005

    2005-01-01

    This handbook was developed as a means for self-assessment to assist school business officials in determining the efficiency of the business office's planning, procedures, and operations. It does not include every possible procedure or task performed in the business office. It is intended, rather, as a broad-based checklist of those operations and…

  1. Operative record using intraoperative digital data in neurosurgery.

    PubMed

    Houkin, K; Kuroda, S; Abe, H

    2000-01-01

    The purpose of this study was to develop a new method for more efficient and accurate operative records in neurosurgery using intra-operative digital data, covering both macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera, and microscopic procedures were recorded using a micro digital camera attached to the operating microscope. Operative records were then digitized and filed in a computer using image-retouch software and database software. The time necessary for editing the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can be achieved using the intra-operative data, including intra-operative photographs.

  2. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor's response and its exposure condition, which is specified not only by the analyte concentration but also by environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, a GP is able not only to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Building on the GP's inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch-sequential manner. The resulting calibration procedure, which integrates GP-based modeling and experimental design, was applied to a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
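
    A compact sketch of the modeling half of such a procedure, using scikit-learn as an assumed stand-in for the authors' implementation: the GP maps the exposure condition (concentration, temperature, humidity) to the sensor response and returns the predictive uncertainty on which a sequential design can build. Data are synthetic.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X = rng.uniform([0.0, 15.0, 20.0], [10.0, 35.0, 80.0], size=(60, 3))  # conc, T, RH
        y = 2.0 * X[:, 0] + 0.3 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0.0, 0.2, 60)

        kernel = RBF(length_scale=[1.0, 5.0, 20.0]) + WhiteKernel(noise_level=0.04)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        mean, std = gp.predict([[5.0, 25.0, 50.0]], return_std=True)
        print(f"predicted response {mean[0]:.2f} +/- {std[0]:.2f}")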

  3. Numerical difficulties and computational procedures for thermo-hydro-mechanical coupled problems of saturated porous media

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Secchi, S.; Schrefler, B. A.

    2008-12-01

    This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy to overcome them. The proposed procedure is based on spatial refinement and time adaptivity. The latter, which is mainly studied here, relies on a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station; these constitute the parameters enabling the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, first to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, and then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method proves efficient and simple for adapting the time step in the solution of coupled field problems.
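
    A schematic sketch of time adaptivity driven by a jump-like error indicator; here the 'jump' is approximated by the difference between one full step and two half steps of backward Euler on a stiff test ODE, which is analogous to, but not identical with, the paper's Discontinuous Galerkin jump measure.

        import math

        def be_step(y, t, dt):                               # backward Euler on y' = -50(y - cos t)
            return (y + dt * 50.0 * math.cos(t + dt)) / (1.0 + 50.0 * dt)

        t, y, dt, tol = 0.0, 1.0, 0.1, 1e-4
        while t < 2.0:
            y_full = be_step(y, t, dt)
            y_half = be_step(be_step(y, t, dt / 2), t + dt / 2, dt / 2)
            jump = abs(y_full - y_half)                      # jump-like error indicator
            if jump > tol:
                dt *= 0.5                                    # reject the step and refine
                continue
            t, y = t + dt, y_half
            if jump < tol / 4:
                dt = min(dt * 1.5, 0.2)                      # coarsen when comfortably accurate
        print(f"t = {t:.2f}, y = {y:.4f}, final dt = {dt:.4f}")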

  4. Cost Analysis of an Office-based Surgical Suite

    PubMed Central

    LaBove, Gabrielle

    2016-01-01

    Introduction: Operating costs are a significant part of delivering surgical care. Having a system to analyze these costs is imperative for decision making and efficiency. We present an analysis of surgical supply, labor, and administrative costs, and remuneration of procedures as a means for a practice to analyze its cost effectiveness; this affects the quality of care through the ability to provide services. The costs of surgical care cannot be estimated blindly, as reconstructive and cosmetic procedures carry different percentages of overhead. Methods: A detailed financial analysis of office-based surgical suite costs for surgical procedures was performed based on company contract prices and average use of supplies. The average time spent on scheduling, prepping, and performing the surgery was factored in using employee rates. Results: The most expensive minor-procedure supplies are suture needles. The 4 most common procedures, from the most expensive to the least, are abdominoplasty, breast augmentation, facelift, and lipectomy. Conclusions: Reconstructive procedures require a greater portion of collections to cover costs. Without adjustment of both patient and insurance remuneration in the practice, providing quality care will become increasingly difficult. PMID:27536482

  5. "Knife to skin" time is a poor marker of operating room utilization and efficiency in cardiac surgery.

    PubMed

    Luthra, Suvitesh; Ramady, Omar; Monge, Mary; Fitzsimons, Michael G; Kaleta, Terry R; Sundt, Thoralf M

    2015-06-01

    Markers of operating room (OR) efficiency in cardiac surgery focus on "knife to skin" and "start time tardiness." These do not evaluate the middle and later parts of the cardiac surgical pathway. The purpose of this analysis was to evaluate knife-to-skin time as an efficiency marker in cardiac surgery. We examined knife-to-skin time, procedure time, and transfer times along the cardiac operational pathway for their correlation with predefined indices of operational efficiency (Index of Operation Efficiency - InOE; Surgical Index of Operational Efficiency - sInOE). A regression analysis was performed to test the goodness of fit of the regression curves estimated for InOE relative to the times on the operational pathway. The mean knife-to-skin time was 90.6 ± 13 minutes (23% of total OR time). The mean procedure time was 282 ± 123 minutes (71% of total OR time). Utilization efficiencies were highest for aortic valve replacement and coronary artery bypass grafting and lowest for complex aortic procedures. There were no significant procedure-specific or team-specific differences for standard procedures. Procedure times correlated the strongest with InOE (r = -0.98, p < 0.01); a statistically significant linear dependence on InOE was observed with procedure times only. Procedure times are therefore a better marker of OR efficiency than knife-to-skin time in cardiac cases. Strategies to increase OR utilization and efficiency should address procedure times in addition to knife-to-skin times. © 2015 Wiley Periodicals, Inc.

  6. Acquisition of Motor and Cognitive Skills through Repetition in Typically Developing Children

    PubMed Central

    Magallón, Sara; Narbona, Juan; Crespo-Eguílaz, Nerea

    2016-01-01

    Background Procedural memory allows acquisition, consolidation and use of motor skills and cognitive routines. Automation of procedures is achieved through repeated practice. In children, improvement in procedural skills is a consequence of natural neurobiological development and experience. Methods The aim of the present research was to make a preliminary evaluation and description of repetition-based improvement of procedures in typically developing children (TDC). Ninety TDC children aged 6–12 years were asked to perform two procedural learning tasks. In an assembly learning task, which requires predominantly motor skills, we measured the number of assembled pieces in 60 seconds. In a mirror drawing learning task, which requires more cognitive functions, we measured time spent and efficiency. Participants were tested four times for each task: three trials were consecutive and the fourth trial was performed after a 10-minute nonverbal interference task. The influence of repeated practice on performance was evaluated by means of the analysis of variance with repeated measures and the paired-sample test. Correlation coefficients and simple linear regression test were used to examine the relationship between age and performance. Results TDC achieved higher scores in both tasks through repetition. Older children fitted more pieces than younger ones in assembling learning and they were faster and more efficient at the mirror drawing learning task. Conclusions These findings indicate that three consecutive trials at a procedural task increased speed and efficiency, and that age affected basal performance in motor-cognitive procedures. PMID:27384671

  7. Acquisition of Motor and Cognitive Skills through Repetition in Typically Developing Children.

    PubMed

    Magallón, Sara; Narbona, Juan; Crespo-Eguílaz, Nerea

    2016-01-01

    Procedural memory allows acquisition, consolidation and use of motor skills and cognitive routines. Automation of procedures is achieved through repeated practice. In children, improvement in procedural skills is a consequence of natural neurobiological development and experience. The aim of the present research was to make a preliminary evaluation and description of repetition-based improvement of procedures in typically developing children (TDC). Ninety TDC children aged 6-12 years were asked to perform two procedural learning tasks. In an assembly learning task, which requires predominantly motor skills, we measured the number of assembled pieces in 60 seconds. In a mirror drawing learning task, which requires more cognitive functions, we measured time spent and efficiency. Participants were tested four times for each task: three trials were consecutive and the fourth trial was performed after a 10-minute nonverbal interference task. The influence of repeated practice on performance was evaluated by means of the analysis of variance with repeated measures and the paired-sample test. Correlation coefficients and simple linear regression test were used to examine the relationship between age and performance. TDC achieved higher scores in both tasks through repetition. Older children fitted more pieces than younger ones in assembling learning and they were faster and more efficient at the mirror drawing learning task. These findings indicate that three consecutive trials at a procedural task increased speed and efficiency, and that age affected basal performance in motor-cognitive procedures.

  8. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, i.e., the minimum norm quadratic unbiased estimation (MINQUE) method with all prior values set to 1, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values as its priors. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for the estimation of variance and covariance components and the prediction of genetic merits.

  9. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
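
    A minimal NumPy illustration of the proper orthogonal decomposition step shared by these procedures: POD modes are the left singular vectors of a snapshot matrix, ranked by energy content. The snapshot data here are random placeholders, and the 99.9% energy cutoff is one common, but not unique, truncation rule.

        import numpy as np

        rng = np.random.default_rng(0)
        snapshots = rng.standard_normal((200, 50))           # placeholder: 200 DOFs x 50 samples

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)              # cumulative energy fraction
        k = int(np.searchsorted(energy, 0.999)) + 1          # modes capturing 99.9% energy
        basis = U[:, :k]                                     # reduced-order modal basis
        print(f"retained {k} POD modes out of {len(s)}")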

  10. Curriculum-Based Measurement, Program Development, Graphing Performance and Increasing Efficiency.

    ERIC Educational Resources Information Center

    Deno, Stanley L.; And Others

    1987-01-01

    Four brief articles look at aspects of curriculum based measurement (CBM) for academically handicapped students including procedures of CBM with examples, different approaches to graphing student performance, and solutions to the problem of making time to measure student progress frequently. (DB)

  11. Poly(ethyleneoxide) functionalization through alkylation

    DOEpatents

    Sivanandan, Kulandaivelu; Eitouni, Hany Basam; Li, Yan; Pratt, Russell Clayton

    2015-04-21

    A new and efficient method of functionalizing high molecular weight polymers through alkylation using a metal amide base is described. This novel procedure can also be used to synthesize polymer-based macro-initiators containing radical initiating groups at the chain-ends for synthesis of block copolymers.

  12. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  13. Emerging Opportunities for School Psychologists to Enhance our Remediation Procedure Evidence Base as We Apply Response to Intervention

    ERIC Educational Resources Information Center

    Skinner, Christopher H.; McCleary, Daniel F.; Skolits, Gary L.; Poncy, Brian C.; Cates, Gary L.

    2013-01-01

    The success of Response-to-Intervention (RTI) and similar models of service delivery is dependent on educators being able to apply effective and efficient remedial procedures. In the process of implementing problem-solving RTI models, school psychologists have an opportunity to contribute to and enhance the quality of our remedial-procedure…

  14. Efficient Site-Specific Labeling of Proteins via Cysteines

    PubMed Central

    Kim, Younggyu; Ho, Sam O.; Gassman, Natalie R.; Korlann, You; Landorf, Elizabeth V.; Collart, Frank R.; Weiss, Shimon

    2011-01-01

    Methods for chemical modifications of proteins have been crucial for the advancement of proteomics. In particular, site-specific covalent labeling of proteins with fluorophores and other moieties has permitted the development of a multitude of assays for proteome analysis. A common approach for such a modification is solvent-accessible cysteine labeling using thiol-reactive dyes. Cysteine is very attractive for site-specific conjugation due to its relative rarity throughout the proteome and the ease of its introduction into a specific site along the protein's amino acid chain. This is achieved by site-directed mutagenesis, most often without perturbing the protein's function. Bottlenecks in this reaction, however, include the maintenance of reactive thiol groups without oxidation before the reaction, and the effective removal of unreacted molecules prior to fluorescence studies. Here, we describe an efficient, specific, and rapid procedure for cysteine labeling starting from well-reduced proteins in the solid state. The efficacy and specificity of the improved procedure are estimated using a variety of single-cysteine proteins and thiol-reactive dyes. Based on UV/vis absorbance spectra, coupling efficiencies are typically in the range 70–90%, and specificities are better than ~95%. The labeled proteins are evaluated using fluorescence assays, proving that the covalent modification does not alter their function. In addition to maleimide-based conjugation, this improved procedure may be used for other thiol-reactive conjugations such as haloacetyl, alkyl halide, and disulfide interchange derivatives. This facile and rapid procedure is well suited for high throughput proteome analysis. PMID:18275130
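
    The absorbance-based efficiency estimate mentioned above is typically a degree-of-labeling calculation of the following form; the extinction coefficients and 280 nm correction factor below are generic placeholders, not values from the study.

        def degree_of_labeling(a_max, a280, eps_dye, eps_protein, cf280):
            """Moles of dye per mole of protein from a two-wavelength absorbance reading.

            cf280 is the dye's fractional absorbance at 280 nm (correction factor).
            """
            protein_conc = (a280 - cf280 * a_max) / eps_protein
            dye_conc = a_max / eps_dye
            return dye_conc / protein_conc

        # Hypothetical numbers for a bright dye on a small single-cysteine protein:
        print(f"DOL = {degree_of_labeling(0.45, 0.30, 150000.0, 30000.0, 0.05):.2f}")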

  15. Efficient site-specific labeling of proteins via cysteines.

    PubMed

    Kim, Younggyu; Ho, Sam O; Gassman, Natalie R; Korlann, You; Landorf, Elizabeth V; Collart, Frank R; Weiss, Shimon

    2008-03-01

    Methods for chemical modifications of proteins have been crucial for the advancement of proteomics. In particular, site-specific covalent labeling of proteins with fluorophores and other moieties has permitted the development of a multitude of assays for proteome analysis. A common approach for such a modification is solvent-accessible cysteine labeling using thiol-reactive dyes. Cysteine is very attractive for site-specific conjugation due to its relative rarity throughout the proteome and the ease of its introduction into a specific site along the protein's amino acid chain. This is achieved by site-directed mutagenesis, most often without perturbing the protein's function. Bottlenecks in this reaction, however, include the maintenance of reactive thiol groups without oxidation before the reaction, and the effective removal of unreacted molecules prior to fluorescence studies. Here, we describe an efficient, specific, and rapid procedure for cysteine labeling starting from well-reduced proteins in the solid state. The efficacy and specificity of the improved procedure are estimated using a variety of single-cysteine proteins and thiol-reactive dyes. Based on UV/vis absorbance spectra, coupling efficiencies are typically in the range 70-90%, and specificities are better than approximately 95%. The labeled proteins are evaluated using fluorescence assays, proving that the covalent modification does not alter their function. In addition to maleimide-based conjugation, this improved procedure may be used for other thiol-reactive conjugations such as haloacetyl, alkyl halide, and disulfide interchange derivatives. This facile and rapid procedure is well suited for high throughput proteome analysis.

  16. 10 CFR 431.444 - Test procedures for the measurement of energy efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Test procedures for the measurement of energy efficiency. 431.444 Section 431.444 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... procedures for the measurement of energy efficiency. (a) Scope. Pursuant to section 346(b)(1) of EPCA, this...

  17. 10 CFR 431.444 - Test procedures for the measurement of energy efficiency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Test procedures for the measurement of energy efficiency. 431.444 Section 431.444 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR... procedures for the measurement of energy efficiency. (a) Scope. Pursuant to section 346(b)(1) of EPCA, this...

  18. Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob

    2016-09-01

    Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.

  19. Possible overestimation of surface disinfection efficiency by assessment methods based on liquid sampling procedures as demonstrated by in situ quantification of spore viability.

    PubMed

    Grand, I; Bellon-Fontaine, M-N; Herry, J-M; Hilaire, D; Moriconi, F-X; Naïtali, M

    2011-09-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the "damaged/undamaged" status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures.

  20. Possible Overestimation of Surface Disinfection Efficiency by Assessment Methods Based on Liquid Sampling Procedures as Demonstrated by In Situ Quantification of Spore Viability

    PubMed Central

    Grand, I.; Bellon-Fontaine, M.-N.; Herry, J.-M.; Hilaire, D.; Moriconi, F.-X.; Naïtali, M.

    2011-01-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the “damaged/undamaged” status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures. PMID:21742922

  1. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm). For amplitudes less than 0.01mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
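
    A toy contrast of the two strategies on a one-dimensional velocity trace: a fixed threshold versus Viterbi decoding of a two-state Gaussian hidden Markov model. States, parameters, and data are invented and far simpler than the five-task procedures studied.

        import numpy as np

        rng = np.random.default_rng(0)
        true = np.repeat([0, 1, 0], [40, 30, 40])            # ground-truth task labels
        vel = np.where(true == 1, 2.0, 0.0) + rng.normal(0.0, 1.2, true.size)

        thresh_seg = (vel > 1.0).astype(int)                 # simple velocity threshold

        means, var = np.array([0.0, 2.0]), 1.2**2
        trans = np.log([[0.95, 0.05], [0.05, 0.95]])         # sticky transition matrix
        log_emis = -(vel[:, None] - means)**2 / (2.0 * var)  # Gaussian log-likelihoods
        dp, back = log_emis[0].copy(), np.zeros((vel.size, 2), dtype=int)
        for t in range(1, vel.size):                         # Viterbi forward pass
            scores = dp[:, None] + trans
            back[t] = scores.argmax(axis=0)
            dp = scores.max(axis=0) + log_emis[t]
        hmm_seg = np.zeros(vel.size, dtype=int)
        hmm_seg[-1] = dp.argmax()
        for t in range(vel.size - 2, -1, -1):                # backtrace
            hmm_seg[t] = back[t + 1, hmm_seg[t + 1]]

        print("threshold accuracy:", (thresh_seg == true).mean())
        print("HMM accuracy:      ", (hmm_seg == true).mean())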

  2. Efficiency Benefits Using the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Swenson, Harry N.; Lin, Paul; Seo, Anthony Y.; Bagasol, Leonard N.

    2011-01-01

    NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to increase the use of fuel-efficient arrival procedures during periods of traffic congestion at a high-density airport. Sustained use of fuel-efficient procedures throughout the entire arrival phase of flight reduces overall fuel burn, greenhouse gas emissions and noise pollution. The TAPSS system is a 4D trajectory-based strategic planning and control tool that computes schedules and sequences for arrivals to facilitate optimal profile descents. This paper focuses on quantifying the efficiency benefits associated with using the TAPSS system, measured by reduction of level segments during aircraft descent and flight distance and time savings. The TAPSS system was tested in a series of human-in-the-loop simulations and compared to current procedures. Compared to the current use of the TMA system, simulation results indicate a reduction of total level segment distance by 50% and flight distance and time savings by 7% in the arrival portion of flight (200 nm from the airport). The TAPSS system resulted in aircraft maintaining continuous descent operations longer and with more precision, both achieved under heavy traffic demand levels.

  3. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedures. PMID:22164106

  4. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedures.

  5. Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.

    2013-01-01

    Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures used in a voice communications environment. This experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity, fixed-base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute, and pilots found the frequency of the speed commands to be acceptable at all times throughout the experiment scenarios.

  6. Transition to Office-based Obstetric and Gynecologic Procedures: Safety, Technical, and Financial Considerations.

    PubMed

    Peacock, Lisa M; Thomassee, May E; Williams, Valerie L; Young, Amy E

    2015-06-01

    Office-based surgery is increasingly desired by patients and providers due to ease of access, overall efficiency, reimbursement, and satisfaction. The adoption of office-based surgery requires careful consideration of safety, efficacy, cost, and feasibility within a provider's practice. This article reviews the currently available data on patient and provider satisfaction as well as practical considerations of staffing, equipment, and supplies. To aid the practitioner, issues of office-based anesthesia and safety are addressed, with references to currently available national guidelines and protocols. A brief review of billing, coding, and reimbursement is included, and technical procedural aspects are summarized with information and recommendations.

  7. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  8. Implementation of electronic logbook for trainees of general surgery in Thailand.

    PubMed

    Aphinives, Potchavit

    2013-01-01

    All trainees are required to keep a record of their surgical skills and experience throughout the training period in a logbook format. The paper-based logbook has several limitations; therefore, an electronic logbook was introduced to replace it. The electronic logbook program was developed in November 2005 as a web-based application built on PHP scripts running beneath an Apache web server with a MySQL database. Only simplified and essential data, such as hospital number, diagnosis, surgical procedure, and pathological findings, are recorded. The electronic logbook databases between academic years 2006 and 2011 were analyzed. The annual number of recorded surgical procedures gradually increased from 41,214 procedures in 2006 to 66,643 procedures in 2011. Around one-third of all records were not verified by attending staff: 27.59% (2006), 31.69% (2007), 18.06% (2008), 28.42% (2009), 30.18% (2010), and 31.41% (2011). In academic year 2011, the three most common procedural groups were the colon, rectum and anus group, the appendix group, and the vascular group, respectively. Advantages of the electronic logbook included more efficient data access, increased ability to monitor trainees and trainers, and analysis of procedural varieties among the training institutes.
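
    A minimal stand-in for the record structure described above, using sqlite3 instead of the program's PHP/MySQL stack; the field names are guesses based on the abstract, not the actual schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE logbook (
            trainee TEXT, hospital_number TEXT, diagnosis TEXT,
            procedure_name TEXT, pathology TEXT, verified_by_staff INTEGER)""")
        con.execute("INSERT INTO logbook VALUES (?, ?, ?, ?, ?, ?)",
                    ("trainee01", "HN123456", "acute appendicitis",
                     "open appendectomy", "suppurative appendicitis", 0))
        unverified = con.execute(
            "SELECT COUNT(*) FROM logbook WHERE verified_by_staff = 0").fetchone()[0]
        print(f"{unverified} unverified record(s)")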

  9. Compression-RSA technique: A more efficient encryption-decryption procedure

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah

    2014-06-01

    The efficiency of encryption-decryption procedures has become a major problem in asymmetric cryptography. The Compression-RSA technique was developed to overcome this efficiency problem by compressing k plaintexts, where k ∈ Z+ and k > 2, into only 2 plaintexts. That means that no matter how many plaintexts there are, they are compressed into only 2. The encryption-decryption procedures are expected to be more efficient, since they receive only 2 inputs to process instead of k. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts grows. As a consequence, this may affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its procedures involve modular exponentiation. In this paper, we evaluated the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
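
    A toy illustration of the general idea, not the authors' exact scheme: pack k small plaintext blocks into one integer by base-B positional encoding so the modular exponentiations run on fewer, larger inputs. Textbook RSA with tiny primes, for demonstration only.

        p, q, e = 10007, 10009, 65537                        # toy key; never use in practice
        n = p * q
        d = pow(e, -1, (p - 1) * (q - 1))                    # private exponent (Python 3.8+)

        B = 256                                              # packing base (one byte per block)
        blocks = [72, 105, 33]                               # k = 3 plaintext blocks

        packed = 0
        for blk in blocks:                                   # pack: b0*B^2 + b1*B + b2 < n
            packed = packed * B + blk

        cipher = pow(packed, e, n)                           # one exponentiation instead of k
        recovered = pow(cipher, d, n)

        out = []
        while recovered:                                     # unpack base-B digits
            out.append(recovered % B)
            recovered //= B
        print(out[::-1] == blocks)                           # True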

  10. PHYSICO2: an UNIX based standalone procedure for computation of physicochemical, window-dependent and substitution based evolutionary properties of protein sequences along with automated block preparation tool, version 2.

    PubMed

    Banerjee, Shyamashree; Gupta, Parth Sarthi Sen; Nayek, Arnab; Das, Sunit; Sur, Vishma Pratap; Seth, Pratyay; Islam, Rifat Nawaz Ul; Bandyopadhyay, Amal K

    2015-01-01

    Automated genome sequencing procedures are enriching the sequence database very fast. To balance the entry of sequences into the database with their analysis, efficient software is required. To this end, PHYSICO2, compared to the earlier PHYSICO and other public-domain tools, is the most efficient in that it i] extracts physicochemical, window-dependent and homologous-position-based substitution (PWS) properties, including positional and BLOCK-specific diversity and conservation; ii] provides users with optional flexibility in setting the relevant input parameters; iii] helps users prepare BLOCK-FASTA files through the program's Automated Block Preparation Tool; iv] performs fast, accurate and user-friendly analyses; and v] redirects itemized outputs in Excel format along with detailed methodology. The program package contains documentation describing the application of the methods. Overall, the program acts as an efficient PWS analyzer and finds application in sequence bioinformatics. PHYSICO2 is freely available at http://sourceforge.net/projects/physico2/ along with its documentation at https://sourceforge.net/projects/physico2/files/Documentation.pdf/download for all users.
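
    One window-dependent physicochemical property of the kind such tools tabulate is the mean Kyte-Doolittle hydropathy over a sliding window; a short sketch (not PHYSICO2 code) with an arbitrary sequence follows. The scale values are the standard published ones.

        KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
              'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
              'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
              'Y': -1.3, 'V': 4.2}                           # Kyte-Doolittle hydropathy scale

        def window_hydropathy(seq, w=9):
            """Mean hydropathy of every length-w window along the sequence."""
            return [sum(KD[a] for a in seq[i:i + w]) / w for i in range(len(seq) - w + 1)]

        print(window_hydropathy("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV", w=9)[:5])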

  11. PHYSICO2: an UNIX based standalone procedure for computation of physicochemical, window-dependent and substitution based evolutionary properties of protein sequences along with automated block preparation tool, version 2

    PubMed Central

    Banerjee, Shyamashree; Gupta, Parth Sarthi Sen; Nayek, Arnab; Das, Sunit; Sur, Vishma Pratap; Seth, Pratyay; Islam, Rifat Nawaz Ul; Bandyopadhyay, Amal K

    2015-01-01

    Automated genome sequencing procedures are enriching the sequence database very fast. To achieve a balance between the entry of sequences into the database and their analyses, efficient software is required. To this end PHYSICO2, compared to the earlier PHYSICO and other public-domain tools, is most efficient in that it i] extracts physicochemical, window-dependent and homologous-position-based-substitution (PWS) properties, including positional and BLOCK-specific diversity and conservation; ii] provides users with optional flexibility in setting relevant input parameters; iii] helps users prepare a BLOCK-FASTA file through the program's Automated Block Preparation Tool; iv] performs fast, accurate and user-friendly analyses; and v] redirects itemized outputs in Excel format along with detailed methodology. The program package contains documentation describing the application of the methods. Overall the program acts as an efficient PWS analyzer and finds application in sequence bioinformatics. Availability: PHYSICO2 is freely available at http://sourceforge.net/projects/physico2/ along with its documentation at https://sourceforge.net/projects/physico2/files/Documentation.pdf/download for all users. PMID:26339154
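
    Window-dependent properties of the kind PHYSICO2 computes are, in essence, sliding-window averages of per-residue values. A minimal sketch of that idea, assuming a small illustrative hydropathy scale (a truncated subset, not the program's actual tables):

        # sliding-window average of a per-residue property
        SCALE = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "L": 3.8}  # illustrative subset

        def window_profile(seq, window=5):
            half = window // 2
            values = [SCALE.get(aa, 0.0) for aa in seq]
            profile = []
            for i in range(len(seq)):
                segment = values[max(0, i - half): i + half + 1]
                profile.append(sum(segment) / len(segment))
            return profile

        print(window_profile("ARNDLA", window=3))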

  12. 10 CFR 431.21 - Procedures for recognition and withdrawal of recognition of accreditation bodies and...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Electric Motors Test Procedures, Materials Incorporated and Methods of Determining Efficiency § 431.21 Procedures... Assistant Secretary for Energy Efficiency and Renewable Energy, U.S. Department of Energy, Forrestal...

  13. 75 FR 71596 - Energy Efficiency Program for Certain Commercial and Industrial Equipment: Test Procedures for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... Efficiency Program for Certain Commercial and Industrial Equipment: Test Procedures for Commercial Refrigeration Equipment AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION... amendments to its test procedure for commercial refrigeration equipment (CRE). The amendments would update...

  14. The NASA Constellation Program Procedure System

    NASA Technical Reports Server (NTRS)

    Phillips, Robert G.; Wang, Lui

    2010-01-01

    NASA has used procedures to describe activities to be performed onboard vehicles by astronaut crews and on the ground by flight controllers since Apollo. Starting with later Space Shuttle missions and the International Space Station, NASA moved forward to electronic presentation of procedures. For the Constellation Program, another large step forward is being taken: to make procedures more interactive with the vehicle and to assist the crew in controlling the vehicle more efficiently and with less error. The overall name for the project is the Constellation Procedure Applications Software System (CxPASS). This paper describes some of the history behind this effort, the key concepts and operational paradigms that the work is based upon, and the actual products being developed to implement procedures for Constellation.

  15. Improving liquid chromatography-tandem mass spectrometry determinations by modifying noise frequency spectrum between two consecutive wavelet-based low-pass filtering procedures.

    PubMed

    Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

    2010-04-23

    This paper employs a chemometric technique to modify the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve the peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters to obtain the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. Tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass-filtered LC-MS/MS chromatogram by an artificial chromatogram with added thermal noise prior to the second wavelet-based low-pass filter. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filter procedures alone cannot accomplish more efficient peak S/N ratio improvement in LC-MS/MS chromatograms. In contrast, when the low-pass-filtered LC-MS/MS chromatogram is conditioned with the multiplication alteration prior to the second low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of the low-pass-filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is altered by the multiplication operation to span a broader range. When this modified noise spectrum shifts toward the high-frequency regime, the second low-pass filter provides better filtering efficiency and hence higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically achieve less than a 6-fold peak S/N ratio improvement, no better than a one-step wavelet-based low-pass filter; when the noise frequency spectrum is modified between the two low-pass filters, ratio enhancements of typically 25-fold to 40-fold are accomplished. Linear standard curves using the filtered LC-MS/MS signals are validated, and the filtered signals are reproducible. More accurate determinations of very-low-concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than using the original signals. Copyright 2010 Elsevier B.V. All rights reserved.
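
    A minimal sketch of the general two-pass idea using the PyWavelets package: low-pass filtering by keeping only approximation coefficients, with a multiplicative modification between the two passes. The test peak and noise levels are invented, and this is a sketch of the concept, not the authors' exact procedure:

        import numpy as np
        import pywt

        def lowpass(signal, wavelet="db4", level=5):
            # keep approximation coefficients, zero the detail coefficients
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        t = np.linspace(0, 10, 2048)
        peak = np.exp(-((t - 5.0) ** 2) / 0.02)
        chrom = peak + 0.2 * np.random.randn(t.size)           # noisy chromatogram

        once = lowpass(chrom)                                   # single low-pass pass
        artificial = peak + 0.05 * np.random.randn(t.size)      # artificial noisy copy
        twice = lowpass(once * artificial)                      # modify spectrum, re-filter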

  16. Performance-Based Funding: State Policy Influences on Small Rural Community Colleges

    ERIC Educational Resources Information Center

    Thornton, Zoë Mercedes; Friedel, Janice Nahra

    2016-01-01

    Performance-based funding (PBF) models intend to increase efficiency and productivity of the institution, thereby influencing organizational change. This change may be structural, programmatic, or procedural, and it may affect institutional practice and/or policy. The purpose of this qualitative case study was to understand the organizational…

  17. An optimized protocol for DNA extraction in plants with a high content of secondary metabolites, based on leaves of Mimosa tenuiflora (Willd.) Poir. (Leguminosae).

    PubMed

    Arruda, S R; Pereira, D G; Silva-Castro, M M; Brito, M G; Waldschmidt, A M

    2017-07-06

    Some species are characterized by a high content of tannins, alkaloids, and phenols in their leaves. These secondary metabolites are released during DNA extraction and might hinder molecular studies based on PCR (polymerase chain reaction). To provide an efficient method to extract DNA, Mimosa tenuiflora, an important leguminous plant from the Brazilian semiarid region used in popular medicine and as a source of fuelwood and forage, was used. Eight procedures previously reported for plants were tested and adapted using leaf tissues of M. tenuiflora stored at -20°C. The optimized procedure in this study encompassed the utilization of phenol during deproteinization, increased concentrations of cetyltrimethylammonium bromide and sodium chloride, and a shorter incubation period at a lower temperature relative to the other methods. The extracted DNA showed no degradation, and amplification via PCR was successful using ISSR, trnL, ITS, and ETS primers. Besides M. tenuiflora, this procedure was also tested and proved to be efficient in genetic studies of other plant species.

  18. Human Factors and Ergonomics for the Dental Profession.

    PubMed

    Ross, Al

    2016-09-01

    This paper proposes that the science of Human Factors and Ergonomics (HFE) is suitable for wide application in dental education, training and practice to improve safety, quality and efficiency. Three areas of interest are highlighted. First it is proposed that individual and team Non-Technical Skills (NTS), such as communication, leadership and stress management can improve error rates and efficiency of procedures. Secondly, in a physically and technically challenging environment, staff can benefit from ergonomic principles which examine design in supporting safe work. Finally, examination of organizational human factors can help anticipate stressors and plan for flexible responses to multiple, variable demands, and fluctuating resources. Clinical relevance: HFE is an evidence-based approach to reducing error rates and procedural complications, and avoiding problems associated with stress and fatigue. Improved teamwork and organizational planning and efficiency can impact directly on patient outcomes.

  19. Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.

    PubMed

    Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G

    2012-04-01

    The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box diagonally crossed by a PMMA septum that divides the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases, there was no statistically significant difference between the two procedures, and the LV procedure can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT or MRI scanner.
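
    The core FWHM measurement can be sketched in a few lines; the threshold-crossing estimate below (Python, with an invented Gaussian profile) is only an illustration of the quantity being computed, not the paper's LabView implementation:

        # estimate full width at half maximum (FWHM) from a sampled profile
        import numpy as np

        def fwhm(x, y):
            half = y.max() / 2.0
            above = np.where(y >= half)[0]    # indices above half maximum
            left, right = above[0], above[-1]
            return x[right] - x[left]         # coarse width; no sub-sample interpolation

        x = np.linspace(-5, 5, 1001)
        profile = np.exp(-x**2 / 2.0)         # Gaussian slice profile, sigma = 1
        print(fwhm(x, profile))               # ~2.35 (theory: 2*sqrt(2*ln 2) ~ 2.355)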

  20. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. First, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as the initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. First, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Second, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646

  1. Multi-objective community detection based on memetic algorithm.

    PubMed

    Wu, Peng; Pan, Li

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. First, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as the initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. First, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Second, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels.
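
    The network-specific local search extends a label-propagation rule. For orientation, a generic asynchronous label-propagation sketch in Python (not the authors' exact strategy; ties are broken arbitrarily, so results can vary between runs):

        # each node repeatedly adopts the label most common among its neighbors
        from collections import Counter

        def label_propagation(adj, max_iter=100):
            labels = {v: v for v in adj}        # start: every node its own community
            for _ in range(max_iter):
                changed = False
                for v, neighbors in adj.items():
                    if not neighbors:
                        continue
                    counts = Counter(labels[u] for u in neighbors)
                    best = counts.most_common(1)[0][0]
                    if labels[v] != best:
                        labels[v] = best
                        changed = True
                if not changed:                  # converged
                    break
            return labels

        # small demo graph: two triangles bridged by one edge
        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
               3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
        print(label_propagation(adj))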

  2. 40 CFR 63.752 - Recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...

  3. 40 CFR 63.752 - Recordkeeping requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...

  4. 40 CFR 63.752 - Recordkeeping requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...

  5. 40 CFR 63.752 - Recordkeeping requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...

  6. 40 CFR 63.752 - Recordkeeping requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... efficiency of the control system (as determined using the procedures specified in § 63.750(h)) and all test... adsorber: (i) The overall control efficiency of the control system (as determined using the procedures... overall control efficiency of the control system (as determined using the procedures specified in § 63.750...

  7. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    NASA Astrophysics Data System (ADS)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically feature remote data points and additional types of deviations from normality. This study also discusses some results of simulation power studies of these tests for normality against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
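
    The classical omnibus tests named above are available off the shelf; a minimal SciPy sketch on an invented heavy-tailed return series (these are the classical tests the paper argues are fragile, not its robust replacements):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        returns = rng.standard_t(df=4, size=500) * 0.01   # heavy-tailed toy returns

        jb_stat, jb_p = stats.jarque_bera(returns)
        sw_stat, sw_p = stats.shapiro(returns)
        print(f"Jarque-Bera p={jb_p:.4f}, Shapiro-Wilk p={sw_p:.4f}")
        # small p-values reject normality, i.e. evidence against the
        # IID-normal random-walk formulation of weak-form efficiency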

  8. Comparative evaluation of rRNA depletion procedures for the improved analysis of bacterial biofilm and mixed pathogen culture transcriptomes

    PubMed Central

    Petrova, Olga E.; Garcia-Alcalde, Fernando; Zampaloni, Claudia; Sauer, Karin

    2017-01-01

    Global transcriptomic analysis via RNA-seq is often hampered by the high abundance of ribosomal (r)RNA in bacterial cells. To remove rRNA and enrich coding sequences, subtractive hybridization procedures have become the approach of choice prior to RNA-seq, with their efficiency varying in a manner dependent on sample type and composition. Yet, despite an increasing number of RNA-seq studies, comparative evaluation of bacterial rRNA depletion methods has remained limited. Moreover, no such study has utilized RNA derived from bacterial biofilms, which have potentially higher rRNA:mRNA ratios and higher rRNA carryover during RNA-seq analysis. Presently, we evaluated the efficiency of three subtractive hybridization-based kits in depleting rRNA from samples derived from biofilm, as well as planktonic cells of the opportunistic human pathogen Pseudomonas aeruginosa. Our results indicated different rRNA removal efficiency for the three procedures, with the Ribo-Zero kit yielding the highest degree of rRNA depletion, which translated into enhanced enrichment of non-rRNA transcripts and increased depth of RNA-seq coverage. The results indicated that, in addition to improving RNA-seq sensitivity, efficient rRNA removal enhanced detection of low abundance transcripts via qPCR. Finally, we demonstrate that the Ribo-Zero kit also exhibited the highest efficiency when P. aeruginosa/Staphylococcus aureus co-culture RNA samples were tested. PMID:28117413

  9. Selective isolation of gonyautoxins 1,4 from the dinoflagellate Alexandrium minutum based on molecularly imprinted solid-phase extraction.

    PubMed

    Lian, Ziru; Wang, Jiangtao

    2017-09-15

    Gonyautoxins 1,4 (GTX1,4) from Alexandrium minutum samples were isolated selectively and recognized specifically by an innovative and effective extraction procedure based on molecular imprinting technology. Novel molecularly imprinted polymer microspheres (MIPMs) were prepared by a double-template imprinting strategy using caffeine and pentoxifylline as dummy templates. The synthesized polymers displayed good affinity to GTX1,4 and were applied as sorbents. Further, an off-line molecularly imprinted solid-phase extraction (MISPE) protocol was optimized, and an effective approach based on MISPE coupled with HPLC-FLD was developed for selective isolation of GTX1,4 from cultured A. minutum samples. The separation method showed good extraction efficiency (73.2-81.5%) for GTX1,4, and efficient removal of interfering matrix components was also achieved after the MISPE process for the microalgal samples. The outcome demonstrated the superiority and great potential of the MISPE procedure for direct separation of GTX1,4 from marine microalgal extracts. Copyright © 2017. Published by Elsevier Ltd.

  10. Design of an integrated thermoelectric generator power converter for ultra-low power and low voltage body energy harvesters aimed at ExG active electrodes

    NASA Astrophysics Data System (ADS)

    Ataei, Milad; Robert, Christian; Boegli, Alexis; Farine, Pierre-André

    2015-10-01

    This paper describes a detailed design procedure for an efficient thermal body energy harvesting integrated power converter. The procedure is based on the examination of power loss and power transfer in a converter for a self-powered medical device. The efficiency limit for the system is derived and the converter is optimized for the worst case scenario. All optimum system parameters are calculated respecting the transducer constraints and the application form factor. Circuit blocks including pulse generators are implemented based on the system specifications and optimized converter working frequency. At this working condition, it has been demonstrated that the wide area capacitor of the voltage doubler, which provides high voltage switch gating, can be eliminated at the expense of wider switches. With this method, measurements show that 54% efficiency is achieved for just a 20 mV transducer output voltage and 30% of the chip area is saved. The entire electronic board can fit in one EEG or ECG electrode, and the electronic system can convert the electrode to an active electrode.

  11. Alternative Fuels Data Center

    Science.gov Websites

    procedures to promote the cost-effective use of non-petroleum fuel vehicles and other fleet efficiency improvements. The policies must strive for the use of non-petroleum based fuels at least 90% of the time when

  12. Advances in Distance-Based Hole Cuts on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Pandya, Shishir A.

    2015-01-01

    An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.

  13. Research and emulation of ranging in BPON system

    NASA Astrophysics Data System (ADS)

    Yang, Guangxiang; Tao, Dexin; He, Yan

    2005-12-01

    Ranging is one of the key technologies in the ATM-based Broadband Passive Optical Network (BPON) system. It is complex for software designers and difficult to test. In order to simplify the ranging procedure, enhance its efficiency, and find an appropriate method to verify it, this paper proposes a new ranging procedure that completely satisfies the requirements specified in ITU-T G.983.1, together with a verification method. A ranging procedure without the serial number (SN) searching function, called one-by-one ranging, was developed under the condition of a cold PON and cold Optical Network Units (ONUs). The ranging procedure is described with OLT and ONU flow charts, respectively. Using the network emulation software OPNET, the BPON system was modeled and the ranging procedure simulated. The emulation results show that the presented ranging procedure can effectively eliminate collisions of burst-mode signals between ONUs, which are ranged one-by-one under the control of the OLT, while also enhancing ranging efficiency. As all of the message formats used in this research conform to ITU-T G.983.1, the ranging procedure meets the protocol specifications with good interoperability and is compatible with products of other manufacturers. Based on the present study of ranging procedures, guidelines and principles are provided, and some difficulties in the software design are eliminated.

  14. Procedures to evaluate the efficiency of protective clothing worn by operators applying pesticide.

    PubMed

    Espanhol-Soares, Melina; Nociti, Leticia A S; Machado-Neto, Joaquim Gonçalves

    2013-10-01

    The evaluation of the efficiency of whole-body protective clothing against pesticides has already been carried out through field tests and procedures defined by international standards, but there is a need to determine the useful life of these garments to ensure worker safety. The aim of this article is to compare procedures for evaluating the efficiency of two whole-body protective garments, both new and previously used by applicators of herbicides, using a laboratory test with a mannequin and a field test with the operator. The evaluation of the efficiency of protective clothing used both quantitative and qualitative methodologies, leading to a proposal for classification according to efficiency and a determination of the useful life of protective clothing for use against pesticides, based on a quantitative assessment. The procedures used were in accordance with the modified American Society for Testing and Materials (ASTM) standard F 1359:2007 and International Organization for Standardization standard 17491-4. The protocol used in the field was World Health Organization Vector Biology and Control (VBC)/82.1. The clothing tested was water-repellent, pesticide-protective personal clothing. Two varieties of fabric were tested: Beige (100% cotton) and Camouflaged (31% polyester and 69% cotton). The efficiency in exposure control of the personal protective clothing was measured before use and after 5, 10, 20, and 30 uses and washes under field conditions. The personal protective clothing was worn by workers in the field during the application of the herbicide glyphosate on weed species in mature sugar cane plantations using a knapsack sprayer. The modified ASTM F 1359:2007 procedure was chosen as the most appropriate due to its greater repeatability (lower coefficient of variation). This procedure provides the quantitative evaluation needed to determine the efficiency and useful life of individual protective clothing, not just at specific points of failure, but according to dermal protection as a whole. The qualitative assessment, which is suitable for verification of garment design and stitching flaws, does not aid in determining useful life, but does complement the quantitative evaluation. The proposed classification is appropriate and accurate for determining the useful life of personal protective clothing against pesticides relative to the number of uses and washes after each use. For example, the Beige garment had a useful life of 30 uses and washes, while the Camouflaged garment had a useful life of 5 uses and washes. The quantitative evaluation aids in determining the efficiency and useful life of individual protective clothing according to dermal protection as a whole, not just at specific points of failure.

  15. Determining which phenotypes underlie a pleiotropic signal

    PubMed Central

    Majumdar, Arunabha; Haldar, Tanushree; Witte, John S.

    2016-01-01

    Discovering pleiotropic loci is important to understand the biological basis of seemingly distinct phenotypes. Most methods for assessing pleiotropy only test for the overall association between genetic variants and multiple phenotypes. To determine which specific traits are pleiotropic, we evaluate via simulation and application three different strategies. The first is model selection techniques based on the inverse regression of genotype on phenotypes. The second is a subset-based meta-analysis, ASSET [Bhattacharjee et al., 2012], which provides an optimal subset of non-null traits. And the third is a modified Benjamini-Hochberg (B-H) procedure for controlling the expected false discovery rate [Benjamini and Hochberg, 1995] in the framework of a phenome-wide association study. From our simulations we see that an inverse regression based approach, MultiPhen [O'Reilly et al., 2012], is more powerful than ASSET for detecting overall pleiotropic association, except when all the phenotypes are associated and have genetic effects in the same direction. For determining which specific traits are pleiotropic, the modified B-H procedure performs consistently better than the other two methods. The inverse regression based selection methods perform competitively with the modified B-H procedure only when the phenotypes are weakly correlated. The efficiency of ASSET is observed to lie below and in between the efficiency of the other two methods when the traits are weakly and strongly correlated, respectively. In our application to a large GWAS, we find that the modified B-H procedure also performs well, indicating that this may be an optimal approach for determining the traits underlying a pleiotropic signal. PMID:27238845
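
    The paper uses a modified B-H procedure; for orientation, a sketch of the standard Benjamini-Hochberg step-up procedure in Python:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            # step-up rule: reject the k smallest p-values, where k is the
            # largest i with p_(i) <= (i/m) * q
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            thresholds = (np.arange(1, m + 1) / m) * q
            below = p[order] <= thresholds
            k = np.max(np.where(below)[0]) + 1 if below.any() else 0
            rejected = np.zeros(m, dtype=bool)
            rejected[order[:k]] = True
            return rejected

        print(benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.9]))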

  16. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    PubMed

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis. This may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies focusing on usability, improved ergonomics, and additional image-based features.

  17. LC-MS metabolic profiling of Arabidopsis thaliana plant leaves and cell cultures: optimization of pre-LC-MS procedure parameters.

    PubMed

    t'Kindt, Ruben; De Veylder, Lieven; Storme, Michael; Deforce, Dieter; Van Bocxlaer, Jan

    2008-08-01

    This study addresses the optimization of methods for homogenizing Arabidopsis thaliana plant leaves as well as cell cultures, and for extracting their metabolites for metabolomics analysis by conventional liquid chromatography electrospray ionization mass spectrometry (LC-ESI/MS). Absolute recovery, process efficiency and procedure repeatability were compared between different pre-LC-MS homogenization/extraction procedures through the use of samples fortified before extraction with a range of representative metabolites. In this way, the magnitude of the matrix effect observed in the ensuing LC-MS based metabolomics analysis was evaluated. Based on the relative recovery and repeatability of key metabolites, the comprehensiveness of extraction (number of m/z-retention time pairs) and the clean-up potential of the approach (minimum matrix effects), the most appropriate sample pre-treatment was adopted. It combines liquid nitrogen homogenization for plant leaves with thermomixer-based extraction using MeOH/H(2)O 80/20. As such, an efficient and highly reproducible LC-MS plant metabolomics set-up is achieved, as illustrated by the results obtained for both LC-MS (8.88%+/-5.16 versus 7.05%+/-4.45) and technical variability (12.53%+/-11.21 versus 9.31%+/-6.65) in a comparative investigation of A. thaliana plant leaves and cell cultures, respectively.

  18. Exposure assessment of mobile phone base station radiation in an outdoor environment using sequential surrogate modeling.

    PubMed

    Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom

    2013-05-01

    Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km(2) for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
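
    Sequential design with a surrogate can be summarized as: fit a model to the measurements taken so far, then measure next where the model is least certain. The sketch below uses a Gaussian-process surrogate from scikit-learn as one common choice; the paper's particular surrogate model, and the kernel length scale here, are assumptions for illustration:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def next_location(X_measured, y_measured, X_candidates):
            # fit the surrogate to past measurements, then pick the candidate
            # location with the largest predictive uncertainty
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), normalize_y=True)
            gp.fit(X_measured, y_measured)
            _, std = gp.predict(X_candidates, return_std=True)
            return X_candidates[np.argmax(std)]

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 200, size=(10, 2))        # locations measured so far (m)
        y = rng.uniform(0.05, 0.5, size=10)          # measured field strength (V/m)
        grid = rng.uniform(0, 200, size=(500, 2))    # candidate street locations
        print(next_location(X, y, grid))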

  19. Quantitative metabolomics of the thermophilic methylotroph Bacillus methanolicus.

    PubMed

    Carnicer, Marc; Vieira, Gilles; Brautaset, Trygve; Portais, Jean-Charles; Heux, Stephanie

    2016-06-01

    The gram-positive bacterium Bacillus methanolicus MGA3 is a promising candidate for methanol-based biotechnologies. Accurate determination of intracellular metabolites is crucial for engineering this bacterium into an efficient microbial cell factory. Due to the diversity of chemical and cell properties, an experimental protocol validated on B. methanolicus is needed. Here a systematic evaluation of different techniques for establishing a reliable basis for metabolome investigations is presented. Metabolome analysis focused on metabolites closely linked with the central methanol metabolism of B. methanolicus. As an alternative to cold-solvent-based procedures, a solvent-free quenching strategy using stainless steel beads cooled to -20 °C was assessed. The precision, the consistency of the measurements, and the extent of metabolite leakage from quenched cells were evaluated in procedures with and without cell separation. The most accurate and reliable performance was provided by the method without cell separation, as significant metabolite leakage occurred in the procedures based on fast filtration. As a biological test case, the best protocol was used to assess the metabolome of B. methanolicus grown in chemostat on methanol at two different growth rates, and its validity was demonstrated. The presented protocol is a first and helpful step towards developing reliable metabolomics data for the thermophilic methylotroph B. methanolicus and will help in designing an efficient methylotrophic cell factory.

  20. Modal-pushover-based ground-motion scaling procedure

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in a nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant contributions of higher modes by considering the elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4, 6, and 13 stories), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.

  1. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
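
    Converting efficiency metrics to aggregated costs is straightforward once unit costs are fixed; a sketch with entirely hypothetical rates and tool prices, not the study's actual cost model:

        # hypothetical conversion of per-segment efficiency metrics to costs
        RATES = {"minutes": 40.0, "contrast_ml": 1.5}     # illustrative unit costs
        TOOL_COSTS = {"catheter": 120.0, "stent": 900.0, "wire": 60.0}

        def segment_cost(minutes, contrast_ml, tools):
            variable = minutes * RATES["minutes"] + contrast_ml * RATES["contrast_ml"]
            variable += sum(TOOL_COSTS[t] for t in tools)
            return variable

        print(segment_cost(12.5, 30, ["catheter", "wire"]))   # 725.0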

  2. Simulation center training as a means to improve resident performance in percutaneous noncontinuous CT-guided fluoroscopic procedures with dose reduction.

    PubMed

    Mendiratta-Lala, Mishal; Williams, Todd R; Mendiratta, Vivek; Ahmed, Hafeez; Bonnett, John W

    2015-04-01

    The purpose of this study was to evaluate the effectiveness of multifaceted simulation-based resident training for CT-guided fluoroscopic procedures by measuring procedural and technical skills, radiation dose, and procedure times before and after simulation training. A prospective analysis included 40 radiology residents and eight staff radiologists. Residents took an online pretest to assess baseline procedural knowledge. Second- through fourth-year residents' baseline technical skills with a procedural phantom were evaluated. First- through third-year residents then underwent formal didactic and simulation-based procedural and technical training with one of two interventional radiologists, followed by 1 month of supervised phantom-based practice. Thereafter, residents underwent final written and practical examinations. The practical examination included essential items from a 20-point checklist, including site and side marking, consent, time-out, and sterile technique, along with a technical skills portion assessing pedal steps, radiation dose, needle redirects, and procedure time. The results indicated statistically significant improvement in procedural and technical skills after simulation training. For residents, the median number of pedal steps decreased by three (p=0.001), median dose decreased by 15.4 mGy (p<0.001), median procedure time decreased by 4.0 minutes (p<0.001), median number of needle redirects decreased by 1.0 (p=0.005), and median number of 20-point checklist items successfully completed increased by three (p<0.001). The results suggest that procedural skills can be acquired and improved by simulation-based training of residents, regardless of experience. CT simulation training decreases procedural time, decreases radiation dose, and improves resident efficiency and confidence, which may transfer to clinical practice with improved patient care and safety.

  3. DEVELOPMENT OF AGENTS AND PROCEDURES FOR DECONTAMINATION OF THE YANKEE REACTOR PRIMARY COOLANT SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, R.M.

    1959-03-01

    Developments relative to decontamination achieved under the Yankee Research and Development program are reported. The decontamination of a large test loop which had been used to conduct corrosion rate studies for the Yankee reactor program is described. The basic permanganate-citrate decontamination procedure suggested for application in the Yankee reactor primary system cleanup was used. A study of the chemistry of this decontamination operation is presented, together with conclusions pertaining to the effectiveness of the solutions under the conditions studied. In an attempt to further improve the efficiency of the procedure, an additional series of static and dynamic tests was performed using contaminated sections of stainless steel tubing from the original S1W steam generator. Several variables in the process (reagent composition, contact time, temperature, and flow velocity) were studied. The changes in decontamination efficiency produced by these variations are discussed and compared with results obtained through the use of similar procedures. Based on the observations made, conclusions are drawn concerning the optimum conditions for this cleanup process, a new set of suggested basic permanganate-citrate decontamination instructions is presented, and recommendations are made concerning future studies involving this procedure. (auth)

  4. An intelligent knowledge-based and customizable home care system framework with ubiquitous patient monitoring and alerting techniques.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.

  5. An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650
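
    Rule-based reasoning of the kind described can be sketched as forward chaining over condition-action rules; the facts and rules below are hypothetical, not the paper's actual knowledge base:

        # minimal forward-chaining sketch: fire every rule whose conditions
        # are satisfied, adding its action as a new fact, until a fixed point
        rules = [
            ({"heart_rate_high", "at_rest"}, "alert_caregiver"),
            ({"missed_medication"}, "send_reminder"),
            ({"alert_caregiver", "no_response"}, "escalate_to_physician"),
        ]

        def infer(facts, rules):
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, action in rules:
                    if conditions <= derived and action not in derived:
                        derived.add(action)
                        changed = True
            return derived - set(facts)          # newly derived actions only

        print(infer({"heart_rate_high", "at_rest", "no_response"}, rules))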

  6. Development of a Flexible Computerized Management Infrastructure for a Commercial Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Syed Firasat; Hajek, Brian K.; Usman, Shoaib

    The report emphasizes a smooth transition from paper-based procedure systems (PBPSs) to computer-based procedure systems (CBPSs) for the existing commercial nuclear power plants in the U.S. The expected advantages of the transition are mentioned, including continued safe and efficient operation of the plants under their recently acquired or desired extended licenses. The report proposes a three-stage survey to aid in developing a national strategic plan for the transition from PBPSs to CBPSs. It also includes a comprehensive questionnaire that can be readily used for the first stage of the suggested survey.

  7. When is good, good enough? Methodological pragmatism for sustainable guideline development.

    PubMed

    Browman, George P; Somerfield, Mark R; Lyman, Gary H; Brouwers, Melissa C

    2015-03-06

    Continuous escalation in methodological and procedural rigor for evidence-based processes in guideline development is associated with increasing costs and production delays that threaten sustainability. While health research methodologists are appropriately responsible for promoting increasing rigor in guideline development, guideline sponsors are responsible for funding such processes. This paper acknowledges that other stakeholders in addition to methodologists should be more involved in negotiating trade-offs between methodological procedures and efficiency in guideline production to produce guidelines that are 'good enough' to be trustworthy and affordable under specific circumstances. The argument for reasonable methodological compromise to meet practical circumstances is consistent with current implicit methodological practice. This paper proposes a conceptual tool as a framework to be used by different stakeholders in negotiating, and explicitly reporting, reasonable compromises for trustworthy as well as cost-worthy guidelines. The framework helps fill a transparency gap in how methodological choices in guideline development are made. The principle, 'when good is good enough' can serve as a basis for this approach. The conceptual tool 'Efficiency-Validity Methodological Continuum' acknowledges trade-offs between validity and efficiency in evidence-based guideline development and allows for negotiation, guided by methodologists, of reasonable methodological compromises among stakeholders. Collaboration among guideline stakeholders in the development process is necessary if evidence-based guideline development is to be sustainable.

  8. A mathematical procedure to predict optical performance of CPCs

    NASA Astrophysics Data System (ADS)

    Yu, Y. M.; Yu, M. J.; Tang, R. S.

    2016-08-01

    To evaluate the optical performance of a CPC-based concentrating photovoltaic system, it is essential to find the angular dependence of the optical efficiency of a compound parabolic concentrator (CPC-θe) whose exit angle restricts the incident angle of solar rays on the solar cells to within θe for radiation over its acceptance angle. In this work, a mathematical procedure was developed to calculate the optical efficiency of a CPC-θe for radiation incident at any angle, based on radiation transfer within the CPC-θe. Calculations show that, given the acceptance half-angle (θa), the annual radiation collected by a full CPC-θe increases with θe, and the CPC without restriction of exit angle (CPC-90) annually collects the most radiation due to its larger geometric concentration (Ct); whereas for truncated CPCs with identical θa and Ct, the annual radiation collected by a CPC-θe is almost identical to that of a CPC-90, even slightly higher. Calculations also indicate that the annual radiation arriving on the absorber of a CPC-θe at angles larger than θe decreases with increasing θe but is always less than that of a CPC-90; this implies that a CPC-θe based PV system is more efficient than a CPC-90 based PV system, because radiation incident on the solar cells at large angles is poorly converted into electricity.
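
    The Ct dependence noted above is consistent with the standard étendue bound for 2D concentrators, C = sin(θe)/sin(θa), with CPC-90 recovering the familiar 1/sin(θa). A sketch of that textbook relation, not the paper's full radiation-transfer procedure:

        # geometric concentration limits from etendue conservation (2D troughs)
        import math

        def concentration(theta_a_deg, theta_e_deg=90.0):
            # restricting the exit angle to theta_e lowers the attainable
            # concentration from 1/sin(theta_a) to sin(theta_e)/sin(theta_a)
            return math.sin(math.radians(theta_e_deg)) / math.sin(math.radians(theta_a_deg))

        print(concentration(20))        # CPC-90: ~2.92
        print(concentration(20, 60))    # CPC-60: ~2.53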

  9. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
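
    The refinement parameter compares locally coarse- and fine-grid solutions, so the flagging step itself reduces to a threshold test, sketched below with invented values (the paper's curvature-based threshold selection is not reproduced here):

        # flag cells for refinement where coarse- and fine-grid solutions
        # disagree -- a mesh-convergence style indicator
        import numpy as np

        def flag_cells(u_fine, u_coarse, threshold):
            indicator = np.abs(u_fine - u_coarse)
            return indicator > threshold

        u_coarse = np.array([1.00, 1.02, 1.50, 1.95])
        u_fine   = np.array([1.00, 1.01, 1.70, 1.96])
        print(flag_cells(u_fine, u_coarse, 0.05))   # only the under-resolved cell is flagged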

  10. Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.

    PubMed

    Sugino, T; Kawahira, H; Nakamura, R

    2014-09-01

    Advanced surgical procedures, which have become complex and difficult, increase the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of surgical procedures. To this end, a surgical task analysis system was developed that uses only surgical navigation information. The analysis comprised division of the surgical procedure, task progress analysis, and task efficiency analysis. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of the surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate the task efficiency during each stage. These analysis methods were evaluated experimentally with two groups of surgeons, i.e. skilled and "other" surgeons. The performance metrics and analytical parameters included incidents during the operation, the surgical environment, and the surgeon's skills or habits. Comparison of the groups revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions; they also manipulated the surgical instruments more gently. Surgical task analysis, developed for quantitative assessment of surgical procedures and surgical performance, may provide practical methods and metrics for objective evaluation of surgical expertise.

  11. Efficiency of personal dosimetry methods in vascular interventional radiology.

    PubMed

    Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de

    2017-05-01

    The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses on the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. To rank the methods, a reference value of E (reference E) was calculated for comparison. The highest equivalent doses were found for the hands (0.34±0.93 mSv). The two methods described by Brazilian regulations overestimated E by approximately 100% and 200%. The most efficient method was the one recommended by the United States National Council on Radiation Protection and Measurements (NCRP). The mean and median differences of this method relative to reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one recommended by the NCRP, which uses two dosimeters (one over and one under protective aprons). Methods that employ at least two dosimeters are more efficient and provide better information regarding estimates of E and doses for shielded and unshielded regions. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
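
    The NCRP double-dosimeter estimate referred to above is commonly written with fixed weights on the under-apron and over-apron readings. The form below is quoted from general knowledge of NCRP Report No. 122, not from the abstract itself, so it should be checked against the report before use.

    ```python
    def effective_dose_ncrp(h_under_mSv: float, h_over_mSv: float) -> float:
        """Double-dosimeter estimate of effective dose E (mSv).

        h_under_mSv: equivalent dose from the dosimeter worn under the apron
        h_over_mSv:  equivalent dose from the dosimeter worn over the apron
        Weights follow the commonly cited NCRP Report No. 122 form; quoted
        from general knowledge, not from the study summarized above.
        """
        return 0.5 * h_under_mSv + 0.025 * h_over_mSv

    print(effective_dose_ncrp(h_under_mSv=0.05, h_over_mSv=0.80))
    ```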

  12. Efficient etching-free transfer of high quality, large-area CVD grown graphene onto polyvinyl alcohol films

    NASA Astrophysics Data System (ADS)

    Marta, Bogdan; Leordean, Cosmin; Istvan, Todor; Botiz, Ioan; Astilean, Simion

    2016-02-01

    Graphene transfer is a procedure of paramount importance for the production of graphene-based electronic devices. The transfer procedure can affect the electronic properties of the transferred graphene and can be detrimental to possible applications, both because of procedure-induced defects and because of limits to the scalability of the method. Hence, it is important to investigate new transfer methods for graphene that are less time consuming and show greater promise. In the present study we propose an efficient, etching-free transfer method that consists of applying a thin polyvinyl alcohol layer on top of CVD-grown graphene on Cu and then peeling off the graphene onto the polyvinyl alcohol film. We investigated the quality of the graphene before and after the transfer using Raman spectroscopy and imaging, as well as optical and atomic force microscopy. This simple transfer method is scalable and can lead to complete transfer of graphene onto flexible and transparent polymer support films without degrading the quality of the graphene during the transfer procedure.

  13. Anesthesiology and gastroenterology.

    PubMed

    de Villiers, Willem J S

    2009-03-01

    A successful population-based colorectal cancer screening program requires efficient colonoscopy practices that incorporate high throughput, safety, and patient satisfaction. There are several different modalities of nonanesthesiologist-administered sedation currently available and in development that may fulfill these requirements. Modern-day gastroenterology endoscopic procedures are complex and demand the full attention of the attending gastroenterologist and the complete cooperation of the patient. Many of these procedures also require the anesthesiologist's knowledge, skills, abilities, and experience to ensure optimal procedure results and good patient outcomes. The goals of this review are (1) to provide a gastroenterology perspective on the use of propofol in gastroenterology endoscopic practice, and (2) to describe newer GI endoscopy procedures performed by gastroenterologists that might involve anesthesiologists.

  14. Design of an Integrated Thermoelectric Generator Power Converter for Ultra-Low Power and Low Voltage Body Energy Harvesters aimed at EEG/ECG Active Electrodes

    NASA Astrophysics Data System (ADS)

    Ataei, Milad; Robert, Christian; Boegli, Alexis; Farine, Pierre-André

    2014-11-01

    This paper describes a design procedure for an efficient integrated power converter for body thermal energy harvesting. The procedure is based on loss examination for a self-powered medical device. All optimal system parameters are calculated subject to the transducer constraints and the application form factor. It is found that the converter's working frequency can be optimized through proper design of its pulse generator circuit. At the selected frequency, it is demonstrated that the large-area voltage doubler can be eliminated at the expense of wider switches. With this method, more than 60% efficiency is achieved in simulation for a transducer output voltage of just 20 mV, and 30% of the total chip area is saved.

  15. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
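
    A one-dimensional sketch of the spring-system analogy described above: stiffer springs where the solution varies rapidly pull grid points into that region as the system relaxes toward its minimum-energy state. This is a simplified illustration, not the paper's three-dimensional multidirectional scheme.

    ```python
    import numpy as np

    n = 41
    x = np.linspace(0.0, 1.0, n)
    f = lambda x: np.tanh(20.0 * (x - 0.5))            # model solution with a front

    for _ in range(200):                                # Gauss-Seidel-style relaxation
        w = np.abs(np.gradient(f(x), x)) + 0.1          # spring stiffness (weights)
        k = 0.5 * (w[:-1] + w[1:])                      # stiffness per interval
        for i in range(1, n - 1):
            # Equilibrium of point i between its two springs.
            x[i] = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])

    # Spacing shrinks near the front at x = 0.5 and grows elsewhere.
    print(np.round(np.diff(x).min(), 4), np.round(np.diff(x).max(), 4))
    ```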

  16. Parallax-Robust Surveillance Video Stitching

    PubMed Central

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed to build wide field-of-view (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes; being location-dependent, it turns out to be more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert the ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756

  17. Tracking transcriptional activities with high-content epifluorescent imaging

    NASA Astrophysics Data System (ADS)

    Hua, Jianping; Sima, Chao; Cypert, Milana; Gooden, Gerald C.; Shack, Sonsoles; Alla, Lalitamba; Smith, Edward A.; Trent, Jeffrey M.; Dougherty, Edward R.; Bittner, Michael L.

    2012-04-01

    High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.

  18. TRADITIONAL CANISTER-BASED OPEN WASTE MANAGEMENT SYSTEM VERSUS CLOSED SYSTEM: HAZARDOUS EXPOSURE PREVENTION AND OPERATING THEATRE STAFF SATISFACTION.

    PubMed

    Horn, M; Patel, N; MacLellan, D M; Millard, N

    2016-06-01

    Exposure to blood and body fluids is a major concern for health care professionals working in operating rooms (ORs). Thus, it is essential that hospitals use fluid waste management systems that minimise risk to staff while maximising efficiency. The current study compared the utility of a 'closed' system with a traditional canister-based 'open' system in the OR of a private hospital. A total of 30 arthroscopy, urology, and orthopaedic cases were observed. The closed system was used in five, four, and six cases, respectively, and the open system was used in nine, two, and four cases, respectively. The average number of opportunities for staff to be exposed to hazardous fluids was lower for the closed system than for the open system during arthroscopy and urology procedures. The open system required nearly 3.5 times as much staff time for set-up, maintenance during procedures, and post-procedure disposal of waste. Theatre staff expressed greater satisfaction with the closed system than with the open. In conclusion, compared with the open system, the closed system offers a less hazardous and more efficient method of disposing of fluid waste generated in the OR.

  19. Designing simulator-based training: an approach integrating cognitive task analysis and four-component instructional design.

    PubMed

    Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G

    2012-01-01

    Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component/instructional design model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.

  20. Large scale validation of an efficient CRISPR/Cas-based multi gene editing protocol in Escherichia coli.

    PubMed

    Zerbini, Francesca; Zanella, Ilaria; Fraccascia, Davide; König, Enrico; Irene, Carmela; Frattini, Luca F; Tomasi, Michele; Fantappiè, Laura; Ganfini, Luisa; Caproni, Elena; Parri, Matteo; Grandi, Alberto; Grandi, Guido

    2017-04-24

    The exploitation of the CRISPR/Cas9 machinery coupled to lambda (λ) recombinase-mediated homologous recombination (recombineering) is becoming the method of choice for genome editing in E. coli. First proposed by Jiang and co-workers, the strategy has been subsequently fine-tuned by several authors who demonstrated, using a few selected loci, that the efficiency of mutagenesis (number of mutant colonies over total number of colonies analyzed) can be extremely high (up to 100%). However, from published data it is difficult to appreciate the robustness of the technology, defined as the number of successfully mutated loci over the total number of targeted loci. This information is particularly relevant in high-throughput genome editing, where repetition of experiments to rescue missing mutants would be impractical. This work describes a "brute force" validation activity, which culminated in the definition of a robust, simple and rapid protocol for single or multiple gene deletions. We first set up our own version of the CRISPR/Cas9 protocol and then evaluated the mutagenesis efficiency by changing different parameters, including the sequence of guide RNAs, the length and concentration of donor DNAs, and the use of single-stranded and double-stranded donor DNAs. We then validated the optimized conditions by targeting 78 "dispensable" genes. This work led to the definition of a protocol, featuring the use of double-stranded synthetic donor DNAs, which guarantees mutagenesis efficiencies consistently higher than 10% and a robustness of 100%. The procedure can also be applied for simultaneous gene deletions. This work defines for the first time the robustness of a CRISPR/Cas9-based protocol based on a large sample size. Since the technical solutions proposed here can be applied to other similar procedures, the data could be of general interest for the scientific community working on bacterial genome editing and, in particular, for those involved in synthetic biology projects requiring high-throughput procedures.

  1. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
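
    A sketch of the progressive-sampling ensemble described above, using synthetic data in place of the MISR attributes: networks of growing complexity are trained on growing chunks, training stops when held-out error no longer improves, and member predictions are averaged.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20_000, 8))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=20_000)
    X_val, y_val = X[-2_000:], y[-2_000:]          # held-out validation block

    ensemble, best_err, chunk, hidden = [], np.inf, 500, 4
    while chunk <= 16_000:
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                           random_state=0).fit(X[:chunk], y[:chunk])
        ensemble.append(net)
        pred = np.mean([m.predict(X_val) for m in ensemble], axis=0)
        err = np.mean((pred - y_val) ** 2)
        if err >= best_err:                        # no further improvement: stop
            ensemble.pop()                         # discard the unhelpful member
            break
        best_err, chunk, hidden = err, chunk * 2, hidden * 2

    print(f"ensemble of {len(ensemble)} networks, validation MSE {best_err:.4f}")
    ```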

  2. A Web-based Multimedia Program Before Colonoscopy Increased Knowledge and Decreased Anxiety, Sedation Requirement, and Procedure Time.

    PubMed

    Parker, Siddhartha; Zipursky, Jonathan; Ma, Helen; Baumblatt, Geri-Lynn; Siegel, Corey A

    2018-07-01

    To assess the impact of a web-based multimedia patient engagement program on patient anxiety, perception, and knowledge about colonoscopy, in addition to procedure outcomes. The success of colonoscopy for colorectal cancer screening depends in part on patients' understanding of the preparation and of the procedure. Patients were randomized to use either our institution's standard preprocedure colonoscopy packet or a web-based multimedia patient engagement program (Emmi Solutions) before their scheduled procedure. On the day of colonoscopy, all participants completed a survey including questions to assess knowledge and perception of colonoscopy, in addition to the State-Trait Anxiety Inventory. We also collected procedure data including medication doses and procedure time. Patients in the experimental group correctly answered knowledge questions (82%) more often than the control group (74%) (P=0.0003). More than half (58%) of patients in the experimental group felt the intervention reduced their anxiety about the procedure, and the State-Trait Anxiety Inventory score was lower in the experimental group (P=0.026). Patients who viewed the program required less midazolam (3.66 vs. 4.46 mg, P=0.0035), and total procedure time was shorter (24.8 vs. 29 min, P=0.024). A web-based multimedia patient engagement program watched before colonoscopy decreased patient anxiety, medication requirements, and procedure time while increasing knowledge. This intervention could help patients understand and feel more comfortable about colonoscopy, leading to increased screening rates while increasing efficiency and decreasing recovery time.

  3. Object-oriented philosophy in designing an adaptive finite-element package for 3D elliptic differential equations

    NASA Astrophysics Data System (ADS)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving increasing attention in scientific and engineering fields, its efficient implementation remains an open problem because of the complexity of the procedures involved. In this paper, we propose a clear C++ framework implementation to show the powerful properties of object-oriented philosophy (OOP) in designing such complex adaptive procedures. Using the modular features of an OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation or refinement, the a-posteriori error estimator, the adaptive strategy, and the final post-processing. After these separate modules are properly designed, a connected framework for the adaptive procedure is formed. Based on the general elliptic differential equation, little additional effort is needed within the adaptive framework to run practical simulations. To show the preferable properties of OOP adaptive design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated by the small number of additions required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, which demonstrates the strong extensibility and code reuse offered by an OOP language. Finally, we believe that, based on this modular OOP framework implementation, more advanced adaptive analysis systems will become available in the future.
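
    A skeleton of the modular division described above, sketched in Python for brevity (the paper's framework is C++); all class and method names are illustrative. Each stage of the adaptive loop is an interchangeable module, so a new problem such as the IP case only plugs in new subclasses.

    ```python
    from abc import ABC, abstractmethod

    class ErrorEstimator(ABC):
        @abstractmethod
        def estimate(self, mesh, solution): ...    # a-posteriori error per cell

    class Refiner(ABC):
        @abstractmethod
        def refine(self, mesh, errors): ...        # mesh generation/refinement

    class Solver(ABC):
        @abstractmethod
        def solve(self, mesh): ...                 # discrete elliptic solve

    def adaptive_loop(mesh, solver: Solver, estimator: ErrorEstimator,
                      refiner: Refiner, tol: float, max_cycles: int = 10):
        """Generic adaptive cycle: solve -> estimate -> refine until converged."""
        for _ in range(max_cycles):
            solution = solver.solve(mesh)
            errors = estimator.estimate(mesh, solution)
            if max(errors) < tol:
                break
            mesh = refiner.refine(mesh, errors)
        return mesh, solution
    ```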

  4. Simulation-based educational curriculum for fluoroscopically guided lumbar puncture improves operator confidence and reduces patient dose.

    PubMed

    Faulkner, Austin R; Bourgeois, Austin C; Bradley, Yong C; Hudson, Kathleen B; Heidel, R Eric; Pasciak, Alexander S

    2015-05-01

    Fluoroscopically guided lumbar puncture (FGLP) is a commonly performed procedure with increased success rates relative to bedside technique. However, FGLP also exposes both patient and staff to ionizing radiation. The purpose of this study was to determine if the use of a simulation-based FGLP training program using an original, inexpensive lumbar spine phantom could improve operator confidence and efficiency, while also reducing patient dose. A didactic and simulation-based FGLP curriculum was designed, including a 1-hour lecture and hands-on training with a lumbar spine phantom prototype developed at our institution. Six incoming post-graduate year 2 (PGY-2) radiology residents completed a short survey before taking the course, and each resident practiced 20 simulated FGLPs using the phantom before their first clinical procedure. Data from the 114 lumbar punctures (LPs) performed by the six trained residents (prospective cohort) were compared to data from 514 LPs performed by 17 residents who did not receive simulation-based training (retrospective cohort). Fluoroscopy time (FT), FGLP success rate, and indication were compared. There was a statistically significant reduction in average FT for the 114 procedures performed by the prospective study cohort compared to the 514 procedures performed by the retrospective cohort. This held true for all procedures in aggregate, LPs for myelography, and all procedures performed for a diagnostic indication. Aggregate FT for the prospective group (0.87 ± 0.68 minutes) was significantly lower compared to the retrospective group (1.09 ± 0.65 minutes) and resulted in a 25% reduction in average FT (P = .002). There was no statistically significant difference in the number of failed FGLPs between the two groups. Our simulation-based FGLP curriculum resulted in improved operator confidence and reduced FT. These changes suggest that resident procedure efficiency was improved, whereas patient dose was reduced. The FGLP training program was implemented by radiology residents and required a minimal investment of time and resources. The LP spine phantom used during training was inexpensive, durable, and effective. In addition, the phantom is compatible with multiple modalities including fluoroscopy, computed tomography, and ultrasound and could be easily adapted to other applications such as facet injections or joint arthrograms. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  5. Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.

    PubMed

    Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W

    2012-05-01

    Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.

  6. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
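
    As a generic stand-in for the blur detector (not the colonoscopy-specific scheme proposed in the paper), a common choice is the variance of the Laplacian, which drops sharply for defocused frames; the file name and threshold below are assumptions to be tuned per video source.

    ```python
    import cv2

    def is_blurry(frame_bgr, threshold: float = 100.0) -> bool:
        """Flag a frame as blurry when its Laplacian variance is low."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

    cap = cv2.VideoCapture("colonoscopy.avi")      # hypothetical input file
    kept = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if not is_blurry(frame):
            kept += 1                              # pass frame on to shot segmentation
    cap.release()
    print(f"non-blurry frames kept: {kept}")
    ```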

  7. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.

  8. Accurate wavelengths for X-ray spectroscopy and the NIST hydrogen-like ion database

    NASA Astrophysics Data System (ADS)

    Kotochigova, S. A.; Kirby, K. P.; Brickhouse, N. S.; Mohr, P. J.; Tupitsyn, I. I.

    2005-06-01

    We have developed an ab initio multi-configuration Dirac-Fock-Sturm method for the precise calculation of X-ray emission spectra, including energies, transition wavelengths and transition probabilities. The calculations are based on non-orthogonal basis sets, generated by solving the Dirac-Fock and Dirac-Fock-Sturm equations. Inclusion of Sturm functions into the basis set provides an efficient description of correlation effects in highly charged ions and fast convergence of the configuration interaction procedure. A second part of our study is devoted to developing a theoretical procedure and creating an interactive database to generate energies and transition frequencies for hydrogen-like ions. This procedure is highly accurate and based on current knowledge of the relevant theory, which includes relativistic, quantum electrodynamic, recoil, and nuclear size effects.

  9. Numerical solution of quadratic matrix equations for free vibration analysis of structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1975-01-01

    This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure for the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique, enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure, based on a simultaneous iteration technique, is also described for the usual case in which only the first few modes are required. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
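
    The paper's solver combines Sturm sequences with inverse iteration; as a generic small-scale illustration of the same quadratic eigenvalue problem, the pencil (λ²M + λC + K)x = 0 can instead be linearized into a 2n × 2n generalized problem and passed to a dense solver. The matrices below are synthetic.

    ```python
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(0)
    n = 4
    M = np.eye(n)                                       # mass
    K = rng.normal(size=(n, n))
    K = K @ K.T + n * np.eye(n)                         # SPD stiffness
    C = 0.05 * (M + K)                                  # proportional damping

    # First companion linearization: z = [x, lam*x], A z = lam B z.
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K,               -C       ]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), M        ]])
    lam, _ = eig(A, B)

    idx = np.argsort(np.abs(lam))                       # lowest modes first
    print(np.round(lam[idx][:4], 3))
    ```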

  10. [Catheter-related bladder discomfort after urological surgery: importance of the type of surgery and efficiency of treatment by clonazepam].

    PubMed

    Maro, S; Zarattin, D; Baron, T; Bourez, S; de la Taille, A; Salomon, L

    2014-09-01

    A bladder catheter can induce catheter-related bladder discomfort (CRBD). Muscarinic receptor antagonists are the gold-standard treatment. Clonazepam is an antimuscarinic, muscle-relaxing oral drug. The aim of this study was to look for a correlation between the type of surgical procedure and the occurrence of CRBD, and to evaluate the effectiveness of clonazepam. One hundred patients requiring a bladder catheter were evaluated. Sex, age, BMI, presence of diabetes, surgical procedure, and occurrence of CRBD were noted. Pain was evaluated with a visual analogue scale. The timing of pain, the need for specific treatment with clonazepam, and its effectiveness were noted. Correlations between preoperative data, type of surgical procedure, occurrence of CRBD, and effectiveness of treatment were evaluated. There were 79 men and 21 women (age: 65.9 years, BMI: 25.4). Twelve patients presented with diabetes. The surgical procedure concerned the prostate in 39 cases, the bladder in 19 cases (tumor resections), endo-urology in 20 cases, the upper urinary tract in 12 cases (nephrectomy…) and the lower urinary tract in 10 cases (sphincter, sub-urethral tape). Forty patients presented CRBD (pain 4.5 on the VAS). This pain occurred 0.6 days after surgery. No correlation was found between preoperative data and CRBD. Bladder resection and endo-urological procedures were the surgical procedures that most often induced CRBD. Clonazepam was effective in 30 (75%) of the 40 patients with CRBD. However, it was less effective in cases of bladder tumor resection. CRBD is frequent and occurs immediately after surgery. Bladder resection and endo-urology were the main surgical procedures inducing CRBD. Clonazepam is effective in 75% of cases. Bladder resection is the surgical procedure most refractory to treatment. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  11. A novel gel based vehicle for the delivery of acetylcholine in quantitative sudomotor axon reflex testing.

    PubMed

    Sletten, David M; Kimpinski, Kurt; Weigand, Stephen D; Low, Phillip A

    2009-10-05

    This study describes a novel gel-based vehicle for the delivery of acetylcholine (ACh) during quantitative sudomotor axon reflex testing (QSART). Dose-response and current-response studies were undertaken in 20 healthy control participants to characterize the efficiency of a gel-based vehicle for the delivery of ACh. Values obtained for total sweat volume and latency to sweat onset with gel iontophoresis of ACh during QSART were comparable to previously published normative data using solution-based vehicles. Patient discomfort during the QSART procedure with the gel-based vehicle was minimal. Improvement in iontophoresis using the gel formulation as a vehicle for ACh delivery has the potential to lower the voltage required to overcome skin resistance during QSART and may result in improved patient comfort during the procedure.

  12. Massed Group Desensitization in Reduction of Test-Anxiety

    ERIC Educational Resources Information Center

    Dawley, Harold H., Jr.; Wenrich, W. W.

    1973-01-01

    The results of this study of two groups of nursing students, one administered desensitization sessions, the other not, agree with earlier studies which indicate that massed group desensitization is an efficient and efficacious procedure for the reduction of anxiety-based disorders. (Author/KM)

  13. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed from 2011 to 2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and using the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for the personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). A single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP was greater with the multi-step procedure than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed not just cost and time savings, but that the time efficiencies gained with CELLEX® allow more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
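
    A minimal sketch of the time-driven activity-based costing logic applied above. Only the session durations (270 versus 120 minutes) come from the text; the staff rate and material costs are hypothetical placeholders, not the department's actual inputs.

    ```python
    STAFF_RATE_EUR_PER_MIN = 0.75          # assumed loaded personnel cost rate

    def per_session_cost(materials_eur: float, minutes: float) -> float:
        """TDABC: cost = direct materials + (capacity cost rate x time used)."""
        return materials_eur + STAFF_RATE_EUR_PER_MIN * minutes

    multi_step = per_session_cost(materials_eur=1100.0, minutes=270)   # assumed materials
    integrated = per_session_cost(materials_eur=1150.0, minutes=120)   # assumed materials
    print(f"multi-step {multi_step:.2f} EUR vs integrated {integrated:.2f} EUR")
    ```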

  15. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water-supply well-field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given set of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
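
    A sketch of the stack-ordering idea under stated assumptions: each candidate design is checked against the realizations in their current order, evaluation aborts at the first violation, and the violating (critical) realization is promoted to the top of the stack so later candidates meet it first. The constraint check stands in for a full model run.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    stack = list(rng.uniform(0.6, 1.4, size=500))      # uncertain parameter draws
    model_runs = 0

    def reliable(design: float) -> bool:
        """True if the design meets the constraint on every realization."""
        global model_runs
        for i, r in enumerate(stack):
            model_runs += 1
            if design * r < 1.0:                       # stand-in reliability constraint
                stack.insert(0, stack.pop(i))          # reorder: critical realization first
                return False
        return True

    feasible = [d for d in np.linspace(1.0, 3.0, 50) if reliable(d)]
    print(f"cheapest reliable design: {min(feasible):.3f} "
          f"after {model_runs} model runs (vs {50 * len(stack)} exhaustive)")
    ```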

  16. Comparison of the Efficacy and Efficiency of the Use of Virtual Reality Simulation With High-Fidelity Mannequins for Simulation-Based Training of Fiberoptic Bronchoscope Manipulation.

    PubMed

    Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi

    2018-04-01

    This study compared the efficacy and efficiency of virtual reality simulation (VRS) with that of a high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as well as changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M: 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than mannequin training in simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved in both modalities after adequate training.

  17. Procedures in complex systems: the airline cockpit.

    PubMed

    Degani, A; Wiener, E L

    1997-05-01

    In complex human-machine systems, successful operations depend on an elaborate set of procedures which are specified by the operational management of the organization. These procedures indicate to the human operator (in this case the pilot) the manner in which operational management intends to have various tasks done. The intent is to provide guidance to the pilots and to ensure a safe, logical, efficient, and predictable (standardized) means of carrying out the objectives of the job. However, procedures can become a hodge-podge. Inconsistent or illogical procedures may lead to noncompliance by operators. Based on a field study with three major airlines, the authors propose a model for procedure development called the "Four P's": philosophy, policies, procedures, and practices. Using this model as a framework, the authors discuss the intricate issue of designing flight-deck procedures, and propose a conceptual approach for designing any set of procedures. The various factors, both external and internal to the cockpit, that must be considered for procedure design are presented. In particular, the paper addresses the development of procedures for automated cockpits--a decade-long, and highly controversial issue in commercial aviation. Although this paper is based on airline operations, we assume that the principles discussed here are also applicable to other high-risk supervisory control systems, such as space flight, manufacturing process control, nuclear power production, and military operations.

  18. Monitoring the enrichment of virgin olive oil with natural antioxidants by using a new capillary electrophoresis method.

    PubMed

    Nevado, Juan José Berzas; Robledo, Virginia Rodríguez; Callado, Carolina Sánchez-Carnerero

    2012-07-15

    The enrichment of virgin olive oil (VOO) with natural antioxidants contained in various herbs (rosemary, thyme and oregano) was studied. Three different enrichment procedures were used for the solid-liquid extraction of antioxidants from the herbs into VOO. One involved simply keeping the herbs in contact with the VOO for 190 days; another, keeping the herb-VOO mixture under stirring at room temperature (25°C) for 11 days; and the third, stirring at temperatures above room temperature (35-40°C). The efficiency of each procedure was assessed by using a reproducible, efficient, reliable capillary zone electrophoresis (CZE) method to separate and determine selected phenolic compounds (rosmarinic and caffeic acids) in the oil. Prior to electrophoretic separation, the studied antioxidants were isolated from the VOO matrix by using an optimised preconcentration procedure based on solid phase extraction (SPE). The CZE method was optimised and validated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Probing RNA Native Conformational Ensembles with Structural Constraints.

    PubMed

    Fonseca, Rasmus; van den Bedem, Henry; Bernauer, Julie

    2016-05-01

    Noncoding ribonucleic acids (RNA) play a critical role in a wide variety of cellular processes, ranging from regulating gene expression to post-translational modification and protein synthesis. Their activity is modulated by highly dynamic exchanges between three-dimensional conformational substates, which are difficult to characterize experimentally and computationally. Here, we present an innovative, entirely kinematic computational procedure to efficiently explore the native ensemble of RNA molecules. Our procedure projects degrees of freedom onto a subspace of conformation space defined by distance constraints in the tertiary structure. The dimensionality reduction enables efficient exploration of conformational space. We show that the conformational distributions obtained with our method broadly sample the conformational landscape observed in NMR experiments. Compared to normal mode analysis-based exploration, our procedure diffuses faster through the experimental ensemble while also accessing conformational substates to greater precision. Our results suggest that conformational sampling with a highly reduced but fully atomistic representation of noncoding RNA expresses key features of their dynamic nature.

  20. Using Curriculum Based Measures To Identify and Monitor Progress in an Adult Basic Education Program. Final Report.

    ERIC Educational Resources Information Center

    Bean, Rita M.; And Others

    The purpose of a project was to develop and test curriculum-based procedures and measures to monitor and assess the reading and writing progress of adults in a basic education program. The most efficient, reliable, and feasible measure of reading performance from beginning reading level through eighth-grade level was the repeated oral reading…

  1. Strategy Guideline. Proper Water Heater Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoeschele, M.; Springer, D.; German, A.

    2015-04-09

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  3. Utilizing VA Information Technology to Develop Psychiatric Resident Prescription Profiles

    ERIC Educational Resources Information Center

    Rohrbaugh, Robert; Federman, Daniel G.; Borysiuk, Lydia; Sernyak, Michael

    2009-01-01

    Objectives: Feedback about resident prescription practices allows psychiatry educators to ensure that residents have broad prescribing experience and can facilitate practice-based learning initiatives. The authors report on a procedure utilizing U.S. Department of Veterans Affairs' computerized pharmacy records to efficiently construct…

  4. Modal control theory and application to aircraft lateral handling qualities design

    NASA Technical Reports Server (NTRS)

    Srinathkumar, S.

    1978-01-01

    A multivariable synthesis procedure based on eigenvalue/eigenvector assignment is reviewed and is employed to develop a systematic design procedure to meet the lateral handling qualities design objectives of a fighter aircraft over a wide range of flight conditions. The closed loop modal characterization developed provides significant insight into the design process and plays a pivotal role in the synthesis of robust feedback systems. The simplicity of the synthesis algorithm yields an efficient computer aided interactive design tool for flight control system synthesis.
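
    A generic eigenvalue-assignment sketch in the spirit of the synthesis procedure reviewed above, computing a feedback gain K so that A − BK has prescribed closed-loop eigenvalues. The four-state lateral model (states roughly sideslip, roll rate, yaw rate, bank angle) is illustrative, not the paper's aircraft data.

    ```python
    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[-0.10,  0.00, -1.00,  0.05],
                  [-8.00, -2.20,  0.40,  0.00],
                  [ 4.00, -0.30, -0.50,  0.00],
                  [ 0.00,  1.00,  0.00,  0.00]])
    B = np.array([[ 0.00,  0.03],        # aileron, rudder effectiveness (assumed)
                  [27.00,  2.00],
                  [ 0.50, -4.00],
                  [ 0.00,  0.00]])
    desired = np.array([-3.0 + 3.0j, -3.0 - 3.0j, -2.0, -1.5])

    K = place_poles(A, B, desired).gain_matrix
    print(np.sort_complex(np.linalg.eigvals(A - B @ K)))   # matches 'desired'
    ```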

  5. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  6. Lipid-based colloidal carriers for peptide and protein delivery – liposomes versus lipid nanoparticles

    PubMed Central

    Martins, Susana; Sarmento, Bruno; Ferreira, Domingos C; Souto, Eliana B

    2007-01-01

    This paper highlights the importance of lipid-based colloidal carriers and their pharmaceutical implications in the delivery of peptides and proteins for oral and parenteral administration. There are several examples of biomacromolecules used nowadays in therapeutics that are promising candidates for delivery by means of liposomes and lipid nanoparticles, such as solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC). Several production procedures can be applied to achieve a high association efficiency between the bioactives and the carrier, depending on the physicochemical properties of both as well as on the production procedure applied. Generally, this can lead to improved bioavailability or, in the case of oral administration, a more consistent temporal profile of absorption from the gastrointestinal tract. Advantages and drawbacks of such colloidal carriers are also pointed out. This article describes strategies used for the formulation of peptides and proteins, methods used for the assessment of association efficiency, and practical considerations regarding toxicological concerns. PMID:18203427

  7. Optical efficiency of solar concentrators by a reverse optical path method.

    PubMed

    Parretta, A; Antonini, A; Milan, E; Stefancich, M; Martinelli, G; Armani, M

    2008-09-15

    A method for the optical characterization of a solar concentrator, based on the reverse illumination by a Lambertian source and measurement of intensity of light projected on a far screen, has been developed. It is shown that the projected light intensity is simply correlated to the angle-resolved efficiency of a concentrator, conventionally obtained by a direct illumination procedure. The method has been applied by simulating simple reflective nonimaging and Fresnel lens concentrators.

  8. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.
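
    A scalar analogue of the conversion procedure, assuming a Henyey-Greenstein phase function as the "tabulated" input: expansion coefficients in ordinary Legendre polynomials are recovered by Gauss-Legendre quadrature. The full method expands all scattering-matrix elements in generalized spherical functions; the asymmetry parameter below is an assumption.

    ```python
    import numpy as np

    g = 0.7                                                 # asymmetry parameter (assumed)
    nodes, weights = np.polynomial.legendre.leggauss(128)   # mu = cos(theta)
    phase = (1 - g**2) / (1 + g**2 - 2 * g * nodes) ** 1.5  # "tabulated" values

    # chi_l = (2l+1)/2 * integral of p(mu) P_l(mu) dmu; exact HG value is (2l+1) g^l.
    for l in range(9):
        P_l = np.polynomial.legendre.Legendre.basis(l)(nodes)
        chi_l = (2 * l + 1) / 2.0 * np.sum(weights * phase * P_l)
        print(f"l={l}: coefficient {chi_l:.4f} (exact {(2 * l + 1) * g**l:.4f})")
    ```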

  9. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, the artificial compressibility method was very efficient in terms of computing time and robustness. For an unsteady flow requiring a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  10. Preparation and characterization of electrodes for the NASA Redox storage system

    NASA Technical Reports Server (NTRS)

    Reid, M. A.; Gahn, R. F.; Ling, J. S.; Charleston, J.

    1980-01-01

    Electrodes for the Redox energy storage system based on iron and chromium chloride reactants are discussed. The physical properties of several lots of felt were determined. Procedures were developed for evaluating electrode performance in lab-scale cells. Experimental procedures for evaluating electrodes by cyclic voltammetry are described which minimize the IR losses due to the high internal resistance in the felt (distributed resistance). Methods to prepare electrodes which reduce the coevolution of hydrogen at the chromium electrode and eliminate the drop in voltage on discharge occasionally seen with previous electrodes are discussed. Single cells of 0.3329 ft² area with improved membranes and electrodes are operating at over 80% voltage efficiency and coulombic efficiencies of over 98% at current densities of 16 to 20 A/ft².

  11. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
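
    A minimal sketch of the first of the three procedures, a 2-down/1-up staircase (converging near 70.7% correct) run against a simulated listener; the logistic psychometric-function parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_thresh, slope, lapse = 5.0, 1.5, 0.02         # gap in ms (assumed)

    def p_correct(gap_ms):
        """Logistic psychometric function with a lapse rate."""
        p = 1.0 / (1.0 + np.exp(-slope * (gap_ms - true_thresh)))
        return (1 - lapse) * p + lapse * 0.5

    gap, step, correct_run, reversals, last_dir = 12.0, 2.0, 0, [], 0
    while len(reversals) < 12:
        if rng.random() < p_correct(gap):              # simulated trial: correct
            correct_run += 1
            if correct_run == 2:                       # two correct -> harder
                correct_run, direction = 0, -1
            else:
                continue                               # no level change yet
        else:                                          # one wrong -> easier
            correct_run, direction = 0, +1
        gap = max(0.5, gap + direction * step)
        if last_dir and direction != last_dir:         # reversal detected
            reversals.append(gap)
            step = max(0.5, step / 2)                  # shrink step after reversal
        last_dir = direction

    print(f"threshold estimate: {np.mean(reversals[-6:]):.2f} ms")
    ```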

  12. 78 FR 49607 - Energy Conservation Program: Test Procedures for Residential Clothes Dryers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... reasonably designed to produce test results which measure energy efficiency, energy use or estimated annual... Energy Conservation Program: Test Procedures for Residential Clothes Dryers; Final Rule... Federal... Conservation Program: Test Procedures for Residential Clothes Dryers AGENCY: Office of Energy Efficiency and...

  13. Fast Beam-Based BPM Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertsche, K.; Loos, H.; Nuhn, H.-D.

    2012-10-15

    The Alignment Diagnostic System (ADS) of the LCLS undulator system indicates that the 33 undulator quadrupoles have extremely high position stability over many weeks. However, beam trajectory straightness and lasing efficiency degrade more quickly than this. A lengthy Beam Based Alignment (BBA) procedure must be executed every two to four weeks to re-optimize the X-ray beam parameters. The undulator system includes RF cavity Beam Position Monitors (RFBPMs), several of which are utilized by an automatic feedback system to align the incoming electron-beam trajectory to the undulator axis. The beam trajectory straightness degradation has been traced to electronic drifts of the gain and offset of the BPMs used in the beam feedback system. To quickly recover the trajectory straightness, we have developed a fast beam-based procedure to recalibrate the BPMs. This procedure takes advantage of the high-precision monitoring capability of the ADS, which allows highly repeatable positioning of undulator quadrupoles. This report describes the ADS, the position stability of the LCLS undulator quadrupoles, and some results of the new recovery procedure.

  14. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  15. Electronic-Imen-Delphi (EID): An Online Conferencing Procedure.

    ERIC Educational Resources Information Center

    Passig, David; Sharbat, Aviva

    2000-01-01

    Examines the efficiency of the Imen-Delphi (ID) technique as an electronic procedure for conferencing that helps participants clarify their opinions and expectations regarding preferable and possible futures. Describes an electronic version of the original ID procedure and tests its efficiency among a group of experts on virtual reality and…

  16. Use of Lean methodology to improve operating room efficiency in hospitals across the Kingdom of Saudi Arabia.

    PubMed

    Hassanain, Mazen; Zamakhshary, Mohammed; Farhat, Ghada; Al-Badr, Ahmed

    2017-04-01

    The objective of this study was to assess whether an intervention on process efficiency using the Lean methodology leads to improved utilization of the operating room (OR), as measured by key performance metrics of OR efficiency. A quasi-experimental design was used to test the impact of the intervention by comparing pre-intervention and post-intervention data on five key performance indicators. The ORs of 12 hospitals were selected across regions of the Kingdom of Saudi Arabia (KSA). The participants were patients treated at these hospitals during the study period. The intervention comprised the following: (i) creation of visual dashboards that enable starting the first case on time; (ii) use of computerized surgical list management; (iii) optimization of time allocation; (iv) development of an operating model with policies and procedures for the pre-anesthesia clinic; and (v) creation of a governance structure with policies and procedures for day surgeries. The following were the main outcome measures: on-time start for the first case, room turnover times, percent of overrun cases, average weekly procedure volume and OR utilization. The hospitals exhibited statistically significant improvements in the following performance metrics: on-time start for the first case, room turnover times and percent of overrun cases. A statistically significant difference in OR utilization or average weekly procedure volumes was not detected. The implementation of a Lean-based intervention targeting process efficiency applied in ORs across various KSA hospitals resulted in encouraging results on some metrics at some sites, suggesting that the approach has the potential to produce significant benefit in the future. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Comparing solar energy alternatives

    NASA Astrophysics Data System (ADS)

    White, J. R.

    1984-03-01

    This paper outlines a computational procedure for comparing the merits of alternative processes to convert solar radiation to heat, electrical power, or chemical energy. The procedure uses the ratio of equipment investment to useful work as an index. Comparisons with conversion counterparts based on conventional fuels are also facilitated by examining this index. The procedure is illustrated by comparisons of (1) photovoltaic converters of differing efficiencies; (2) photovoltaic converters with and without focusing concentrators; (3) photovoltaic conversion plus electrolysis vs photocatalysis for the production of hydrogen; (4) photovoltaic conversion plus plasma arcs vs photocatalysis for nitrogen fixation. Estimates for conventionally-fuelled processes are included for comparison. The reasons why solar-based concepts fare poorly in such comparisons are traced to the low energy density of solar radiation and its low stream time factor resulting from the limited number of daylight hours available and clouds obscuring the sun.
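
    A toy version of the index, under assumed numbers (not from the paper), showing how the low energy density of sunlight and the low stream time factor drive up the investment-to-useful-work ratio:

        def investment_per_kwh(capital_cost, area_m2, insolation_kw_m2,
                               efficiency, stream_time_factor, lifetime_years):
            """Dollars of equipment investment per kWh of useful output."""
            hours = lifetime_years * 8760.0
            useful_kwh = (area_m2 * insolation_kw_m2 * efficiency
                          * stream_time_factor * hours)
            return capital_cost / useful_kwh

        # Hypothetical 15%-efficient photovoltaic array, 1 kW/m^2 peak sun,
        # 20% stream time factor over a 20-year life.
        index = investment_per_kwh(30000.0, 100.0, 1.0, 0.15, 0.2, 20.0)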

  18. Iodine, a Mild Reagent for the Aromatization of Terpenoids.

    PubMed

    Domingo, Victoriano; Prieto, Consuelo; Silva, Lucia; Rodilla, Jesús M L; Quílez del Moral, José F; Barrero, Alejandro F

    2016-04-22

    Efficient procedures based on the use of iodine for the aromatization of a series of terpenoids possessing diene and homoallylic or allylic alcohol functionalities are described. Different examples are reported as a proof-of-concept study. Furthermore, iodine also proved to mediate the dehydrogenation of testosterone.

  19. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  20. Efficient reanalysis of structures by a direct modification method. [local stiffness modifications of large structures

    NASA Technical Reports Server (NTRS)

    Raibstein, A. I.; Kalev, I.; Pipano, A.

    1976-01-01

    A procedure for the local stiffness modifications of large structures is described. It enables structural modifications without an a priori definition of the changes in the original structure and without loss of efficiency due to multiple loading conditions. The solution procedure, implemented in NASTRAN, involved the decomposed stiffness matrix and the displacement vectors of the original structure. It solves the modified structure exactly, irrespective of the magnitude of the stiffness changes. In order to investigate the efficiency of the present procedure and to test its applicability within a design environment, several real and large structures were solved. The results of the efficiency studies indicate that the break-even point of the procedure varies between 8% and 60% stiffness modifications, depending upon the structure's characteristics and the options employed.
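
    The general idea, though not the NASTRAN implementation itself, can be sketched with the Sherman-Morrison-Woodbury identity: a local modification dK = V S Vᵀ touches few degrees of freedom, so the modified structure is solved exactly by reusing the original factorization plus one small dense solve:

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def reanalyze(K_factor, f, V, S):
            """Exact displacements of the modified structure K + V @ S @ V.T,
            reusing the decomposed original stiffness matrix."""
            u0 = cho_solve(K_factor, f)          # original solution
            Z = cho_solve(K_factor, V)           # one solve per modified DOF
            small = np.linalg.inv(S) + V.T @ Z   # dense, size = modified DOFs
            return u0 - Z @ np.linalg.solve(small, V.T @ u0)

        # Hypothetical 4-DOF example: add stiffness 1.5 on DOF 2.
        K = np.diag([4.0, 3.0, 2.0, 5.0]); f = np.ones(4)
        V = np.zeros((4, 1)); V[2, 0] = 1.0
        S = np.array([[1.5]])
        u = reanalyze(cho_factor(K), f, V, S)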

  1. 30 CFR 7.102 - Exhaust gas cooling efficiency test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Exhaust gas cooling efficiency test. 7.102 Section 7.102 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING....102 Exhaust gas cooling efficiency test. (a) Test procedures. (1) Follow the procedures specified in...

  2. Hybrid Grid Techniques for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.

    1996-01-01

    During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured-based procedures. Structured grids tend to be computationally efficient, and have high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient and it is difficult to resolve viscous layers. Recently hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.

  3. Layer-by-Layer Molecular Assemblies for Dye-Sensitized Photoelectrosynthesis Cells Prepared by Atomic Layer Deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Degao; Sheridan, Matthew V.; Shan, Bing

    2017-08-30

    In a Dye Sensitized Photoelectrosynthesis Cell (DSPEC), the relative orientation of catalyst and chromophore plays an important role. Here we introduce a new, robust, Atomic Layer Deposition (ALD) procedure for the preparation of assemblies on wide bandgap semiconductors. In the procedure, phosphonated metal complex precursors react with metal ion bridging to an external chromophore or catalyst to give assemblies linked by Al(III), Sn(IV), Ti(IV), or Zr(IV) metal oxide units as bridges. The procedure has been extended to chromophore-catalyst assemblies for water oxidation catalysis. A SnO2 bridged assembly on SnO2/TiO2 core/shell electrodes undergoes water splitting with an incident photon conversion efficiency (IPCE) of 17.1% at 440 nm. Reduction of water at a Ni(II)-based catalyst on NiO films has been shown to give H2. Compared to conventional solution-based procedures, the ALD approach offers significant advantages in scope and flexibility for the preparation of stable surface structures.

  4. Improved pupil dilation with medication-soaked pledget sponges.

    PubMed

    Weddle, Celeste; Thomas, Nikki; Dienemann, Jacqueline

    2013-08-01

    Use of multiple preoperative drops for pupil dilation has been shown to be inexact, to delay surgery, and to cause dissatisfaction among perioperative personnel. This article reports on an evidence-based, quality improvement project to locate and appraise research on improved effectiveness and efficiency of mydriasis (ie, pupillary dilation), and the subsequent implementation of a pledget-sponge procedure for pupil dilation at one ambulatory surgery center. Project leaders used an evidence-based practice model to assess the problem, research options for improvement, define goals, and implement a pilot project to test the new dilation technique. Outcomes from the pilot project showed a reduced number of delays caused by poor pupil dilation and a decrease in procedure turnover time. The project team solicited informal feedback from preoperative nurses, which reflected increased satisfaction in preparing patients for cataract procedures. After facility administrators and surgeons accepted the procedure change, it was adopted for preoperative use for all patients undergoing cataract surgery at the ambulatory surgery center. Copyright © 2013 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  5. A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.

    PubMed

    Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei

    2013-05-30

    Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, the acquisition system raises the requirement for data throughput, along with increasing the conversion rate and resolution of the ADC. It is therefore of considerable interest to develop a high-performance real-time acquisition system, which can relieve the limitation of data throughput. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
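
    A minimal software analogue of the ping-pong transmission strategy (the FPGA design itself differs; buffer sizes and names are illustrative): one buffer accumulates incoming records while the other is being read out, so transfer never stalls acquisition:

        import numpy as np

        RECORD_LEN = 1024
        buffers = [np.zeros(RECORD_LEN, dtype=np.int64) for _ in range(2)]
        active = 0  # index of the buffer currently accumulating

        def accumulate(spectrum):
            """Add one digitized TOF record into the active buffer."""
            buffers[active] += spectrum

        def swap_and_read():
            """Swap roles; return the filled buffer's contents for transfer."""
            global active
            filled = buffers[active]
            active ^= 1                 # acquisition continues in the other buffer
            out = filled.copy()
            filled[:] = 0
            return out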

  6. Functional Metagenomics: Construction and High-Throughput Screening of Fosmid Libraries for Discovery of Novel Carbohydrate-Active Enzymes.

    PubMed

    Ufarté, Lisa; Bozonnet, Sophie; Laville, Elisabeth; Cecchini, Davide A; Pizzut-Serin, Sandra; Jacquiod, Samuel; Demanèche, Sandrine; Simonet, Pascal; Franqueville, Laure; Veronese, Gabrielle Potocki

    2016-01-01

    Activity-based metagenomics is one of the most efficient approaches to boost the discovery of novel biocatalysts from the huge reservoir of uncultivated bacteria. In this chapter, we describe a highly generic procedure of metagenomic library construction and high-throughput screening for carbohydrate-active enzymes. Applicable to any bacterial ecosystem, it enables the swift identification of functional enzymes that are highly efficient, alone or acting in synergy, to break down polysaccharides and oligosaccharides.

  7. Extraction of microalgae derived lipids with supercritical carbon dioxide in an industrial relevant pilot plant.

    PubMed

    Lorenzen, Jan; Igl, Nadine; Tippelt, Marlene; Stege, Andrea; Qoura, Farah; Sohling, Ulrich; Brück, Thomas

    2017-06-01

    Microalgae are capable of producing up to 70% w/w triglycerides with respect to their dry cell weight. Since microalgae utilize the greenhouse gas CO2, can be cultivated on marginal lands, and grow up to ten times faster than terrestrial plants, the generation of algae oils is a promising option for the development of sustainable bioprocesses that are of interest for the chemical, lubricant, cosmetic and food industries. For the first time we have carried out the optimization of supercritical carbon dioxide (SCCO2) mediated lipid extraction from biomass of the microalgae Scenedesmus obliquus and Scenedesmus obtusiusculus under industrially relevant conditions. All experiments were carried out in an industrial pilot plant setting, according to current ATEX directives, with batch sizes up to 1.3 kg. Different combinations of pressure (7-80 MPa), temperature (20-200 °C) and CO2 to biomass ratio (20-200) have been tested on the dried biomass. The most efficient conditions were found to be 12 MPa pressure, a temperature of 20 °C and a CO2 to biomass ratio of 100, resulting in a high extraction efficiency of up to 92%. Since the optimized CO2 extraction still yields a crude triglyceride product that contains various algae-derived contaminants, such as chlorophyll and carotenoids, a very effective and scalable purification procedure, based on cost-efficient bentonite-based adsorbers, was devised. In addition to the sequential extraction and purification procedure, we present a consolidated online-bleaching procedure for algae-derived oils that is realized within the supercritical CO2 extraction plant.

  8. 78 FR 17925 - Energy Conservation Program for Consumer Products: Decision and Order Granting a Waiver to BSH...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... test procedures that are reasonably designed to produce results that measure energy efficiency, energy... contains one or more design characteristics that prevent testing according to the prescribed test procedure... Department of Energy Residential Dishwasher Test Procedure AGENCY: Office of Energy Efficiency and Renewable...

  9. 78 FR 17927 - Energy Conservation Program for Consumer Products: Decision and Order Granting a Waiver to BSH...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... test procedures that are reasonably designed to produce results that measure energy efficiency, energy... contains one or more design characteristics that prevent testing according to the prescribed test procedure... Department of Energy Residential Dishwasher Test Procedure AGENCY: Office of Energy Efficiency and Renewable...

  10. Electrochemistry of the Hall-Heroult Process for Aluminum Smelting.

    ERIC Educational Resources Information Center

    Haupin, W. E.

    1983-01-01

    Nearly all aluminum is produced by the electrolysis of alumina dissolved in a molten cryolite-based electrolyte, the Hall-Heroult Process. Various aspects of the procedure are discussed, focusing on electrolyte chemistry, dissolution of alumina, electrode reactions, current efficiency, and cell voltage. Suggestions for graduate study related to…

  11. 40 CFR 63.1413 - Compliance demonstration procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... or operators are required to conduct a design evaluation for a small control device. An owner or... for small control devices shall be set based on the design evaluation required by paragraph (a)(3) of... efficiency for a control device or control technology, a design evaluation shall address the composition and...

  12. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... COMMERCIAL AND INDUSTRIAL EQUIPMENT Small Electric Motors Test Procedures § 431.445 Determination of small... the mechanical and electrical characteristics of that basic model, and (ii) Based on engineering or... Department of Energy records showing the method or methods used; the mathematical model, the engineering or...

  13. Cognitive Changes among Institutionalized Elderly People

    ERIC Educational Resources Information Center

    Navarro, Jose I.; Menacho, Inmaculada; Alcalde, Concepcion; Marchena, Esperanza; Ruiz, Gonzalo; Aguilar, Manuel

    2009-01-01

    The efficiency of different cognitive training procedures in elderly people was studied. Two types of methods to train cognitive and memory functions were compared. One method was based on new technologies and the other one on pencil-and-paper activities. Thirty-six elderly institutionalized people aged 68-94 were trained. Quantitative and memory…

  14. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
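
    For orientation, a bare-bones preconditioned conjugate-gradient loop; a diagonal (Jacobi) preconditioner stands in here for the incomplete-Cholesky and polynomial preconditioners compared in the paper:

        import numpy as np

        def pcg(A, b, precond, tol=1e-8, max_iter=500):
            """CG for symmetric positive definite A; precond(r) applies M^-1."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = precond(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = precond(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
        x = pcg(A, b, lambda r: r / np.diag(A))  # Jacobi stand-in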

  15. Supporting the future nuclear workforce with computer-based procedures

    DOE PAGES

    Oxstrand, Johanna; Le Blanc, Katya

    2016-05-01

    Here we see that computer-based tools have dramatically increased ease and efficiency of everyday tasks. Gone are the days of paging through a paper catalog, transcribing product numbers, and calculating totals. Today, a consumer can find a product online with a simple search engine, and then purchase it in a matter of a few clicks. Paper catalogs have their place, but it is hard to imagine life without on-line shopping sites. All tasks conducted in a nuclear power plant are guided by procedures, which helps ensure safe and reliable operation of the plants. One prominent goal of the nuclear industry is to minimize the risk of human errors. To achieve this goal one has to ensure tasks are correctly and consistently executed. This is partly achieved by training and by a structured approach to task execution, which is provided by procedures and work instructions. Procedures are used in the nuclear industry to direct workers' actions in a proper sequence. The governing idea is to minimize the reliance on memory and choices made in the field. However, the procedure document may not contain sufficient information to successfully complete the task. Therefore, the worker might have to carry additional documents such as turnover sheets, operation experience, drawings, and other procedures to the work site. The nuclear industry is operated with paper procedures like paper catalogs of the past. A field worker may carry a large stack of documents needed to complete a task to the field. Even though the paper process has helped keep the industry safe for decades, there are limitations to using paper. Paper procedures are static (i.e., the content does not change after the document is printed), difficult to search, and rely heavily on the field worker’s situational awareness and ability to consistently meet the high expectation of human performance excellence. With computer-based procedures (CBPs) that stack of papers may be reduced to the size of a small tablet or even a smart phone. Instead of manually matching equipment identification numbers listed in the procedure with the number on the physical equipment the field worker can simply scan a barcode to ensure the correct valve is opened while simultaneously creating a record. Instead of navigating through a maze of cross-references, CBPs enable intelligent work path navigation which accounts for past decisions and observation, thereby enabling more efficient and safe task completion.

  16. Supporting the future nuclear workforce with computer-based procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya

    Here we see that computer-based tools have dramatically increased ease and efficiency of everyday tasks. Gone are the days of paging through a paper catalog, transcribing product numbers, and calculating totals. Today, a consumer can find a product online with a simple search engine, and then purchase it in a matter of a few clicks. Paper catalogs have their place, but it is hard to imagine life without on-line shopping sites. All tasks conducted in a nuclear power plant are guided by procedures, which helps ensure safe and reliable operation of the plants. One prominent goal of the nuclear industry is to minimize the risk of human errors. To achieve this goal one has to ensure tasks are correctly and consistently executed. This is partly achieved by training and by a structured approach to task execution, which is provided by procedures and work instructions. Procedures are used in the nuclear industry to direct workers' actions in a proper sequence. The governing idea is to minimize the reliance on memory and choices made in the field. However, the procedure document may not contain sufficient information to successfully complete the task. Therefore, the worker might have to carry additional documents such as turnover sheets, operation experience, drawings, and other procedures to the work site. The nuclear industry is operated with paper procedures like paper catalogs of the past. A field worker may carry a large stack of documents needed to complete a task to the field. Even though the paper process has helped keep the industry safe for decades, there are limitations to using paper. Paper procedures are static (i.e., the content does not change after the document is printed), difficult to search, and rely heavily on the field worker’s situational awareness and ability to consistently meet the high expectation of human performance excellence. With computer-based procedures (CBPs) that stack of papers may be reduced to the size of a small tablet or even a smart phone. Instead of manually matching equipment identification numbers listed in the procedure with the number on the physical equipment the field worker can simply scan a barcode to ensure the correct valve is opened while simultaneously creating a record. Instead of navigating through a maze of cross-references, CBPs enable intelligent work path navigation which accounts for past decisions and observation, thereby enabling more efficient and safe task completion.

  17. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from efficiency and robustness resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novel iterative method to treat the mass transfer for a three component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three component mixture providing a unique and admissible equilibrium state.

  18. SUPPORTING THE INDUSTRY BY DEVELOPING A DESIGN GUIDANCE FOR COMPUTER-BASED PROCEDURES FOR FIELD WORKERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; LeBlanc, Katya

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of the existing guidance is intended for control room CBP systems, and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented on a high level, which leaves the designer to interpret what is meant by the guidance and how to specifically implement it. The authors therefore developed design guidance specifically tailored to instructions that are carried out in the field.

  19. Air Traffic Management Technology Demonstration-1 Concept of Operations (ATD-1 ConOps), Version 2.0

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Johnson, William C.; Swenson, Harry N.; Robinson, John E.; Prevot, Tom; Callantine, Todd J.; Scardina, John; Greene, Michael

    2013-01-01

    This document is an update to the operations and procedures envisioned for NASA s Air Traffic Management (ATM) Technology Demonstration #1 (ATD-1). The ATD-1 Concept of Operations (ConOps) integrates three NASA technologies to achieve high throughput, fuel-efficient arrival operations into busy terminal airspace. They are Traffic Management Advisor with Terminal Metering (TMA-TM) for precise time-based schedules to the runway and points within the terminal area, Controller-Managed Spacing (CMS) decision support tools for terminal controllers to better manage aircraft delay using speed control, and Flight deck Interval Management (FIM) avionics and flight crew procedures to conduct airborne spacing operations. The ATD-1 concept provides de-conflicted and efficient operations of multiple arrival streams of aircraft, passing through multiple merge points, from top-of-descent (TOD) to the Final Approach Fix. These arrival streams are Optimized Profile Descents (OPDs) from en route altitude to the runway, using primarily speed control to maintain separation and schedule. The ATD-1 project is currently addressing the challenges of integrating the three technologies and their implementation in an operational environment. The ATD-1 goals include increasing the throughput of high-density airports, reducing controller workload, increasing efficiency of arrival operations and the frequency of trajectory-based operations, and promoting aircraft ADS-B equipage.

  20. NASA's ATM Technology Demonstration-1: Integrated Concept of Arrival Operations

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Swenson, Harry N.; Prevot, Thomas; Callantine, Todd J.

    2012-01-01

    This paper describes operations and procedures envisioned for NASA s Air Traffic Management (ATM) Technology Demonstration #1 (ATD-1). The ATD-1 Concept of Operations (ConOps) demonstration will integrate three NASA technologies to achieve high throughput, fuel-efficient arrival operations into busy terminal airspace. They are Traffic Management Advisor with Terminal Metering (TMA-TM) for precise time-based schedules to the runway and points within the terminal area, Controller-Managed Spacing (CMS) decision support tools for terminal controllers to better manage aircraft delay using speed control, and Flight deck Interval Management (FIM) avionics and flight crew procedures to conduct airborne spacing operations. The ATD-1 concept provides de-conflicted and efficient operations of multiple arrival streams of aircraft, passing through multiple merge points, from top-of-descent (TOD) to touchdown. It also enables aircraft to conduct Optimized Profile Descents (OPDs) from en route altitude to the runway, using primarily speed control to maintain separation and schedule. The ATD-1 project is currently addressing the challenges of integrating the three technologies and their implementation in an operational environment. Goals of the ATD-1 demonstration include increasing the throughput of high-density airports, reducing controller workload, increasing efficiency of arrival operations and the frequency of trajectory-based operations, and promoting aircraft ADS-B equipage.

  1. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    NASA Astrophysics Data System (ADS)

    Schmitt, Kara Anne

    This research aims to prove that strict adherence to procedures and rigid compliance to process in the US Nuclear Industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational, cultural, and based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and less reflective because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now put on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases in which this directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely highly on procedures, and the organizational pressures of required compliance to procedures may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to the problems facing the industry includes in-depth, multiple-fault-failure training that tests the operator's knowledge of the situation. This builds operator collaboration, competence and confidence to know what to do, and when to do it, in response to an emergency situation. Strict adherence to procedures and rigid compliance to process may not prevent incidents or increase safety; building operators' fundamental skills of collaboration, competence and confidence will.

  2. Assessment of modal-pushover-based scaling procedure for nonlinear response history analysis of ordinary standard bridges

    USGS Publications Warehouse

    Kalkan, E.; Kwong, N.

    2012-01-01

    The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case in the central United States) or when high-intensity records are needed (as is the case in San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of “true” median structural responses. The adjective “accurate” refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective “efficient” refers to the record-to-record variability of responses. In this paper, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing Ordinary Standard bridges typical of reinforced concrete bridge construction in California. These bridges are the single-bent overpass, multi-span bridge, curved bridge, and skew bridge. As compared with benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the EDPs. Thus, it is a useful tool for scaling ground motions as input to nonlinear RHAs of Ordinary Standard bridges.

  3. Effective and efficient learning in the operating theater with intraoperative video-enhanced surgical procedure training.

    PubMed

    van Det, M J; Meijerink, W J H J; Hoff, C; Middel, B; Pierie, J P E N

    2013-08-01

    INtraoperative Video Enhanced Surgical procedure Training (INVEST) is a new training method designed to improve the transition from basic skills training in a skills lab to procedural training in the operating theater. Traditionally, the master-apprentice model (MAM) is used for procedural training in the operating theater, but this model lacks uniformity and efficiency at the beginning of the learning curve. This study was designed to investigate the effectiveness and efficiency of INVEST compared to MAM. Ten surgical residents with no laparoscopic experience were recruited for a laparoscopic cholecystectomy training curriculum either by the MAM or with INVEST. After a uniform course in basic laparoscopic skills, each trainee performed six cholecystectomies that were digitally recorded. For 14 steps of the procedure, an observer who was blinded for the type of training determined whether the step was performed entirely by the trainee (2 points), partially by the trainee (1 point), or by the supervisor (0 points). Time measurements revealed the total procedure time and the amount of effective procedure time during which the trainee acted as the operating surgeon. Results were compared between both groups. Trainees in the INVEST group were awarded statistically significant more points (115.8 vs. 70.2; p < 0.001) and performed more steps without the interference of the supervisor (46.6 vs. 18.8; p < 0.001). Total procedure time was not lengthened by INVEST, and the part performed by trainees was significantly larger (69.9 vs. 54.1 %; p = 0.004). INVEST enhances effectiveness and training efficiency for procedural training inside the operating theater without compromising operating theater time efficiency.

  4. Energy-Efficient Next-Generation Passive Optical Networks Based on Sleep Mode and Heuristic Optimization

    NASA Astrophysics Data System (ADS)

    Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik

    2015-05-01

    In this article, an energy-efficiency mechanism for next-generation passive optical networks is investigated through heuristic particle swarm optimization. Ten-gigabit Ethernet-wavelength division multiplexing optical code division multiplexing-passive optical network next-generation passive optical networks are based on the use of a legacy 10-gigabit Ethernet-passive optical network with the advantage of using only an en/decoder pair of optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet-passive optical network, combined with a power control procedure aiming to adjust the transmitted power of the active optical network units while maximizing the overall energy-efficiency network. The particle swarm optimization based power control algorithm establishes the optimal transmitted power in each optical network unit according to the network pre-defined quality of service requirements. The objective is controlling the power consumption of the optical network unit according to the traffic demand by adjusting its transmitter power in an attempt to maximize the number of transmitted bits with minimum energy consumption, achieving maximal system energy efficiency. Numerical results have revealed that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism compared to 55% energy savings when just a sleeping-mode-based mechanism is deployed.
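
    A stripped-down particle swarm loop of the kind the article employs, with a toy cost in place of the network's energy-per-bit objective; all names, constants, and the QoS penalty are illustrative only:

        import numpy as np

        rng = np.random.default_rng(0)

        def pso(cost, dim, n_particles=20, iters=100, lo=0.0, hi=1.0):
            """Minimize cost(p) over ONU transmit-power vectors p in [lo, hi]^dim."""
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pcost = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pcost.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost(p) for p in x])
                better = c < pcost
                pbest[better], pcost[better] = x[better], c[better]
                g = pbest[pcost.argmin()].copy()
            return g

        # Toy objective: heavy penalty below a QoS floor, otherwise spend
        # as little power as possible.
        qos_floor = 0.3
        best = pso(lambda p: np.sum(np.where(p < qos_floor, 10.0, p)), dim=8)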

  5. Effect of Ozone Addition on Combustion Efficiency of Hydrogen: Liquid-Oxygen Propellant in Small Rockets

    NASA Technical Reports Server (NTRS)

    Miller, Riley O.; Brown, Dwight D.

    1959-01-01

    An experimental study shows that 2 percent by weight ozone in oxygen has little effect on overall reactivity for a range of oxidant-fuel weight ratios from 1 to 6. This conclusion is based on characteristic-velocity measurements in 200-pound-thrust chambers at a pressure of 300 pounds per square inch absolute with low-efficiency injectors. The presence of 9 percent ozone in oxygen also did not affect performance in an efficient chamber. Explosions were encountered when equipment or procedure permitted ozone to concentrate locally. These experiments indicate that even small amounts of ozone in oxygen can cause operational problems.

  6. Dosimetry procedures for an industrial irradiation plant

    NASA Astrophysics Data System (ADS)

    Grahn, Ch.

    Accurate and reliable dosimetry procedures constitute a very important part of process control and quality assurance at a radiation processing plant. γ-Dose measurements were made on the GBS 84 irradiator for food and other products on pallets or in containers. Chemical dosimeters were exposed in the facility under conditions of the typical plant operation. The choice of the dosimeter systems employed was based on the experience in chemical dosimetry gained over several years. Dose uniformity information was obtained in air, spices, bulbs, feeds, cosmetics, plastics and surgical goods. Most products currently irradiated require dose uniformity which can be efficiently provided by pallet or box irradiators like the GBS 84. The radiation performance characteristics and some dosimetry procedures are discussed.

  7. Solution of quadratic matrix equations for free vibration analysis of structures.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
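
    The Sturm sequence property the algorithm rests on can be sketched densely (the paper's code additionally exploits the banded form of the matrices, which this sketch does not): the number of squared natural frequencies below a shift sigma equals the number of negative eigenvalues of K - sigma*M:

        import numpy as np
        from scipy.linalg import ldl

        def count_freqs_below(K, M, sigma):
            """Count generalized eigenvalues of (K, M) below sigma = omega**2."""
            _, D, _ = ldl(K - sigma * M)     # inertia from the LDL^T factorization
            return int(np.sum(np.linalg.eigvalsh(D) < 0))

        # Hypothetical 3-DOF system with squared frequencies 2, 5 and 9.
        K = np.diag([2.0, 5.0, 9.0]); M = np.eye(3)
        n = count_freqs_below(K, M, sigma=3.0)   # -> 1 (only 2.0 lies below 3.0)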

  8. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.

    PubMed

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A

    2008-09-01

    The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).

  9. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2013-01-01

    The pretest–posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest–posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175). PMID:23729942
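
    For intuition only, the simpler covariate-adjusted (ANCOVA-style) estimator that the EL procedure improves upon; this sketch is not the empirical-likelihood estimator itself and ignores missing posttests, and all data are simulated:

        import numpy as np

        rng = np.random.default_rng(1)

        n = 200
        pre = rng.normal(50, 10, n)                     # baseline (pretest)
        treat = rng.integers(0, 2, n)                   # randomized assignment
        post = pre + 5.0 * treat + rng.normal(0, 5, n)  # true effect = 5

        X = np.column_stack([np.ones(n), treat, pre])
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        effect = beta[1]   # adjusted treatment-effect estimate, near 5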

  10. All-silicon nanorod-based Dammann gratings.

    PubMed

    Li, Zile; Zheng, Guoxing; He, Ping'An; Li, Song; Deng, Qiling; Zhao, Jiangnan; Ai, Yong

    2015-09-15

    Established diffractive optical elements (DOEs), such as Dammann gratings, whose phase profile is controlled by etching different depths into a transparent dielectric substrate, suffer from a contradiction between the complexity of fabrication procedures and the performance of such gratings. In this Letter, we combine the concept of geometric phase and phase modulation in depth, and prove by theoretical analysis and numerical simulation that nanorod arrays etched on a silicon substrate have a characteristic of strong polarization conversion between two circularly polarized states and can act as a highly efficient half-wave plate. More importantly, only by changing the orientation angles of each nanorod can the arrays control the phase of a circularly polarized light, cell by cell. With the above principle, we report the realization of nanorod-based Dammann gratings reaching diffraction efficiencies of 50%-52% in the C-band fiber telecommunications window (1530-1565 nm). In this design, uniform 4×4 spot arrays with an extending angle of 59°×59° can be obtained in the far field. Because of these advantages of the single-step fabrication procedure, accurate phase controlling, and strong polarization conversion, nanorod-based Dammann gratings could be utilized for various practical applications in a range of fields.
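
    The orientation-to-phase rule behind such designs, sketched under the usual geometric-phase assumption: a half-wave-plate nanorod rotated by theta imparts a phase of 2*theta on the converted circular polarization, so a target Dammann profile phi is encoded as theta = phi/2:

        import numpy as np

        def orientation_map(target_phase):
            """Nanorod orientation angles (rad) realizing a phase profile."""
            return np.mod(target_phase, 2 * np.pi) / 2.0

        # Binary 0/pi Dammann-type profile -> rods at 0 or pi/2.
        phase = np.array([[0.0, np.pi],
                          [np.pi, 0.0]])
        theta = orientation_map(phase)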

  11. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.

    1983-01-01

    Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.

  12. Electrotransformation of Lactobacillus delbrueckii subsp. bulgaricus and L. delbrueckii subsp. lactis with Various Plasmids

    PubMed Central

    Serror, Pascale; Sasaki, Takashi; Ehrlich, S. Dusko; Maguin, Emmanuelle

    2002-01-01

    We describe, for the first time, a detailed electroporation procedure for Lactobacillus delbrueckii. Three L. delbrueckii strains were successfully transformed. Under optimal conditions, the transformation efficiency was 10⁴ transformants per μg of DNA. Using this procedure, we identified several plasmids able to replicate in L. delbrueckii and integrated an integrative vector based on phage integrative elements into the L. delbrueckii subsp. bulgaricus chromosome. These vectors provide a good basis for developing molecular tools for L. delbrueckii and open the field of genetic studies in L. delbrueckii. PMID:11772607

  13. Thermo-viscoelastic analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Lin, Kuen Y.; Hwang, I. H.

    1989-01-01

    The thermo-viscoelastic boundary value problem for anisotropic materials is formulated and a numerical procedure is developed for the efficient analysis of stress and deformation histories in composites. The procedure is based on the finite element method and therefore it is applicable to composite laminates containing geometric discontinuities and complicated boundary conditions. Using the present formulation, the time-dependent stress and strain distributions in both notched and unnotched graphite/epoxy composites have been obtained. The effect of temperature and ply orientation on the creep and relaxation response is also studied.

  14. Performance and Health Test Procedure for Grid Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baggu, Murali M; Smith, Kandler A; Friedl, Andrew

    A test procedure to evaluate the performance and health of field installations of grid-connected battery energy storage systems (BESS) is described. Performance and health metrics captured in the procedures are: round-trip efficiency, standby losses, response time/accuracy, and useable energy/state of charge at different discharge/charge rates over the system's lifetime. The procedures are divided into reference performance tests, which require the system to be put in a test mode and are to be conducted in intervals, and real-time monitoring tests, which collect data during normal operation without interruption. The procedures can be applied on a wide array of BESS with little modification and can thus support BESS operators in the management of BESS field installations with minimal interruption and expenditure. Simulated results based on a detailed system simulation of a prototype system are provided as a guideline.
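
    One of the reference performance metrics, round-trip efficiency, reduces to integrating charge and discharge energy from sampled power data; a sketch with illustrative sign convention, units and numbers:

        import numpy as np

        def round_trip_efficiency(t_s, power_kw):
            """Discharged energy over charged energy; power_kw > 0 while
            charging and < 0 while discharging."""
            charge = np.trapz(np.clip(power_kw, 0, None), t_s)
            discharge = -np.trapz(np.clip(power_kw, None, 0), t_s)
            return discharge / charge

        t = np.arange(0, 7200, 60.0)            # two hours, 1-min samples
        p = np.where(t < 3600, 100.0, -88.0)    # charge, then discharge
        eta = round_trip_efficiency(t, p)       # about 0.88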

  15. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The new mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal in a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion on the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
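
    A hypothetical rendering of the idea (the maps, scaling and names below are illustrative, not the paper's exact system): run the chaos game on a contractive IFS and use the resulting fractal-distributed point as the mutation step instead of a Gaussian or Cauchy draw:

        import numpy as np

        rng = np.random.default_rng(42)

        # Three affine maps of a Sierpinski-type IFS in the plane.
        MAPS = [(np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.00, 0.0])),
                (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.50, 0.0])),
                (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5]))]

        def ifs_sample(n_iter=20):
            """Chaos game: iterate randomly chosen maps; return final point."""
            p = rng.random(2)
            for _ in range(n_iter):
                A, b = MAPS[rng.integers(len(MAPS))]
                p = A @ p + b
            return p

        def mutate(individual, scale=0.1):
            """EP-style mutation with an IFS-derived offset per gene."""
            return np.array([g + scale * (ifs_sample()[0] - 0.5)
                             for g in individual])

        child = mutate(np.zeros(5))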

  16. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data

    PubMed Central

    2011-01-01

    Background: No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. Methods: A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Results: Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. Conclusions: Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making. PMID:21214905

  17. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data.

    PubMed

    Dexter, Franklin; Wachtel, Ruth E; Epstein, Richard H

    2011-01-07

    No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making.

  18. Manufacturing of dental pulp cell-based products from human third molars: current strategies and future investigations

    PubMed Central

    Ducret, Maxime; Fabre, Hugo; Degoul, Olivier; Atzeni, Gianluigi; McGuckin, Colin; Forraz, Nico; Alliot-Licht, Brigitte; Mallein-Gerin, Frédéric; Perrier-Groult, Emeline; Farges, Jean-Christophe

    2015-01-01

    In recent years, mesenchymal cell-based products have been developed to improve surgical therapies aimed at repairing human tissues. In this context, the tooth has recently emerged as a valuable source of stem/progenitor cells for regenerating orofacial tissues, given the easy access to pulp tissue and the high differentiation potential of dental pulp mesenchymal cells. International guidelines now recommend the use of standardized procedures for cell isolation, storage and expansion in culture to ensure optimal reproducibility, efficacy and safety when cells are used for clinical application. However, most manufacturing procedures for dental pulp cell-based medicinal products may not be fully satisfactory, since they can alter the cells' biological properties and the quality of derived products. Cell isolation, enrichment and cryopreservation procedures, combined with long-term expansion in culture media containing xenogeneic and allogeneic components, are known to affect cell phenotype, viability, proliferation and differentiation capacities. This article focuses on current manufacturing strategies for dental pulp cell-based medicinal products and proposes a new protocol to improve the efficiency, reproducibility and safety of these strategies. PMID:26300779

  19. [Definition and stabilization of processes II. Clinical Processes in a Urology Department].

    PubMed

    Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Diz, Manuel Ramón; Martín, Carlos; López, María Carmen

    2015-01-01

    New models of clinical management seek a clinical practice based on quality, efficacy and efficiency, avoiding variability and improvisation. In this paper we develop one of the most frequent clinical processes in our specialty, the process based on DRG 311, transurethral procedures without complications. We describe its components: stabilization form, clinical trajectory, cost calculation and, finally, the process flowchart.

  20. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing-value (MV) estimation in microarray data, based on the reuse of estimated data. The method is called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM, by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
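
    The following is a minimal sketch of the iterative idea, assuming plain Euclidean distances, inverse-distance weights and a fixed iteration count (the published method's neighbour selection and stopping details differ):

    ```python
    import numpy as np

    def iknn_impute(X, k=10, n_iter=5):
        """Iterative KNN imputation sketch: start from gene-mean filling, then
        repeatedly re-estimate each missing value from the k nearest genes,
        reusing the values estimated in earlier passes."""
        X = X.astype(float)
        miss = np.isnan(X)
        est = X.copy()
        gene_means = np.nanmean(X, axis=1)
        est[miss] = np.take(gene_means, np.where(miss)[0])      # initial fill
        for _ in range(n_iter):
            for i in np.unique(np.where(miss)[0]):
                d = np.sqrt(((est - est[i]) ** 2).sum(axis=1))  # gene distances
                d[i] = np.inf
                nn = np.argsort(d)[:k]
                w = 1.0 / (d[nn] + 1e-12)                       # inverse-distance weights
                cols = np.where(miss[i])[0]
                est[i, cols] = (w[:, None] * est[nn][:, cols]).sum(0) / w.sum()
        return est

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 8))              # 50 genes x 8 arrays
    X[rng.random(X.shape) < 0.1] = np.nan     # ~10% missing values
    print(np.isnan(iknn_impute(X)).sum())     # 0: all values imputed
    ```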

  1. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configuration are given.
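
    As a rough illustration of the underlying adaptive logic (not the toolbox API), the sketch below runs a grid-based maximum-likelihood update of the threshold only, with slope and lapse fixed and the next stimulus placed at the posterior-mean threshold; the actual UML procedure also updates slope and lapse and selects stimuli among "sweet points":

    ```python
    import numpy as np

    def psych(x, alpha, beta, gamma=0.5, lam=0.02):
        """Logistic psychometric function with guess rate gamma, lapse lam."""
        return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

    alphas = np.linspace(-10, 10, 201)   # candidate thresholds (slope fixed here)
    logpost = np.zeros_like(alphas)      # flat prior over the threshold
    rng = np.random.default_rng(0)
    true_alpha, beta = 2.0, 1.0

    x = 0.0                              # first stimulus level
    for _ in range(100):
        r = rng.random() < psych(x, true_alpha, beta)   # simulated observer
        p = psych(x, alphas, beta)
        logpost += np.log(p if r else 1 - p)            # likelihood update
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        x = float((alphas * post).sum())  # next trial at posterior-mean threshold

    print("estimated threshold:", round(x, 2))
    ```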

  2. IEEE Photovoltaic Specialists Conference, 20th, Las Vegas, NV, Sept. 26-30, 1988, Conference Record. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    Various papers on photovoltaics are presented. The general topics considered include: amorphous materials and cells; amorphous silicon-based solar cells and modules; amorphous silicon-based materials and processes; amorphous materials characterization; amorphous silicon; high-efficiency single crystal solar cells; multijunction and heterojunction cells; high-efficiency III-V cells; modeling and characterization of high-efficiency cells; LIPS flight experience; space mission requirements and technology; advanced space solar cell technology; space environmental effects and modeling; space solar cell and array technology; terrestrial systems and array technology; terrestrial utility and stand-alone applications and testing; terrestrial concentrator and storage technology; terrestrial stand-alone systems applications; terrestrial systems test and evaluation; terrestrial flatplate and concentrator technology; use of polycrystalline materials; polycrystalline II-VI compound solar cells; analysis of and fabrication procedures for compound solar cells.

  3. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic seismic loss estimation is a methodology used as a quantitative and explicit expression of the performance of buildings, using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses and thus hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness was evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions of 34 code-conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 methodology, and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.

  4. Systematic and efficient side chain optimization for molecular docking using a cheapest-path procedure.

    PubMed

    Schumann, Marcel; Armen, Roger S

    2013-05-30

    Molecular docking of small molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assigning a near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement from the developed side chain optimization over rigid-receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared against numerous published benchmarks, the overall performance is better than that of nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
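
    To illustrate the flavor of a cheapest-path rotamer assignment, the sketch below solves the chain-structured special case with Viterbi-style dynamic programming; the residue counts, energies and the chain topology are illustrative assumptions, and the published algorithm operates on a more general interaction graph:

    ```python
    import numpy as np

    def cheapest_rotamer_path(self_E, pair_E):
        """Viterbi-style cheapest path through rotamer choices.
        self_E[i][r]    : self energy of rotamer r at residue i
        pair_E[i][r][s] : interaction energy between rotamer r at residue i
                          and rotamer s at residue i+1 (chain approximation)."""
        n = len(self_E)
        cost = [np.asarray(self_E[0], float)]
        back = []
        for i in range(1, n):
            # total[r, s] = best cost ending in rotamer s at residue i via r
            total = (cost[-1][:, None] + np.asarray(pair_E[i - 1], float)
                     + np.asarray(self_E[i], float)[None, :])
            back.append(total.argmin(axis=0))
            cost.append(total.min(axis=0))
        path = [int(cost[-1].argmin())]
        for bp in reversed(back):          # backtrack the optimal assignment
            path.append(int(bp[path[-1]]))
        return list(reversed(path)), float(cost[-1].min())

    rng = np.random.default_rng(0)
    self_E = [rng.normal(size=4) for _ in range(5)]    # 5 residues, 4 rotamers
    pair_E = [rng.normal(size=(4, 4)) for _ in range(4)]
    print(cheapest_rotamer_path(self_E, pair_E))
    ```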

  5. A twin purification/enrichment procedure based on two versatile solid/liquid extracting agents for efficient uptake of ultra-trace levels of lorazepam and clonazepam from complex bio-matrices.

    PubMed

    Hemmati, Maryam; Rajabi, Maryam; Asghari, Alireza

    2017-11-17

    In this research work, two consecutive dispersive solid/liquid-phase microextractions based on efficient extraction media were developed for the effective and clean pre-concentration of clonazepam and lorazepam from complicated bio-samples. The magnetic nature of the proposed nanoadsorbent made the clean-up step convenient and swift (~5 min), followed by further enrichment via a highly effective and rapid emulsification microextraction process (~4 min) based on a deep eutectic solvent (DES). Finally, instrumental analysis was performed via high-performance liquid chromatography with ultraviolet detection. The solid phase used was a magnetic nanocomposite termed polythiophene-sodium dodecyl benzene sulfonate/iron oxide (PTh-DBSNa/Fe3O4), easily and cost-effectively prepared by co-precipitation followed by in situ sonochemical oxidative polymerization. Characterization techniques (FESEM, XRD and EDX) confirmed the favourable physico-chemical properties of this nanosorbent. The liquid extraction agent, a DES based on biodegradable choline chloride, offered high efficiency, tolerable safety, low cost, and a facile and mild synthesis route. The parameters involved in this hyphenated procedure were evaluated via central composite design (CCD); the best extraction conditions consisted of an initial pH of 7.2, 17 mg of the PTh-DBSNa/Fe3O4 nanocomposite, 20 air-agitation cycles (first step), 245 μL of methanol, 250 μL of DES, 440 μL of THF, and 8 air-agitation cycles (second step). Under the optimal conditions, the studied drugs could be accurately determined over wide linear dynamic ranges (LDRs) of 4.0-3000 ng mL⁻¹ and 2.0-2000 ng mL⁻¹ for clonazepam and lorazepam, respectively, with limits of detection (LODs) ranging from 0.7 to 1.0 ng mL⁻¹. The enrichment factor (EF) and percentage extraction recovery (%ER) values were 75 and 57% for clonazepam and 56 and 42% for lorazepam at a spiked level of 75.0 ng mL⁻¹, with proper repeatability (relative standard deviations (RSDs) below 5.9%, n = 3). These analytical features allow accurate drug analyses at therapeutically low levels, below potentially toxic ranges, implying proper purification/enrichment by the proposed microextraction procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Triazatruxene-Based Hole Transporting Materials for Highly Efficient Perovskite Solar Cells.

    PubMed

    Rakstys, Kasparas; Abate, Antonio; Dar, M Ibrahim; Gao, Peng; Jankauskas, Vygintas; Jacopin, Gwénolé; Kamarauskas, Egidijus; Kazim, Samrana; Ahmad, Shahzada; Grätzel, Michael; Nazeeruddin, Mohammad Khaja

    2015-12-30

    Four center-symmetrical star-shaped hole transporting materials (HTMs) comprising a planar triazatruxene core and electron-rich methoxy-engineered side arms have been synthesized and successfully employed in (FAPbI3)0.85(MAPbBr3)0.15 perovskite solar cells. These HTMs are obtained from relatively cheap starting materials by a facile preparation procedure, without expensive and complicated purification techniques. The developed compounds have suitable highest occupied molecular orbital (HOMO) levels with respect to the valence band level of the perovskite, and time-resolved photoluminescence indicates that hole injection from the valence band of the perovskite into the HOMO of the triazatruxene-based HTMs is more efficient than that of the well-studied spiro-OMeTAD. A remarkable power conversion efficiency of over 18% was achieved using 5,10,15-trihexyl-3,8,13-tris(4-methoxyphenyl)-10,15-dihydro-5H-diindolo[3,2-a:3',2'-c]carbazole (KR131) with the composite perovskite absorber. This result establishes triazatruxene-based compounds as a new class of HTMs for the fabrication of highly efficient perovskite solar cells.

  7. High resolution laser beam induced current images under trichromatic laser radiation: approximation to the solar irradiation.

    PubMed

    Navas, F J; Alcántara, R; Fernández-Lorenzo, C; Martín-Calleja, J

    2010-03-01

    A laser beam induced current (LBIC) map of a photoactive surface is a very useful tool for studying the spatial variability of properties such as photoconversion efficiency or factors connected with carrier recombination. Obtaining high-spatial-resolution LBIC maps involves irradiating the photoactive surface with a photonic beam with Gaussian power distribution and a low dispersion coefficient. Laser emission fulfils these characteristics, but a drawback is that it is highly monochromatic and therefore has a spectral distribution different from that of solar emission. This work presents an instrumental system and procedure to obtain high-spatial-resolution LBIC maps under conditions approximating solar irradiation. The methodology developed consists of a trichromatic irradiation system based on three laser excitation sources with emission in the red, green, and blue zones of the electromagnetic spectrum. The relative irradiation powers are determined by either the solar spectrum distribution or Planck's emission formula, which provides information approximating the behavior of the system under solar irradiation. In turn, an algorithm and a procedure have been developed to form images based on the scans performed by the three lasers, providing information about the photoconversion efficiency of photovoltaic devices under the irradiation conditions used. This system has been checked with three photosensitive devices based on three different technologies: a commercial silicon photodiode, a commercial photoresistor, and a dye-sensitized solar cell. These devices make it possible to verify that the superficial quantum efficiency varies across areas depending on the excitation wavelength, while global incident photon-to-current efficiency values approximating those obtained under sunlight irradiation could be measured.
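
    A small sketch of the power-weighting step, assuming three typical laser lines (633, 532 and 473 nm, which are not stated in the abstract) and the solar effective temperature of 5778 K, with relative powers taken from Planck's emission formula:

    ```python
    import numpy as np

    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

    def planck(lam, T):
        """Spectral radiance B(lambda, T) in W sr^-1 m^-3."""
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

    # assumed red/green/blue laser lines and the solar effective temperature
    lams = np.array([633e-9, 532e-9, 473e-9])
    B = planck(lams, 5778.0)
    rel_power = B / B.max()          # relative irradiation powers
    for lam, p in zip(lams, rel_power):
        print(f"{lam * 1e9:.0f} nm -> relative power {p:.3f}")
    ```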

  8. Air Traffic Management Technology Demonstration-1: Research and Procedural Testing of Routes

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Kibler, Jennifer L.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The ATD-1 integrated system consists of the Traffic Management Advisor with Terminal Metering which generates precise time-based schedules to the runway and merge points; Controller Managed Spacing decision support tools which provide controllers with speed advisories and other information needed to meet the schedule; and Flight deck-based Interval Management avionics and procedures which allow flight crews to adjust their speed to achieve precise relative spacing. Initial studies identified air-ground challenges related to the integration of these three scheduling and spacing technologies, and NASA's airborne spacing algorithm was modified to address some of these challenges. The Research and Procedural Testing of Routes human-in-the-loop experiment was then conducted to assess the performance of the new spacing algorithm. The results of this experiment indicate that the algorithm performed as designed, and the pilot participants found the airborne spacing concept, air-ground procedures, and crew interface to be acceptable. However, the researchers concluded that the data revealed issues with the frequency of speed changes and speed reversals.

  9. How [NOT] to Measure a Solar Cell to Get the Highest Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, Keith

    The multibillion-dollar photovoltaic (PV) industry sells products by the watt; the calibration labs measure this parameter at the cell and module level with the lowest possible uncertainty of 1-2 percent. The methods and procedures to achieve a measured 50 percent efficiency on a thin-film solar cell are discussed. This talk will describe methods that ignore procedures that increase the uncertainty. Your questions will be answered concerning 'Everything You Always Wanted to Know about Efficiency Enhancements But Were Afraid to Ask.' The talk will cover a step-by-step procedure, using examples found in the literature or encountered in customer samples by the National Renewable Energy Laboratory's (NREL's) PV Performance Characterization Group, on how to artificially enhance the efficiency. The procedures will describe methods that have been used to enhance the current, voltage, and fill factor.

  10. Design Guidance for Computer-Based Procedures for Field Workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which can take up valuable time when operators must be responding to the situation, and can potentially lead operators down an incorrect response path. Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying on other sources of additional information to ensure a functional and accurate understanding of the current plant status (Converse, 1995; Fink, Killian, Hanes, and Naser, 2009; Le Blanc, Oxstrand, and Waicosky, 2012). This report provides design guidance to be used when designing the human-system interaction and the graphical user interface for a CBP system. The guidance is based on human factors research related to the design and usability of CBPs conducted by Idaho National Laboratory from 2012 to 2016.

  11. Nevada Photo-Based Inventory Pilot (NPIP) photo sampling procedures

    Treesearch

    Tracey S. Frescino; Gretchen G. Moisen; Kevin A. Megown; Val J. Nelson; Elizabeth A. Freeman; Paul L. Patterson; Mark Finco; James Menlove

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the U.S. Forest Service monitors status and trends in forested ecoregions nationwide. The complex nature of this broad-scale, strategic-level inventory demands constant evolution and evaluation of methods to get the best information possible while continuously increasing efficiency. In 2004, the "Nevada Photo-...

  12. Resource Allocation Procedure at Queensland University: A Dynamic Modelling Project.

    ERIC Educational Resources Information Center

    Galbraith, Peter L.; Carss, Brian W.

    A structural reorganization of the University of Queensland, Australia, was undertaken to promote efficient resource management, and a resource allocation model was developed to aid in policy evaluation and planning. The operation of the restructured system was based on creating five resource groups to manage the distribution of academic resources…

  13. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
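
    The sketch below illustrates the general pattern on a toy problem, assuming a scalar design variable, one Gaussian noise variable, a mean-plus-3-sigma robustness measure, a quadratic metamodel and minimizer-infill; the paper's actual metamodels, infill criterion and FE-based process model are more elaborate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def process(x, z):
        """Toy forming response: x is the design variable, z a noise variable."""
        return (x - 1.5) ** 2 + 0.4 * np.sin(3 * x) + z * (0.2 + 0.3 * x)

    def robust_obj(x, n_mc=500):
        z = rng.normal(0.0, 1.0, n_mc)       # Monte Carlo over the noise variable
        y = process(x, z)
        return y.mean() + 3.0 * y.std()      # mean + 3 sigma robustness measure

    # initial design of experiments, then sequential metamodel improvement:
    # refit a cheap surrogate and add an infill point at its predicted optimum
    X = list(np.linspace(0.0, 3.0, 5))
    Y = [robust_obj(x) for x in X]
    for _ in range(6):
        a, b, c = np.polyfit(X, Y, 2)        # quadratic metamodel of the objective
        x_new = float(np.clip(-b / (2 * a), 0.0, 3.0))
        X.append(x_new)
        Y.append(robust_obj(x_new))

    print("approximate robust optimum x =", X[int(np.argmin(Y))])
    ```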

  14. Efficient Posterior Probability Mapping Using Savage-Dickey Ratios

    PubMed Central

    Penny, William D.; Ridgway, Gerard R.

    2013-01-01

    Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
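
    The Savage-Dickey identity itself is easy to demonstrate in a conjugate-normal toy model: the Bayes factor for theta = 0 is the ratio of posterior to prior density at zero. The sketch below assumes a N(0, tau^2) prior and known noise variance; the paper's Taylor-series approximations to voxel-wise posterior covariances are omitted:

    ```python
    import numpy as np
    from scipy.stats import norm

    # H0: theta = 0  vs  H1: theta ~ N(0, tau^2); data y_i ~ N(theta, sigma^2)
    tau, sigma = 1.0, 1.0
    rng = np.random.default_rng(0)
    y = rng.normal(0.3, sigma, size=40)
    n, ybar = len(y), y.mean()

    # posterior for theta under H1 (standard conjugate update)
    post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
    post_mean = post_var * (n * ybar / sigma**2)

    # Savage-Dickey: BF_01 = p(theta=0 | y, H1) / p(theta=0 | H1)
    bf01 = norm.pdf(0.0, post_mean, np.sqrt(post_var)) / norm.pdf(0.0, 0.0, tau)
    print("BF01 =", bf01)   # >1 favours the null, <1 favours the alternative
    ```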

  15. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    NASA Astrophysics Data System (ADS)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure is as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport, which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry is not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh can be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high-aspect-ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.

  16. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospectives for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  17. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospectives for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  18. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospectives for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  19. Metal fractionation in olive oil and urban sewage sludges using the three-stage BCR sequential extraction method and microwave single extractions.

    PubMed

    Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Falqué López, E

    2001-08-01

    The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges, where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.

  20. Preliminary Evaluations of Polymer-based Lithium Battery Electrolytes Under Development for the Polymer Electrolyte Rechargeable Systems Program

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Bennett, William R.

    2003-01-01

    A component screening facility has been established at The NASA Glenn Research Center (GRC) to evaluate candidate materials for next generation, lithium-based, polymer electrolyte batteries for aerospace applications. Procedures have been implemented to provide standardized measurements of critical electrolyte properties. These include ionic conductivity, electronic resistivity, electrochemical stability window, cation transference number, salt diffusion coefficient and lithium plating efficiency. Preliminary results for poly(ethylene oxide)-based polymer electrolyte and commercial liquid electrolyte are presented.

  1. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
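
    A 1-D toy sketch of the key algorithmic change follows: the same Thirion-style demons force and Gaussian regularisation, with the additive accumulation of displacement fields replaced by a composition (step counts, smoothing widths and the test signals are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def warp(img, s):
        """Warp a 1-D image by displacement field s: img(x + s(x))."""
        x = np.arange(img.size, dtype=float)
        return np.interp(x + s, x, img)

    def demons_1d(fixed, moving, n_iter=200, sigma=2.0, compose=True):
        s = np.zeros_like(fixed)
        gF = np.gradient(fixed)
        for _ in range(n_iter):
            diff = warp(moving, s) - fixed
            u = -diff * gF / (gF ** 2 + diff ** 2 + 1e-12)  # demons force
            u = gaussian_filter1d(u, sigma)                 # fluid-like smoothing
            if compose:
                s = u + warp(s, u)   # composition: s <- s o (id + u)
            else:
                s = s + u            # classic additive update
            s = gaussian_filter1d(s, sigma)                 # diffusion-like smoothing
        return s

    x = np.linspace(0.0, 1.0, 200)
    fixed = np.exp(-((x - 0.5) / 0.08) ** 2)
    moving = np.exp(-((x - 0.6) / 0.08) ** 2)
    s = demons_1d(fixed, moving)
    print("mean residual:", float(np.abs(warp(moving, s) - fixed).mean()))
    ```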

  2. A Two-Stage Procedure Toward the Efficient Implementation of PANS and Other Hybrid Turbulence Models

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Girimaji, Sharath S.

    2004-01-01

    The main objective of this article is to introduce and to show the implementation of a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for Partial Averaged Navier-Stokes (PANS) and other hybrid models. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The first step is to solve the unsteady or steady Reynolds Averaged Navier-Stokes (URANS/RANS) equations. From this preprocessing step, the turbulence length-scale field is obtained. This is then used to compute the characteristic length-scale ratio between the turbulence scale and the grid spacing. Based on this ratio, we can assess the finest scale resolution that a given grid for a given flow can support. Along with other additional criteria, we are able to analytically identify the appropriate hybrid solver resolution for different regions of the flow. This procedure removes the grid dependency issue that affects the results produced by different hybrid procedures in solving unsteady flows. The formulation, implementation methodology, and validation example are presented. We implemented this capability in a production Computational Fluid Dynamics (CFD) code, PAB3D, for the simulation of unsteady flows.
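
    A minimal sketch of the preprocessing step follows, assuming k and epsilon fields from a precursor RANS solution and a commonly cited estimate f_k = min(1, c (Delta/Lambda)^(2/3)) with Lambda = k^(3/2)/epsilon; the constant c ~ 3 and the additional selection criteria mentioned above are treated as assumptions here:

    ```python
    import numpy as np

    def fk_field(k, eps, delta, c=3.0):
        """Estimate a PANS scale-resolution parameter f_k per cell from a
        precursor RANS solution: Lambda = k^(3/2)/eps is the turbulence
        length scale, delta the local grid spacing (constant c assumed)."""
        Lam = k ** 1.5 / eps
        return np.minimum(1.0, c * (delta / Lam) ** (2.0 / 3.0))

    # toy RANS fields on a small grid
    rng = np.random.default_rng(0)
    k = rng.uniform(0.5, 2.0, (4, 4))        # turbulent kinetic energy
    eps = rng.uniform(0.5, 2.0, (4, 4))      # dissipation rate
    delta = np.full((4, 4), 0.05)            # grid spacing
    print(fk_field(k, eps, delta))           # 1.0 where the grid supports RANS only
    ```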

  3. Recent developments in nickel electrode analysis

    NASA Technical Reports Server (NTRS)

    Whiteley, Richard V.; Daman, M. E.; Kaiser, E. Q.

    1991-01-01

    Three aspects of nickel electrode analysis for Nickel-Hydrogen and Nickel-Cadmium battery cell applications are addressed: (1) the determination of active material; (2) charged state nickel (as NiOOH + CoOOH); and (3) potassium ion content in the electrode. Four deloading procedures are compared for completeness of active material removal, and deloading conditions for efficient active material analyses are established. Two methods for charged state nickel analysis are compared: the current NASA procedure and a new procedure based on the oxidation of sodium oxalate by the charged material. Finally, a method for determining potassium content in an electrode sample by flame photometry is presented along with analytical results illustrating differences in potassium levels from vendor to vendor and the effects of stress testing on potassium content in the electrode. The relevance of these analytical procedures to electrode performance is reviewed.

  4. Precession-nutation procedures consistent with IAU 2006 resolutions

    NASA Astrophysics Data System (ADS)

    Wallace, P. T.; Capitaine, N.

    2006-12-01

    Context: The 2006 IAU General Assembly has adopted the P03 model of Capitaine et al. (2003a), recommended by the WG on precession and the ecliptic (Hilton et al. 2006), to replace the IAU 2000 model, which comprised the Lieske et al. (1977) model with adjusted rates. Practical implementations of this new "IAU 2006" model are therefore required, involving choices of procedures and algorithms. Aims: The purpose of this paper is to recommend IAU 2006 based precession-nutation computing procedures, suitable for different classes of application and achieving high standards of consistency. Methods: We discuss IAU 2006 based procedures and algorithms for generating the rotation matrices that transform celestial to terrestrial coordinates, taking into account frame bias (B), P03 precession (P), P03-adjusted IAU 2000A nutation (N) and Earth rotation. The NPB portion can refer either to the equinox or to the celestial intermediate origin (CIO), requiring either the Greenwich sidereal time (GST) or the Earth rotation angle (ERA) as the measure of Earth rotation. Where GST is used, it is derived from ERA and the equation of the origins (EO) rather than through an explicit formula as in the past, and the EO itself is derived from the CIO locator. Results: We provide precession-nutation procedures for two different classes of full-accuracy application, namely (i) the construction of algorithm collections such as the Standards Of Fundamental Astronomy (SOFA) library and (ii) IERS Conventions, and in addition some concise procedures for applications where the highest accuracy is not a requirement. The appendix contains a fully worked numerical example, to aid implementors and to illustrate the consistency of the two full-accuracy procedures which, for the test date, agree to better than 1 μas. Conclusions: The paper recommends, for case (i), procedures based on angles to represent the PB and N components and, for case (ii), procedures based on series for the CIP X,Y. The two methods are of similar efficiency, and both support equinox based as well as CIO based applications.
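
    For illustration, the sketch below evaluates the ERA from the standard IERS expression and forms GST as ERA minus the equation of the origins, which is how the paper recommends deriving GST; the EO value itself (obtained from the CIO locator) is taken as an external input here:

    ```python
    import math

    def era_rad(jd_ut1):
        """Earth rotation angle (IERS Conventions expression),
        ERA = 2*pi*(0.7790572732640 + 1.00273781191135448 * Tu),
        with Tu = JD(UT1) - 2451545.0; split to preserve precision."""
        t_u = jd_ut1 - 2451545.0
        theta = 2.0 * math.pi * ((t_u % 1.0) + 0.7790572732640
                                 + 0.00273781191135448 * t_u)
        return theta % (2.0 * math.pi)

    def gst_rad(jd_ut1, eo_rad):
        """GST derived from ERA and the equation of the origins (EO);
        the EO would itself come from the CIO locator and is an input."""
        return (era_rad(jd_ut1) - eo_rad) % (2.0 * math.pi)

    print(math.degrees(era_rad(2451545.0)))   # ERA at the J2000.0 epoch, degrees
    ```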

  5. SU-F-T-47: MRI T2 Exclusive Based Planning Using the Endocavitary/interstitial Gynecological Benidorm Applicator: A Proposed TPS Library and Preplan Efficient Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richart, J; Otal, A; Rodriguez, S

    Purpose: ABS and GEC-ESTRO have recommended T2 MRI for image-guided brachytherapy. Recently, a new applicator (the Benidorm Template, TB) has been developed in our Department (Rodriguez et al 2015). The TB is fully MRI compatible because of its titanium needles, and it allows the use of an intrauterine tandem. TPS applicator libraries are not currently available for non-rigid applicators with an interstitial component such as the TB. The purpose of this work is to present the development of a library for the TB, together with its use in a pre-planning technique. Both new capabilities allow a very efficient, exclusively T2 MRI based clinical implementation of the TB. Methods: The developed library has been implemented in the Oncentra Brachytherapy TPS, version 4.3.0 (Elekta), and is now being implemented in the Sagiplan v2.0 TPS (Eckert&Ziegler BEBIG). To model the TB, the free and open-source software FreeCAD and MeshLab have been used. The reconstruction process is based on three inserted A-vitamin pellets together with the data provided by the free length. The implemented pre-planning procedure is as follows: 1) an MRI T2 acquisition is performed with the template in place, with only the vaginal cylinder (no uterine tube or needles); 2) the CTV is drawn and the required needles are selected using a Java-based application we developed; and 3) a post-implant MRI T2 is performed. Results: This library procedure has so far been successfully applied in 25 patients. In this work the use of the developed library is illustrated with clinical examples. The pre-planning procedure has so far been applied in 6 patients, with significant advantages: needle depth estimation, a priori optimization of needle positions and number, time saving, etc. Conclusion: The TB library and pre-planning techniques are feasible and very efficient, and their use is illustrated in this work.

  6. Quality control by HyperSpectral Imaging (HSI) in solid waste recycling: logics, algorithms and procedures

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Serranti, Silvia

    2014-03-01

    In the secondary raw materials and recycling sectors, product quality increasingly represents the key issue to pursue in order to be competitive in an ever more demanding market, where quality standards and product certification play a pre-eminent role. These goals assume particular importance when recycling actions are applied. Recovered products, resulting from the processing of waste materials and/or dismissed products, are in fact always viewed with a certain suspicion. An adequate response of the industry to the market can only be given through the utilization of equipment and procedures ensuring pure, high-quality production and efficient, cost-effective operation. All these goals can be reached by adopting not only more efficient equipment and layouts, but also new processing logics able to realize full control of the handled material flow streams while fulfilling: i) easy management of the procedures, ii) efficient use of energy, iii) the definition and set-up of reliable and robust procedures, iv) the possibility to implement network connectivity capabilities for remote monitoring and control of the processes, and v) full data storage, analysis and retrieval. Furthermore, ongoing legislation and regulation require the implementation of recycling infrastructures characterised by high resource efficiency and low environmental impact, both aspects being strongly linked to the original characteristics of the waste materials and/or dismissed products. For these reasons, an optimal recycling infrastructure design primarily requires full knowledge of the characteristics of the input waste. The above requires the introduction of an important new concept in solid waste recycling, recycling-oriented characterization: the set of actions addressed to strategically determine selected attributes in order to get goal-oriented data on waste for the development, implementation or improvement of recycling strategies. The problems arising when suitable HyperSpectral Imaging (HSI) based procedures have to be developed and implemented for solid waste characterization, in order to define time-efficient compression and interpretation techniques, are thus analyzed and discussed in the following. Particular attention is also addressed to the definition of an integrated hardware and software (HW and SW) platform able to perform a non-intrusive, non-contact and real-time analysis, embedding a core of analytical logics and procedures usable both at laboratory and industrial scale. Several case studies, referring to waste plastics products, are presented and discussed.

  7. Making a mixed-model line more efficient and flexible by introducing a bypass line

    NASA Astrophysics Data System (ADS)

    Matsuura, Sho; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.

  8. Lightning Impacts on Airports - Challenges of Balancing Safety & Efficiency

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Deierling, Wiebke; Nelson, Eric; Stone, Ken

    2013-04-01

    Thunderstorms and lightning pose a safety risk to personnel working outdoors, such as people maintaining airport grounds (e.g., mowing grass or repairing runway lighting) or servicing aircraft on ramps (handling baggage, food service, refueling, tugging and guiding aircraft from/to gates, etc.). Since lightning strikes can cause serious injuries or death, it is important to provide timely alerts to airport personnel so that they can get to safety when lightning is imminent. This presentation discusses the challenges and uncertainties involved in using lightning information and stakeholder procedures to ensure safety of outdoor personnel while keeping ramp operations as efficient as possible considering thunderstorm impacts. The findings presented are based on extensive observations of airline operators under thunderstorm impacts. These observations reveal a complex picture with substantial uncertainties related to the (1) source of lightning information (e.g., sensor type, network, data processing) used to base ramp closure decisions on, (2) uncertainties involved in the safety procedures employed by various stakeholders across the aviation industry (yielding notably different rules being applied by multiple airlines even at a single airport), and (3) human factors issues related to the use of decision support tools and the implementation of safety procedures. This research is supported by the United States Federal Aviation Administration (FAA). The views expressed are those of the authors and do not necessarily represent the official policy or position of the FAA.

  9. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  10. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
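
    The Kronecker-structure trick at the heart of such fits is easy to demonstrate on a two-dimensional toy grid: for a separable design matrix A kron B, the least-squares coefficients follow from two small pseudo-inverses rather than one huge one. The basis sizes and data below are illustrative assumptions, not the potfit or program code itself:

    ```python
    import numpy as np

    # Separable linear least squares: min || Z - B C A^T ||_F is equivalent to
    # vec(Z) = (A kron B) vec(C), but never needs the full Kronecker product.
    rng = np.random.default_rng(0)
    m, n, p, q = 60, 50, 8, 7
    B = rng.normal(size=(m, p))      # 1-D fitting basis, first coordinate
    A = rng.normal(size=(n, q))      # 1-D fitting basis, second coordinate
    C_true = rng.normal(size=(p, q))
    Z = B @ C_true @ A.T + 0.01 * rng.normal(size=(m, n))   # gridded PES data

    # Kronecker-structured solve: two small pseudo-inverses
    C = np.linalg.pinv(B) @ Z @ np.linalg.pinv(A).T

    # check against the naive, memory-hungry formulation
    C_naive, *_ = np.linalg.lstsq(np.kron(A, B), Z.reshape(-1, order="F"),
                                  rcond=None)
    print(np.allclose(C, C_naive.reshape(p, q, order="F"), atol=1e-8))
    ```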

  11. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for a NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure. Entropy is defined as the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and the straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
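
    A compact sketch of the approach follows: phase the complex spectrum with zero- and first-order terms, take the Shannon entropy of the normalised derivative of the real part as the objective, and minimise with a derivative-free optimiser. The penalty weight for negative intensity and the synthetic Lorentzian test spectrum are assumptions, not the published ACME settings:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def phase_entropy(params, spec, gamma=1e-4):
        """Shannon entropy of the normalised first derivative of the real
        part after zero-order (phi0) and first-order (phi1) phasing, plus a
        penalty on negative intensity (penalty form/weight assumed here)."""
        phi0, phi1 = params
        n = spec.size
        real = (spec * np.exp(1j * (phi0 + phi1 * np.arange(n) / n))).real
        d = np.abs(np.diff(real))
        p = d / (d.sum() + 1e-15)
        p = p[p > 0]
        neg = real[real < 0]
        return -(p * np.log(p)).sum() + gamma * float((neg ** 2).sum())

    # synthetic spectrum: two complex Lorentzians with a known phase error
    n = 2048
    x = np.arange(n)
    lorentz = lambda x0, w, a=1.0: a / (1.0 + 1j * (x - x0) / w)
    spec = (lorentz(600, 8.0) + lorentz(1400, 12.0, 0.6)) \
        * np.exp(-1j * (0.7 + 1.9 * x / n))

    # coarse grid start, then simplex refinement (the surface is multimodal)
    grid = [(a, b) for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)
                   for b in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
    x0 = min(grid, key=lambda g: phase_entropy(g, spec))
    res = minimize(phase_entropy, x0, args=(spec,), method="Nelder-Mead")
    print("recovered (phi0, phi1):", res.x)  # near (0.7, 1.9) modulo 2*pi
    ```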

  12. Knowledge discovery from data and Monte-Carlo DEA to evaluate technical efficiency of mental health care in small health areas

    PubMed Central

    García-Alonso, Carlos; Pérez-Naranjo, Leonor

    2009-01-01

    Introduction: Knowledge management, based on information transfer between experts and analysts, is crucial for the validity and usability of data envelopment analysis (DEA). Aim: To design and develop a methodology: i) to assess the technical efficiency of small health areas (SHA) in an uncertainty environment, and ii) to transfer information between experts and operational models, in both directions, to improve expert knowledge. Method: A procedure derived from knowledge discovery from data (KDD) is used to select, interpret and weigh DEA inputs and outputs. Based on the KDD results, an expert-driven Monte-Carlo DEA model has been designed to assess the technical efficiency of SHA in Andalusia. Results: In terms of probability, SHA 29 is the most efficient, whereas SHA 22 is very inefficient. 73% of the analysed SHA have a probability of being efficient (Pe) >0.9 and 18% <0.5. Conclusions: Expert knowledge is necessary to design and validate any operational model. KDD techniques make the transfer of information from experts to any operational model easy, and results obtained from the latter improve expert knowledge.
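
    A minimal sketch of the Monte-Carlo DEA core follows, assuming an input-oriented CCR formulation solved as a linear program and a simple 5% multiplicative noise model on inputs and outputs (the paper's KDD-driven variable selection and weighting are not reproduced):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(X, Y, o):
        """Input-oriented CCR DEA score for unit o.
        X: (n_units, n_inputs), Y: (n_units, n_outputs).
        Decision variables: [theta, lambda_1..lambda_n]; minimise theta."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.c_[-X[o][:, None], X.T]      # sum_j lam_j x_ij <= theta x_io
        A_out = np.c_[np.zeros((s, 1)), -Y.T]  # sum_j lam_j y_rj >= y_ro
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.fun

    # Monte-Carlo DEA: probability of efficiency under data uncertainty
    rng = np.random.default_rng(0)
    X0 = rng.uniform(1, 5, (8, 2))    # 8 units (e.g. SHA), 2 inputs
    Y0 = rng.uniform(1, 5, (8, 2))    # 2 outputs
    runs, P_eff = 100, np.zeros(8)
    for _ in range(runs):
        Xr = X0 * rng.normal(1.0, 0.05, X0.shape)   # assumed 5% noise model
        Yr = Y0 * rng.normal(1.0, 0.05, Y0.shape)
        for o in range(8):
            P_eff[o] += dea_efficiency(Xr, Yr, o) >= 0.999
    print("P(efficient):", P_eff / runs)
    ```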

  13. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers, explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  14. Rodent model of activity-based anorexia.

    PubMed

    Carrera, Olaia; Fraga, Ángela; Pellón, Ricardo; Gutiérrez, Emilio

    2014-04-10

    Activity-based anorexia (ABA) consists of a procedure that involves the simultaneous exposure of animals to a restricted feeding schedule, while free access is allowed to an activity wheel. Under these conditions, animals show a progressive increase in wheel running, a reduced efficiency in food intake to compensate for their increased activity, and a severe progression of weight loss. Due to the parallelism with the clinical manifestations of anorexia nervosa including increased activity, reduced food intake and severe weight loss, the ABA procedure has been proposed as the best analog of human anorexia nervosa (AN). Thus, ABA research could both allow a better understanding of the mechanisms underlying AN and generate useful leads for treatment development in AN. Copyright © 2014 John Wiley & Sons, Inc.

  15. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configurations are given. PMID:24671826
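
    The core of such an adaptive track can be sketched briefly: a grid over threshold, slope and lapse rate is updated with the log-likelihood of each response, and the next stimulus is placed at the current maximum-likelihood threshold. That placement rule is a simplification of the UML sweet-point logic, and nothing below is the toolbox's code; the simulated observer's parameters are invented.

    ```python
    # Stripped-down ML adaptive track for a logistic psychometric function
    # (illustrative; not the UML Toolbox implementation).
    import numpy as np

    rng = np.random.default_rng(9)
    alphas = np.linspace(-10, 10, 61)        # threshold grid
    betas = np.geomspace(0.1, 10, 41)        # slope grid
    lams = np.linspace(0.0, 0.1, 11)         # lapse-rate grid
    A, B, L = np.meshgrid(alphas, betas, lams, indexing="ij")
    loglik = np.zeros(A.shape)

    def p_correct(x, a, b, lam, gamma=0.5):
        return gamma + (1 - gamma - lam) / (1 + np.exp(-b * (x - a)))

    x = 0.0
    for trial in range(100):
        r = rng.random() < p_correct(x, a=1.0, b=2.0, lam=0.02)   # simulated observer
        pr = np.clip(p_correct(x, A, B, L), 1e-6, 1 - 1e-6)
        loglik += np.log(pr) if r else np.log(1 - pr)
        i, j, k = np.unravel_index(np.argmax(loglik), loglik.shape)
        x = alphas[i]                          # place next trial at the ML threshold

    print(f"ML estimates: alpha={alphas[i]:.2f} beta={betas[j]:.2f} lapse={lams[k]:.3f}")
    ```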

  16. Energy efficiency technologies in cement and steel industry

    NASA Astrophysics Data System (ADS)

    Zanoli, Silvia Maria; Cocchioni, Francesco; Pepe, Crescenzo

    2018-02-01

    In this paper, Advanced Process Control strategies aimed at achieving and improving energy efficiency in the cement and steel industries are proposed. A flexible and smart control structure constituted by several functional modules and blocks has been developed. The designed control strategy is based on Model Predictive Control techniques formulated on linear models. Two industrial control solutions have been developed, oriented to energy efficiency and process control improvement in cement industry clinker rotary kilns (clinker production phase) and in steel industry billet reheating furnaces. Tailored customization procedures for the design of ad hoc control systems have been executed, based on the specific needs and specifications of the analysed processes. The installation of the developed controllers on cement and steel plants produced significant benefits in terms of process control, allowing operation closer to the imposed operating limits. With respect to the previous control systems, based on local controllers and/or manual operation by plant operators, more profitable configurations of the crucial process variables have been achieved.
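
    The control kernel such systems build on can be miniaturised as follows: a linear model of one furnace zone, a quadratic finite-horizon cost, and an unconstrained receding-horizon solution. Real kiln and furnace controllers are multivariable and constrained; the model, weights and setpoint below are invented.

    ```python
    # Minimal linear MPC sketch on a first-order plant (illustrative only).
    import numpy as np

    a, b = 0.9, 0.1             # x[k+1] = a*x[k] + b*u[k]  (zone temperature model)
    H, q, r = 15, 1.0, 0.001    # horizon, tracking weight, input weight

    # Prediction matrices over the horizon: x = F*x0 + G*u.
    F = np.array([a ** (k + 1) for k in range(H)])
    G = np.zeros((H, H))
    for i in range(H):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b

    def mpc_step(x0, target):
        # minimise q*||x - target||^2 + r*||u||^2 via the normal equations
        A = q * G.T @ G + r * np.eye(H)
        rhs = q * G.T @ (target - F * x0)
        return np.linalg.solve(A, rhs)[0]     # apply only the first move

    x, setpoint = 20.0, 850.0
    for k in range(50):
        u = mpc_step(x, setpoint)
        x = a * x + b * u                     # plant (here identical to the model)
    print(f"temperature after 50 steps: {x:.1f}")   # settles near the setpoint
    ```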

  17. Precise non-steady-state characterization of solid active materials with no preliminary mechanistic assumptions

    DOE PAGES

    Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...

    2017-04-25

    This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of the following two major steps: 1) three "Laplace reactivities" are first determined based on the moments of the exit flow pulse response data; 2) depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as a function of reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.
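
    Step 1's raw ingredients are time moments of the exit-flow pulses. The sketch below computes moments M_n = ∫ tⁿ F(t) dt and a conversion estimate from zeroth moments; the mapping from moments to the three "Laplace reactivities" follows the paper and is not reproduced here. The pulse shapes are synthetic.

    ```python
    # Moments of TAP exit-flow pulse responses (synthetic pulses).
    import numpy as np

    t = np.linspace(1e-4, 5.0, 2000)
    F_inert = t * np.exp(-3.0 * t)            # synthetic inert-gas pulse
    F_react = 0.6 * t * np.exp(-3.5 * t)      # synthetic reactant pulse

    def moment(F, t, n):
        y = t ** n * F                        # trapezoidal integration of t^n F(t)
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

    M0_inert, M0_react = moment(F_inert, t, 0), moment(F_react, t, 0)
    conversion = 1.0 - M0_react / M0_inert
    mean_residence = moment(F_react, t, 1) / M0_react
    print(f"conversion ~ {conversion:.3f}, mean residence time ~ {mean_residence:.3f} s")
    ```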

  18. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
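
    The method's core quantity is easy to reproduce: the sketch below forms the efficiency-weighted Cq values, log10(E)·Cq, and compares two groups in the log scale with an unpaired test. The data are made up, and the full method's reference-gene handling and blocking are omitted.

    ```python
    # Efficiency-weighted Cq analysis in the log scale (illustrative data).
    import numpy as np
    from scipy import stats

    E_ctrl = np.array([1.95, 1.92, 1.97, 1.94]); Cq_ctrl = np.array([21.3, 21.8, 21.1, 21.5])
    E_trt  = np.array([1.93, 1.96, 1.91, 1.95]); Cq_trt  = np.array([24.0, 23.6, 24.3, 23.9])

    w_ctrl = np.log10(E_ctrl) * Cq_ctrl        # efficiency-weighted Cq: -log10 of the
    w_trt  = np.log10(E_trt) * Cq_trt          # initial template, up to a constant

    t, p = stats.ttest_ind(w_ctrl, w_trt)      # unpaired comparison in the log scale
    log10_ratio = -(np.mean(w_trt) - np.mean(w_ctrl))   # treated relative to control
    print(f"log10 ratio = {log10_ratio:.2f}, fold change = {10 ** log10_ratio:.2f}, p = {p:.3g}")
    ```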

  19. A design procedure for a tension-wire stiffened truss-column

    NASA Technical Reports Server (NTRS)

    Greene, W. H.

    1980-01-01

    A deployable, tension wire stiffened, truss column configuration was considered for space structure applications. An analytical procedure, developed for design of the truss column and exercised in numerical studies, was based on equivalent beam stiffness coefficients in the classical analysis for an initially imperfect beam column. Failure constraints were formulated to be used in a combined weight/strength and nonlinear mathematical programming automated design procedure to determine the minimum mass column for a particular combination of design load and length. Numerical studies gave the mass characteristics of the truss column for broad ranges of load and length. Comparisons of the truss column with a baseline tubular column used a special structural efficiency parameter for this class of columns.

  20. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. This strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme might be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
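
    The pivoting idea can be imitated naively: at each step, add the column whose inclusion most reduces the least-squares residual. The paper realises this inside a single QR factorization with efficient updates; the sketch below refits every candidate from scratch for clarity, on invented data.

    ```python
    # Greedy residual-driven column selection (naive stand-in for MRQR pivoting).
    import numpy as np

    def residual_norm(A_sub, b):
        coef, *_ = np.linalg.lstsq(A_sub, b, rcond=None)
        return np.linalg.norm(b - A_sub @ coef)

    def greedy_subset(A, b, k):
        selected, remaining = [], list(range(A.shape[1]))
        for _ in range(k):
            best = min(remaining, key=lambda j: residual_norm(A[:, selected + [j]], b))
            selected.append(best); remaining.remove(best)
        return selected

    rng = np.random.default_rng(2)
    A = rng.normal(size=(50, 10))
    b = A[:, [1, 4, 7]] @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=50)
    print("chosen columns:", greedy_subset(A, b, 3))   # expected to recover 1, 4, 7
    ```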

  1. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
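
    The computational trick can be shown in miniature: an integral of the form P(Z1 <= b1, ..., Zd <= bd) over correlated Gaussian terms is a single multivariate normal CDF, which scipy evaluates directly. The covariance below is illustrative, not the paper's two-part model.

    ```python
    # A high-dimensional Gaussian integral as one multivariate normal CDF call.
    import numpy as np
    from scipy.stats import multivariate_normal

    d, rho = 8, 0.4
    cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # exchangeable correlation
    b = np.full(d, 0.5)

    # One call replaces a d-dimensional numerical integration.
    prob = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(b)
    print(f"P(Z <= b) = {prob:.4f}")
    ```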

  2. Rapid one-step recombinational cloning

    PubMed Central

    Fu, Changlin; Wehr, Daniel R.; Edwards, Janice; Hauge, Brian

    2008-01-01

    As an increasing number of genes and open reading frames of unknown function are discovered, expression of the encoded proteins is critical toward establishing function. Accordingly, there is an increased need for highly efficient, high-fidelity methods for directional cloning. Among the available methods, site-specific recombination-based cloning techniques, which eliminate the use of restriction endonucleases and ligase, have been widely used for high-throughput (HTP) procedures. We have developed a recombination cloning method, which uses truncated recombination sites to clone PCR products directly into destination/expression vectors, thereby bypassing the requirement for first producing an entry clone. Cloning efficiencies in excess of 80% are obtained providing a highly efficient method for directional HTP cloning. PMID:18424799

  3. Detection of SEA-type α-thalassemia in embryo biopsies by digital PCR.

    PubMed

    Lee, Ta-Hsien; Hsu, Ya-Chiung; Chang, Chia Lin

    2017-08-01

    Accurate and efficient pre-implantation genetic diagnosis (PGD) based on the analysis of single or oligo-cells is needed for timely identification of embryos that are affected by deleterious genetic traits in in vitro fertilization (IVF) clinics. Polymerase chain reaction (PCR) is the backbone of modern genetic diagnoses, and a spectrum of PCR-based techniques have been used to detect various thalassemia mutations in prenatal diagnosis (PND) and PGD. Among thalassemias, SEA-type α-thalassemia is the most common variety found in Asia, and can lead to Bart's hydrops fetalis and serious maternal complications. To formulate an efficient digital PCR for clinical diagnosis of SEA-type α-thalassemia in cultured embryos, we conducted a pilot study to detect the α-globin and SEA-type deletion alleles in blastomere biopsies with a highly sensitive microfluidics-based digital PCR method. Genomic DNA from embryo biopsy samples was extracted, and crude DNA extracts were first amplified by a conventional PCR procedure followed by a nested PCR reaction with primers and probes that are designed for digital PCR amplification. Analysis of microfluidics-based PCR reactions showed that robust signals for normal α-globin and SEA-type deletion alleles, together with an internal control gene, can be routinely generated using crude embryo biopsies after a 10⁶-fold dilution of primary PCR products. The SEA-type deletion in cultured embryos can be sensitively diagnosed with the digital PCR procedure in clinics. The adoption of this robust PGD method could prevent, in a timely manner, the implantation of IVF embryos that are destined to develop Bart's hydrops fetalis. The results also help inform future development of a standard digital PCR procedure for cost-effective PGD of α-thalassemia in a standard IVF clinic. Copyright © 2017. Published by Elsevier B.V.

  4. Safety and cost-effectiveness of bridge therapies for invasive dental procedures in patients with mechanical heart valves.

    PubMed

    Won, Ki-Bum; Lee, Seung-Hyun; Chang, Hyuk-Jae; Shim, Chi-Young; Hong, Gue-Ru; Ha, Jong-Won; Chung, Namsik

    2014-07-01

    Bridge anticoagulation therapy is mostly utilized in patients with mechanical heart valves (MHV) receiving warfarin therapy during invasive dental procedures because of the risk of excessive bleeding related to highly vascular supporting dental structures. Bridge therapy using low molecular weight heparin may be an attractive option for invasive dental procedures; however, its safety and cost-effectiveness compared with unfractionated heparin (UFH) is uncertain. This study investigated the safety and cost-effectiveness of enoxaparin in comparison to UFH for bridge therapy in 165 consecutive patients (57±11 years, 35% men) with MHV who underwent invasive dental procedures. This study included 75 patients treated with UFH-based bridge therapy (45%) and 90 patients treated with enoxaparin-based bridge therapy (55%). The bleeding risk of dental procedures and the incidence of clinical adverse outcomes were not significantly different between the UFH group and the enoxaparin group. However, total medical costs were significantly lower in the enoxaparin group than in the UFH group (p<0.001). After multivariate adjustment, old age (≥65 years) was significantly associated with an increased risk of total bleeding independent of bridging methods (odds ratio, 2.51; 95% confidence interval, 1.15-5.48; p=0.022). Enoxaparin-based bridge therapy (β=-0.694, p<0.001) and major bleeding (β=0.296, p=0.045) were significantly associated with the medical costs within 30 days after dental procedures. Considering the benefit of enoxaparin in cost-effectiveness, enoxaparin may be more efficient than UFH for bridge therapy in patients with MHV who required invasive dental procedures.

  5. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.

  6. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  7. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
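
    A toy version of the GA-plus-FLC pairing is sketched below: a genetic algorithm tunes the centers of three triangular membership functions of a fuzzy controller that drives a pH value toward a setpoint. The plant model, fitness function and GA settings are all invented for illustration and bear no relation to the Bureau of Mines system.

    ```python
    # GA-tuned fuzzy controller for a toy acid-base pH loop (illustrative).
    import numpy as np

    rng = np.random.default_rng(8)

    def fuzzy_dose(err, centers):
        # triangular memberships for negative / zero / positive error
        mu = np.maximum(0, 1 - np.abs(err - centers) / 2.0)
        actions = np.array([-1.0, 0.0, 1.0])          # base / none / acid scaling
        return (mu @ actions) / (mu.sum() + 1e-9)     # centroid defuzzification

    def fitness(centers, steps=40):
        ph, cost = 9.0, 0.0
        for _ in range(steps):
            err = ph - 7.0
            ph += -0.5 * fuzzy_dose(err, centers) + rng.normal(0, 0.02)
            cost += err ** 2
        return -cost                                   # maximise negative cost

    pop = rng.uniform(-3, 3, size=(30, 3))
    for gen in range(40):                              # simple elitist GA
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)][-10:]           # truncation selection
        kids = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.2, (20, 3))
        pop = np.vstack([parents, kids])               # elitism + mutation
    print("tuned membership centers:", pop[np.argmax([fitness(i) for i in pop])])
    ```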

  8. An Efficient User Interface Design for Nursing Information System Based on Integrated Patient Order Information.

    PubMed

    Chu, Chia-Hui; Kuo, Ming-Chuan; Weng, Shu-Hui; Lee, Ting-Ting

    2016-01-01

    A user friendly interface can enhance the efficiency of data entry, which is crucial for building a complete database. In this study, two user interfaces (traditional pull-down menu vs. check boxes) are proposed and evaluated based on medical records with fever medication orders by measuring the time for data entry, steps for each data entry record, and the complete rate of each medical record. The result revealed that the time for data entry is reduced from 22.8 sec/record to 3.2 sec/record. The data entry procedures also have reduced from 9 steps in the traditional one to 3 steps in the new one. In addition, the completeness of medical records is increased from 20.2% to 98%. All these results indicate that the new user interface provides a more user friendly and efficient approach for data entry than the traditional interface.

  9. Principles for Designing Mathematical Tasks That Enhance Imitative and Creative Reasoning

    ERIC Educational Resources Information Center

    Lithner, Johan

    2017-01-01

    The design research programme learning by imitative and creative reasoning (LICR) studies whether, how and why tasks and teaching that enhance creative reasoning lead to a more productive struggle and more efficient learning than the common but inefficient task designs based on imitating given solution procedures. The purpose of this paper is to…

  10. A Facile Oxidation of Alcohols Using Pyridinium Chlorochromate/Silica Gel

    NASA Astrophysics Data System (ADS)

    Luzzio, Frederick A.; Fitch, Richard W.; Moore, William J.; Mudd, Kelli J.

    1999-07-01

    An efficient and convenient adaptation of the pyridinium chlorochromate (PCC) oxidation for an organic chemistry student exercise is based on the employment of reagent-grade silica gel, which simplifies workup and purification of the product. The procedures include the oxidation of 4-tert-butylcyclohexanol to 4-tert-butylcyclohexanone and d,l-menthol to d,l-menthone.

  11. An Evaluation of the Utility and Cost of Computerized Library Catalogs. Final Report.

    ERIC Educational Resources Information Center

    Dolby, J.L.; And Others

    This study analyzes the basic cost factors in the automation of library catalogs, with a separate examination of the influence of typography on the cost of printed catalogs and the use of efficient automatic error detection procedures in processing bibliographic records. The utility of automated catalogs is also studied, based on data from a…

  12. Agreement of Function Across Methods Used in School-Based Functional Assessment with Preadolescent and Adolescent Students

    ERIC Educational Resources Information Center

    Kwak, Meg M.; Ervin, Ruth A.; Anderson, Mary Z.; Austin, John

    2004-01-01

    As we begin to apply functional assessment procedures in mainstream educational settings, there is a need to explore options for identifying behavior function that are not only effective but efficient and practical for school personnel to employ. Attempts to simplify the functional assessment process are evidenced by the development of informant…

  13. Comparison of Three Tobacco Survey Methods with College Students: A Case Study

    ERIC Educational Resources Information Center

    James, Delores C. S.; Chen, W. William; Sheu, Jiunn-Jye

    2005-01-01

    The goals of this case study were to: (1) determine the efficiency and effectiveness of three survey methods--postal mail survey, web-based survey, and random in-class administration survey--in assessing tobacco-related attitudes and behaviors among college students and (2) compare the response rate and procedures of these three methods. There was…

  14. Diagnostic Accuracy of Multivariate Universal Screening Procedures for Reading in Upper Elementary Grades

    ERIC Educational Resources Information Center

    Klingbeil, David A.; Nelson, Peter M.; Van Norman, Ethan R.; Birr, Chris

    2017-01-01

    We examined the diagnostic accuracy and efficiency of three approaches to universal screening for reading difficulties using retrospective data from 1,307 students in Grades 3 through 5. School staff collected screening data using the Measures of Academic Progress (MAP), a curriculum-based measure (CBM), and running records (RR). The criterion…

  15. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    NASA Astrophysics Data System (ADS)

    S, Kyriacou; E, Kontoleontos; S, Weissenberger; L, Mangani; E, Casartelli; I, Skouteropoulou; M, Gattringer; A, Gehrer; M, Buchmayr

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.

  16. Design Guidelines for High-Performance Particle-Based Photoanodes for Water Splitting: Lanthanum Titanium Oxynitride as a Model.

    PubMed

    Landsmann, Steve; Maegli, Alexandra E; Trottmann, Matthias; Battaglia, Corsin; Weidenkaff, Anke; Pokrant, Simone

    2015-10-26

    Semiconductor powders are perfectly suited for the scalable fabrication of particle-based photoelectrodes, which can be used to split water using the sun as a renewable energy source. This systematic study is focused on variation of the electrode design using LaTiO2N as a model system. We present the influence of particle morphology on charge separation and transport properties combined with post-treatment procedures, such as necking and size-dependent co-catalyst loading. Five rules are proposed to guide the design of high-performance particle-based photoanodes by adding or varying several process steps. We also specify how much efficiency improvement can be achieved with each of the steps. For example, implementation of a connectivity network and surface area enhancement leads to a thirty-fold improvement in efficiency, and co-catalyst loading achieves an improvement in efficiency by a factor of seven. Some of these guidelines can be adapted to non-particle-based photoelectrodes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
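
    The VQ core is compactly demonstrated below: 2x2 blocks of synthetic subband coefficients are coded against a k-means codebook. The paper's schemes go further (inter-band prediction, variable rate, finite-state VQ); this baseline only shows the intra-band quantization step, on made-up data.

    ```python
    # Baseline intra-band vector quantization of a synthetic upper subband.
    import numpy as np
    from scipy.cluster.vq import kmeans, vq

    rng = np.random.default_rng(3)
    subband = rng.laplace(scale=2.0, size=(64, 64))       # synthetic upper subband

    # Split into 2x2 blocks -> 4-dimensional training vectors.
    blocks = subband.reshape(32, 2, 32, 2).transpose(0, 2, 1, 3).reshape(-1, 4)

    codebook, distortion = kmeans(blocks, 16)             # 16 codewords: 4 bits/block
    indices, _ = vq(blocks, codebook)                     # encode: nearest codeword
    decoded = codebook[indices]                           # decode
    mse = np.mean((blocks - decoded) ** 2)
    print(f"rate = 1.0 bit/coeff, MSE = {mse:.3f}, training distortion = {distortion:.3f}")
    ```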

  18. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks

    PubMed Central

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-01-01

    Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body to monitor vital signals, such as Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSN are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume a large amount of energy, power and memory during data transmission. It is therefore indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSN. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) for a simple key generation process to secure BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption. PMID:26131666
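
    The general idea behind physiological-signal keys can be sketched as follows: sensors on the same body observe nearly identical inter-pulse intervals (IPIs), so low-order bits of quantised IPIs can serve as shared key material. This is a generic illustration, not the paper's HRV algorithm; the values are synthetic, and real schemes add error correction or fuzzy commitment.

    ```python
    # Generic IPI-based key material for two sensors on one body (synthetic).
    import numpy as np

    def ipi_key_bits(ipis_ms, bits_per_ipi=2, q_ms=4.0):
        levels = np.floor(ipis_ms / q_ms).astype(int)     # coarse quantisation
        return [(lv >> b) & 1 for lv in levels for b in range(bits_per_ipi)]

    rng = np.random.default_rng(4)
    true_ipi = 820 + 40 * rng.standard_normal(32)          # one heartbeat series (ms)
    sensor_a = true_ipi + rng.normal(0, 0.5, 32)           # two sensors, small jitter
    sensor_b = true_ipi + rng.normal(0, 0.5, 32)

    key_a, key_b = ipi_key_bits(sensor_a), ipi_key_bits(sensor_b)
    agreement = np.mean(np.array(key_a) == np.array(key_b))
    print(f"64-bit keys agree on {agreement:.0%} of bits before error correction")
    ```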

  19. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks.

    PubMed

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-06-26

    Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body to monitor vital signals, such as Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSN are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume a large amount of energy, power and memory during data transmission. It is therefore indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSN. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) for a simple key generation process to secure BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption.

  20. Fast ground filtering for TLS data via Scanline Density Analysis

    NASA Astrophysics Data System (ADS)

    Che, Erzhuo; Olsen, Michael J.

    2017-07-01

    Terrestrial Laser Scanning (TLS) efficiently collects 3D information based on lidar (light detection and ranging) technology. TLS has been widely used in topographic mapping, engineering surveying, forestry, industrial facilities, cultural heritage, and so on. Ground filtering is a common procedure in lidar data processing, which separates the point cloud data into ground points and non-ground points. Effective ground filtering is helpful for subsequent procedures such as segmentation, classification, and modeling. Numerous ground filtering algorithms have been developed for Airborne Laser Scanning (ALS) data. However, many of these are error-prone when applied to TLS data because of its different angle of view and highly variable resolution. Further, many ground filtering techniques are limited in application within challenging topography and experience difficulty coping with some objects such as short vegetation, steep slopes, and so forth. Lastly, due to the large size of point cloud data, operations such as data traversing, multiple iterations, and neighbor searching significantly affect the computation efficiency. In order to overcome these challenges, we present an efficient ground filtering method for TLS data via a Scanline Density Analysis, which is very fast because it exploits the grid structure storing TLS data. The process first separates the ground candidates, density features, and unidentified points based on an analysis of point density within each scanline. Second, a region growth using the scan pattern is performed to cluster the ground candidates and further refine the ground points (clusters). In the experiment, the effectiveness, parameter robustness, and efficiency of the proposed method are demonstrated with datasets collected from an urban scene and a natural scene, respectively.
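
    A toy version of the density screen conveys the general idea, though not the authors' algorithm: within one scanline, ground returns tend to form dense, low-lying runs, so a local density test combined with a height cut flags ground candidates. The window sizes, thresholds and synthetic scanline below are all invented.

    ```python
    # Toy per-scanline density screen for ground candidates (not the paper's method).
    import numpy as np

    def ground_candidates(scanline_xyz, window=10, radius=0.5, min_pts=6):
        """scanline_xyz: (n, 3) points of one scanline, in firing order."""
        z = scanline_xyz[:, 2]
        dense = np.zeros(len(scanline_xyz), dtype=bool)
        for i in range(len(scanline_xyz)):
            lo, hi = max(0, i - window), min(len(scanline_xyz), i + window)
            d = np.linalg.norm(scanline_xyz[lo:hi] - scanline_xyz[i], axis=1)
            dense[i] = np.sum(d < radius) >= min_pts
        return dense & (z < np.percentile(z, 90))        # dense and low-lying

    rng = np.random.default_rng(10)
    ground = np.column_stack([np.linspace(0, 10, 300), np.zeros(300),
                              rng.normal(0, 0.02, 300)])
    canopy = np.column_stack([rng.uniform(0, 10, 40), np.zeros(40),
                              rng.uniform(1.5, 4.0, 40)])
    mask = ground_candidates(np.vstack([ground, canopy]))
    print(f"{mask.sum()} of 340 points flagged as ground candidates")
    ```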

  1. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big data processing and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared to directly calculated ZNCC, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, namely a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
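
    The projection trick itself is simple to demonstrate: instead of correlating a two-dimensional window, correlate its row-sum and column-sum projections, collapsing the 2D matching problem to two 1D correlations. The sketch below omits the paper's window deformation and sub-pixel fitting; the image pair is synthetic.

    ```python
    # Displacement from 1D ZNCC of window projections (illustrative).
    import numpy as np

    def zncc_1d(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return np.correlate(a, b, mode="full") / len(a)

    def projection_displacement(win1, win2):
        dx = np.argmax(zncc_1d(win2.sum(axis=0), win1.sum(axis=0))) - (win1.shape[1] - 1)
        dy = np.argmax(zncc_1d(win2.sum(axis=1), win1.sum(axis=1))) - (win1.shape[0] - 1)
        return dx, dy

    rng = np.random.default_rng(5)
    frame = rng.random((96, 96))
    w1 = frame[32:64, 32:64]
    w2 = frame[35:67, 30:62]                  # pattern displaced by dx=+2, dy=-3
    print(projection_displacement(w1, w2))    # -> (2, -3) on this synthetic pair
    ```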

  2. 10 CFR 435.305 - Alternative compliance procedure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Alternative compliance procedure. 435.305 Section 435.305 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY STANDARDS FOR NEW FEDERAL LOW-RISE RESIDENTIAL BUILDINGS Mandatory Energy Efficiency Standards for Federal Residential Buildings § 435.305...

  3. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so the effectiveness relies heavily on the post-processing method and the efficiency is also poor. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and then improve the clustering and classification performance. Meanwhile, this model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be directly obtained without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive the efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Locating Structural Centers: A Density-Based Clustering Method for Community Detection

    PubMed Central

    Liu, Gongshen; Li, Jianhua; Nees, Jan P.

    2017-01-01

    Uncovering underlying community structures in complex networks has received considerable attention because of its importance in understanding structural attributes and group characteristics of networks. The algorithmic identification of such structures is a significant challenge. Local expanding methods have proven to be efficient and effective in community detection, but most methods are sensitive to initial seeds and built-in parameters. In this paper, we present a local expansion method by density-based clustering, which aims to uncover the intrinsic network communities by locating the structural centers of communities based on a proposed structural centrality. The structural centrality takes into account the local density of nodes and the relative distance between nodes. The proposed algorithm expands a community from the structural center to the border with a single local search procedure. The local expanding procedure follows a heuristic strategy that allows it to find complete community structures. Moreover, it can identify different node roles (cores and outliers) in communities by defining a border region. The experiments involve both real-world and artificial networks and give a comparative view for evaluating the proposed method. The results of these experiments show that the proposed method performs more efficiently than current state-of-the-art methods while achieving comparable clustering performance. PMID:28046030
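
    The structural-centrality recipe resembles density-peaks clustering, and a compact sketch along those lines follows: rank nodes by local density rho times the distance delta to the nearest denser node, take the top-k as structural centers, and let members inherit the label of their nearest denser neighbor. This is a simplified stand-in for the paper's algorithm; it assumes the networkx package for the toy graph, and node degree stands in for local density.

    ```python
    # Density-peaks-style structural centers on a toy network (simplified).
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(6)
    G = nx.planted_partition_graph(3, 15, 0.8, 0.05, seed=6)   # 3 hidden communities
    n = G.number_of_nodes()
    D = dict(nx.all_pairs_shortest_path_length(G))

    rho = np.array([G.degree[i] for i in range(n)], float)
    rho += rng.uniform(0, 1e-3, n)                             # break degree ties
    order = np.argsort(-rho)                                   # decreasing density
    delta, parent = np.zeros(n), np.zeros(n, dtype=int)
    for rank, i in enumerate(order):
        if rank == 0:
            continue                                           # global density peak
        denser = order[:rank]                                  # nodes with higher rho
        parent[i] = min(denser, key=lambda j: D[i].get(j, n))
        delta[i] = D[i].get(parent[i], n)
    delta[order[0]] = delta.max()                              # peak gets the largest delta

    centers = np.argsort(rho * delta)[-3:]                     # structural centers
    label = {int(c): k for k, c in enumerate(centers)}
    for i in order:                                            # expand outwards by density
        if int(i) not in label:
            label[int(i)] = label[int(parent[i])]
    print([label[i] for i in range(n)])
    ```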

  5. Interprofessional and interdisciplinary simulation-based training leads to safe sedation procedures in the emergency department.

    PubMed

    Sauter, Thomas C; Hautz, Wolf E; Hostettler, Simone; Brodmann-Maeder, Monika; Martinolli, Luca; Lehmann, Beat; Exadaktylos, Aristomenis K; Haider, Dominik G

    2016-08-02

    Sedation is a procedure required for many interventions in the Emergency Department (ED), such as reductions, surgical procedures or cardioversions. However, especially under emergency conditions with high-risk patients and rapidly changing interdisciplinary and interprofessional teams, the procedure carries important risks. It is thus vital but difficult to implement a standard operating procedure for sedation procedures in any ED. Reports on implementation strategies, as well as on their success, are currently lacking. This study describes the development, implementation and clinical evaluation of an interprofessional and interdisciplinary simulation-based sedation training concept. All physicians and nurses with specialised training in emergency medicine at the Berne University Department of Emergency Medicine participated in a mandatory interdisciplinary and interprofessional simulation-based sedation training. The curriculum consisted of an individual self-learning module, an airway skill training course, three simulation-based team training cases, and a final practical learning course in the operating theatre. Before and after each training session, self-efficacy, awareness of emergency procedures, knowledge of sedation medication and crisis resource management were assessed with a questionnaire. Changes in these measures were compared via paired tests, separately for groups formed based on experience and profession. To assess the clinical effect of training, we collected patient and team satisfaction as well as duration and complications for all sedations in the ED within the year after implementation. We further compared the time to the beginning of the procedure, the duration of the procedure, and the time until discharge with the corresponding values from the one-year period before implementation. Cohen's d was calculated as the effect size for all statistically significant tests. Fifty staff members (26 nurses and 24 physicians) participated in the training. In all subgroups, there is a significant increase in self-efficacy and knowledge with a high effect size (dz = 1.8). The learning is independent of profession and experience level. In the clinical evaluation after implementation, we found no major complications among the sedations performed. Time to procedure significantly improved after the introduction of the training (d = 0.88). Learning is independent of previous working experience and equally effective in raising self-efficacy and knowledge in all professional groups. Clinical outcome evaluation confirms the concept's safety and feasibility. An interprofessional and interdisciplinary simulation-based sedation training is an efficient way to implement a conscious sedation concept in an ED.

  6. CT and MRI slice separation evaluation by LabView developed software.

    PubMed

    Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe

    2018-02-01

    The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice separation during multislice acquisition requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the midpoint of the full width at half maximum (FWHM) in real time while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView one, which can be used on any CT and MRI diagnostic device. Copyright © 2017. Published by Elsevier GmbH.
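
    The measurement core translates directly into code: locate the midpoint of the FWHM of each slice profile by linear interpolation of the half-maximum crossings, then take the distance between the midpoints of two consecutive slices. The Gaussian profiles below are synthetic stand-ins for phantom profiles.

    ```python
    # FWHM midpoints and slice separation from two profiles (synthetic data).
    import numpy as np

    def fwhm_midpoint(x, profile):
        half = profile.max() / 2.0
        above = np.where(profile >= half)[0]
        i, j = above[0], above[-1]                   # first/last samples above half max
        # linear interpolation of the two half-maximum crossings
        x_left = np.interp(half, [profile[i - 1], profile[i]], [x[i - 1], x[i]])
        x_right = np.interp(half, [profile[j + 1], profile[j]], [x[j + 1], x[j]])
        return 0.5 * (x_left + x_right)

    x = np.linspace(0, 50, 501)                      # position along phantom (mm)
    slice1 = np.exp(-0.5 * ((x - 20.0) / 2.5) ** 2)  # synthetic slice profiles
    slice2 = np.exp(-0.5 * ((x - 27.5) / 2.5) ** 2)

    separation = fwhm_midpoint(x, slice2) - fwhm_midpoint(x, slice1)
    print(f"slice separation ~ {separation:.2f} mm")  # ~7.50 mm here
    ```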

  7. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  8. An Attempt at Quantifying Factors that Affect Efficiency in the Management of Solid Waste Produced by Commercial Businesses in the City of Tshwane, South Africa

    PubMed Central

    Worku, Yohannes; Muchie, Mammo

    2012-01-01

    Objective. The objective was to investigate factors that affect the efficient management of solid waste produced by commercial businesses operating in the city of Pretoria, South Africa. Methods. Data was gathered from 1,034 businesses. Efficiency in solid waste management was assessed by using a structural time-based model designed for evaluating efficiency as a function of the length of time required to manage waste. Data analysis was performed using statistical procedures such as frequency tables, Pearson's chi-square tests of association, and binary logistic regression analysis. Odds ratios estimated from logistic regression analysis were used for identifying key factors that affect efficiency in the proper disposal of waste. Results. The study showed that 857 of the 1,034 businesses selected for the study (83%) were found to be sufficiently efficient with regard to the proper collection and disposal of solid waste. Based on odds ratios estimated from binary logistic regression analysis, efficiency in the proper management of solid waste was significantly influenced by 4 predictor variables. These 4 influential predictor variables are lack of adherence to waste management regulations, wrong perception, failure to provide customers with enough trash cans, and operation of businesses by employed managers, in a decreasing order of importance. PMID:23209483
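
    The analysis pattern, binary logistic regression with odds ratios read off as exponentiated coefficients, is sketched below. The predictor names mirror the study's factors, but the data are simulated and the effect sizes assumed, not taken from the survey.

    ```python
    # Logistic regression odds ratios on simulated survey-style data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 1034
    X = rng.integers(0, 2, size=(n, 4)).astype(float)    # 4 binary risk factors
    logit = -1.0 + X @ np.array([1.2, 0.9, 0.6, 0.4])    # assumed true effects
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    odds_ratios = np.exp(model.params[1:])
    factors = ["non-adherence", "wrong perception", "too few trash cans", "employed manager"]
    for name, or_ in zip(factors, odds_ratios):
        print(f"{name}: OR = {or_:.2f}")
    ```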

  9. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data.

    PubMed

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-16

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.
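
    A joint Richardson-Lucy update is easy to state in code: several views of the same object, each with its own PSF, contribute multiplicative corrections that are averaged into one estimate per iteration. The 1D signals and shifted Gaussian PSFs below are synthetic stand-ins for the MSIM image stacks, not the authors' implementation.

    ```python
    # Joint Richardson-Lucy deconvolution of multiple views (1D toy example).
    import numpy as np
    from scipy.signal import fftconvolve

    def joint_rl(data, psfs, n_iter=100, eps=1e-9):
        est = np.full_like(data[0], data[0].mean())
        for _ in range(n_iter):
            corr = np.zeros_like(est)
            for d, h in zip(data, psfs):
                blurred = fftconvolve(est, h, mode="same")
                corr += fftconvolve(d / (blurred + eps), h[::-1], mode="same")
            est *= corr / len(data)                  # averaged multiplicative update
        return est

    rng = np.random.default_rng(11)
    x = np.linspace(-1, 1, 256)
    truth = (np.abs(x) < 0.05) * 1.0 + (np.abs(x - 0.4) < 0.02) * 2.0
    psfs = [np.exp(-0.5 * ((np.arange(-15, 16) - s) / 3.0) ** 2) for s in (-2, 0, 2)]
    psfs = [h / h.sum() for h in psfs]
    data = [rng.poisson(np.clip(200 * fftconvolve(truth, h, mode="same"), 0, None)) / 200
            for h in psfs]
    est = joint_rl(data, psfs)
    print("peak value: truth", truth.max(), "blurred", round(data[1].max(), 2),
          "restored", round(est.max(), 2))
    ```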

  10. A joint Richardson—Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Kaminski, Clemens F.

    2015-03-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.

  11. Differential results integrated with continuous and discrete gravity measurements between nearby stations

    NASA Astrophysics Data System (ADS)

    Xu, Weimin; Chen, Shi; Lu, Hongyan

    2016-04-01

    Integrated gravity observation is an efficient way to study the spatial and temporal characteristics of dynamics and tectonics. Differential measurement based on continuous and discrete gravity observations is highly competitive, in terms of both efficiency and precision, with single-station results. The differential continuous gravity variation between nearby stations is based on observations from Scintrex g-Phone relative gravimeters at each station; it is combined with repeated mobile relative measurements or absolute results to study regional integrated gravity changes. First, we preprocess the continuous records with the Tsoft software and calculate the theoretical earth tides and ocean tides with the "MT80TW" program, using high-precision tidal parameters from "WPARICET". Atmospheric loading effects and complex drift are treated rigorously in the procedure. Through the above steps we obtain the continuous gravity at every station and can calculate the continuous gravity variation between nearby stations, which is called the differential continuous gravity change. The differential results between related stations are then calculated from the repeated gravity measurements, which are carried out once or twice a year around the gravity stations; hence we obtain the discrete gravity results between nearby stations. Finally, the continuous and discrete gravity results are combined at the same related stations, including absolute gravity results if necessary, to obtain the regional integrated gravity changes. These differential gravity results are more accurate and effective for dynamic monitoring, studies of regional hydrologic effects, tectonic activity, and other geodynamical research. The time-frequency characteristics of the continuous gravity results are discussed to ensure the accuracy and efficiency of the procedure.

  12. Report: EPA Needs Policies and Procedures to Manage Public Pesticide Petitions in a Transparent and Efficient Manner

    EPA Pesticide Factsheets

    Report #16-P-0019, October 27, 2015. OPP’s lack of policies and procedures to manage public pesticide petitions in a transparent and efficient manner can result in unreasonable delay lawsuits costing the agency time and resources.

  13. Evaluation of economic efficiencies in clinical retina practice: activity-based cost analysis and modeling to determine impacts of changes in patient management

    PubMed Central

    Murray, Timothy G; Tornambe, Paul; Dugel, Pravin; Tong, Kuo Bianchini

    2011-01-01

    Background The purpose of this study is to report the use of activity-based cost analysis to identify areas of practice efficiencies and inefficiencies within a large academic retinal center and a small single-specialty group. This analysis establishes a framework for evaluating rapidly shifting clinical practices (anti-vascular endothelial growth factor therapy, microincisional vitrectomy surgery) and incorporating changing reimbursements for care delivery (intravitreal injections, optical coherence tomography [OCT]) to determine the impact on practice profitability. Pro forma modeling targeted the impact of declining reimbursement for OCT imaging and intravitreal injection using a strategy that incorporates activity-based cost analysis into a direct evaluation schema for clinical operations management. Methods Activity-based costing analyses were performed at two different types of retinal practices in the US, ie, a small single-specialty group practice and an academic hospital-based practice (Bascom Palmer Eye Institute). Retrospective claims data were utilized to identify all procedures performed and billed, submitted charges, allowed charges, and net collections from each of these two practices for the calendar years 2005–2006 and 2007–2008. A pro forma analysis utilizing current reimbursement profiles was performed to determine the impact of altered reimbursement on practice profitability. All analyses were performed by a third party consulting firm. Results The small single-specialty group practice outperformed the academic hospital-based practice on almost all markers of efficiency. In the academic hospital-based practice, only four service lines were profitable, ie, nonlaser surgery, laser surgery, non-OCT diagnostics, and injections. Profit margin varied from 62% for nonlaser surgery to 1% for intravitreal injections. Largest negative profit contributions were associated with office visits and OCT imaging. Conclusion Activity-based cost analysis is a powerful tool to evaluate retinal practice efficiencies. These two distinct practices were able to provide significant increases in clinical care (office visits, ophthalmic imaging, and patient procedures) through maintaining efficiencies of care. Pro forma analysis of 2011 data noted that OCT payments to facilities and physicians continue to decrease dramatically and that this payment decrease further reduced the profitability for the two largest aspects of these retinal practices, ie, intravitreal injections and OCT retinal imaging. Ultimately, all retinal practices are at risk for significant shifts in financial health related to rapidly evolving changes in patterns of care and reimbursement associated with providing outstanding clinical care. PMID:21792278

  14. A hybrid structure for the storage and manipulation of very large spatial data sets

    USGS Publications Warehouse

    Peuquet, Donna J.

    1982-01-01

    The map data input and output problem for geographic information systems is rapidly diminishing with the increasing availability of mass digitizing, direct spatial data capture and graphics hardware based on raster technology. Although a large number of efficient raster-based algorithms exist for performing a wide variety of common tasks on these data, there are a number of procedures which are more efficiently performed in vector mode or for which raster mode equivalents of current vector-based techniques have not yet been developed. This paper presents a hybrid spatial data structure, named the 'vaster' structure, which can utilize the advantages of both raster and vector structures while potentially eliminating, or greatly reducing, the need for raster-to-vector and vector-to-raster conversion. Other advantages of the vaster structure are also discussed.

  15. An experimental and theoretical study of reaction mechanisms between nitriles and hydroxylamine.

    PubMed

    Vörös, Attila; Mucsi, Zoltán; Baán, Zoltán; Timári, Géza; Hermecz, István; Mizsey, Péter; Finta, Zoltán

    2014-10-28

    The industrially relevant reaction between nitriles and hydroxylamine yielding amidoximes was studied in different molecular solvents and in ionic liquids. In industry, this procedure is carried out on the ton scale in alcohol solutions and the above transformation produces a significant amount of unexpected amide by-product, depending on the nature of the nitrile, which can cause further analytical and purification issues. Although there were earlier attempts to propose mechanisms for this transformation, the real reaction pathway is still under discussion. A new detailed reaction mechanistic explanation, based on theoretical and experimental proof, is given to augment the former mechanisms, which allowed us to find a more efficient, side-product free procedure. Interpreting the theoretical results obtained, it was shown that the application of specific imidazolium, phosphonium and quaternary ammonium based ionic liquids could decrease simultaneously the reaction time while eliminating the amide side-product, leading to the targeted product selectively. This robust and economic procedure now affords a fast, selective amide free synthesis of amidoximes.

  16. Evaluation of the effectiveness and efficiency of two stimulus prompt strategies with severely handicapped students.

    PubMed Central

    Steege, M W; Wacker, D P; McMahon, C M

    1987-01-01

    In this study we compared the effectiveness and efficiency of two treatment packages that used stimulus prompt sequences and task analyses for teaching community living skills to severely handicapped students. Four severely and multiply handicapped students were trained to perform four tasks: (a) making toast, (b) making popcorn, (c) operating a clothes dryer, and (d) operating a washing machine. Following baseline, each student was exposed to two types of training procedures, each involving a task analysis of the target behavior. Training Procedure 1 (Traditional) utilized a least-to-most restrictive prompt sequence. Training Procedure 2 (Prescriptive) utilized ongoing behavioral assessment data to identify discriminative stimuli. The assessment data were used to prescribe instructional prompts across successive training trials. Performance on the tasks was evaluated within a combination multiple baseline (across subjects) and probe (across tasks) design. Training conditions were counterbalanced across subjects and tasks. Results indicated that both training procedures were equally effective in increasing independent task acquisition for subjects on all tasks; however, the prescriptive procedure was the more efficient procedure. PMID:3667479

  17. Optimisation of solid-phase microextraction coupled to HPLC-UV for the determination of organochlorine pesticides and their metabolites in environmental liquid samples.

    PubMed

    Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J

    2006-09-01

    A solid-phase microextraction (SPME) procedure using two commercial fibers coupled with high-performance liquid chromatography (HPLC) is presented for the extraction and determination of organochlorine pesticides in water samples. We have evaluated the extraction efficiency of this kind of compound using two different fibers: 60-μm polydimethylsiloxane-divinylbenzene (PDMS-DVB) and Carbowax/TPR-100 (CW/TPR). Parameters involved in the extraction and desorption procedures (e.g. extraction time, ionic strength, extraction temperature, desorption and soaking time) were studied and optimized to achieve the maximum efficiency. Results indicate that both PDMS-DVB and CW/TPR fibers are suitable for the extraction of this type of compound, and a simple calibration curve method based on simple aqueous standards can be used. All the correlation coefficients were better than 0.9950, and the RSDs ranged from 7% to 13% for the 60-μm PDMS-DVB fiber and from 3% to 10% for the CW/TPR fiber. Optimized procedures were applied to the determination of a mixture of six organochlorine pesticides in environmental liquid samples (sea, sewage and ground waters), employing HPLC with UV-diode array detector.

  18. Developmental dissociation in the neural responses to simple multiplication and subtraction problems

    PubMed Central

    Prado, Jérôme; Mutreja, Rachna; Booth, James R.

    2014-01-01

    Mastering single-digit arithmetic during the school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test these competing hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were indistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved both by increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323

  19. An efficient and numerically stable procedure for generating sextic force fields in normal mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz

    In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3 mode representation quartic force fields in curvilinear normal mode coordinates to 4 mode representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm⁻¹ in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.

  20. KommonBase - A precise direct bonding system for labial fixed appliances.

    PubMed

    Miyashita, Wataru; Komori, Akira; Takemoto, Kyoto

    2017-09-01

    "KommonBase" is a system designed to customize the bracket base by means of an extended resin base covering the tooth. This system enables precise bracket placement and accurate fit on teeth. Moreover, KommonBase can be easily fabricated in a laboratory and bonded on each tooth using simple clinical procedures. Straight-wire treatment without wire bending was achieved in the clinical cases presented in this article using the KommonBase system for a labial fixed appliance. The application of KommonBase to the vestibular side enables efficient orthodontic treatment using simple mechanics. Copyright © 2017 CEO. Published by Elsevier Masson SAS. All rights reserved.

  1. Case mix-adjusted cost of colectomy at low-, middle-, and high-volume academic centers.

    PubMed

    Chang, Alex L; Kim, Young; Ertel, Audrey E; Hoehn, Richard S; Wima, Koffi; Abbott, Daniel E; Shah, Shimul A

    2017-05-01

    Efforts to regionalize surgery based on procedure-volume thresholds may have consequences for the cost of health care delivery. This study aims to delineate the relationship between hospital volume, case mix, and variability in the cost of operative intervention, using colectomy as the model. All patients undergoing colectomy (n = 90,583) at 183 academic hospitals from 2009-2012 in the University HealthSystems Consortium Database were studied. Patient and procedure details were used to generate a case mix-adjusted predictive model of total direct costs. Observed-to-expected costs were compared across centers grouped by overall procedure volume. Patient and procedure characteristics differed significantly between volume tertiles. Observed costs at high-volume centers were lower than at middle- and low-volume centers. According to our predictive model, high-volume centers cared for a less expensive case mix than middle- and low-volume centers ($12,786 vs $13,236 and $14,497, P < .01). Our predictive model accounted for 44% of the variation in costs. Overall efficiency (standardized observed-to-expected costs) was greatest at high-volume centers compared with the middle- and low-volume tertiles (z score -0.16 vs 0.02 and -0.07, P < .01). Hospital costs and cost efficiency after elective colectomy vary significantly between centers and may be attributed partially to patient differences at those centers. These data demonstrate that a significant proportion of the cost variation is due to a distinct case mix at low-volume centers, which may lead to perceived poor performance at these centers.
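
    Once a case-mix model is in hand, the observed-to-expected comparison above is straightforward bookkeeping. A sketch under stated assumptions: the linear "case-mix model" and all data below are invented placeholders, not the study's regression.

      # Observed-to-expected (O/E) cost ratios per center, standardized to z scores.
      import numpy as np

      rng = np.random.default_rng(0)
      n, centers = 1000, 20
      age = rng.integers(40, 90, n)                    # crude case-mix proxies
      emergent = rng.integers(0, 2, n)
      hospital = rng.integers(0, centers, n)

      expected = 8000 + 60 * age + 3000 * emergent     # placeholder case-mix model
      observed = expected * rng.lognormal(0.0, 0.2, n) # simulated actual costs

      oe = np.array([observed[hospital == h].sum() / expected[hospital == h].sum()
                     for h in range(centers)])
      z = (oe - oe.mean()) / oe.std(ddof=1)            # standardized O/E per center
      print(np.round(z, 2))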

  2. Effect of K-N-humates on dry matter production and nutrient use efficiency of maize in Sarawak, Malaysia.

    PubMed

    Petrus, Auldry Chaddy; Ahmed, Osumanu Haruna; Muhamad, Ab Majid Nik; Nasir, Hassan Mohammad; Jiwan, Make

    2010-07-06

    Agricultural waste, such as sago waste (SW), is one of the sources of pollution of streams and rivers in Sarawak, particularly those situated near sago processing plants. In addition, unbalanced and excessive use of chemical fertilizers can cause soil and water pollution. Humic substances can be used as organic fertilizers, which reduce pollution. The objectives of this study were to produce K- and ammonium-based organic fertilizer from composted SW and to determine the efficiency of the organic-based fertilizer produced. Humic substances were isolated using standard procedures. Liquid fertilizers were formulated, except for T2 (NPK fertilizer), which was in solid form. There were six treatments with three replications. Organic fertilizers were applied to soil in pots on the 10th day after sowing (DAS), but on the 28th DAS only the plants of T2 were fertilized. The plant samples were harvested on the 57th DAS, during the tassel stage. The dry matter of plant parts (leaves, stems, and roots) was determined and analyzed for N, P, and K using standard procedures. The soil of every treatment was also analyzed for exchangeable K, Ca, Mg, and Na, organic matter, organic carbon, available P, pH, total N and P, and nitrate and ammonium contents using standard procedures. Treatments with humin (T5 and T6) showed remarkable results for dry matter production; N, P, and K contents; their uptake; and their use efficiency by maize. The inclusion of humin might have loosened the soil and increased soil porosity, hence the better growth of the plants. Humin plus inorganic fertilizer provided additional nutrients for the plants. The addition of inorganic fertilizer to compost combines quick- and slow-release sources, which supply N throughout the crop growth period. Common fertilization by surface application of T2 without any additives (acidic and high-CEC materials) causes N and K to be easily lost. High Ca in the soil may have reacted with phosphate from the fertilizer to form Ca phosphate, an insoluble compound that is generally not available to plants, especially roots. Mixing soil with humin produced from composted SW before applying fertilizers (T5 and T6) significantly increased maize dry matter production and nutrient use efficiency. Additionally, this practice not only improves N, P, and K use efficiency but also helps to reduce the use of N-, P-, and K-based fertilizers by 50%.

  3. Pattern of informed consent acquisition in patients undergoing emergent endovascular treatment for acute ischemic stroke

    PubMed Central

    Qureshi, Adnan I; Gilani, Sarwat; Adil, Malik M; Majidi, Shahram; Hassan, Ameer E; Miley, Jefferson T; Rodriguez, Gustavo J

    2014-01-01

    Background Telephone consent and two-physician consent based on medical necessity are alternate strategies for time-sensitive medical decisions but are not uniformly accepted for clinical practice or recruitment into clinical trials. We determined the rate of, and outcomes associated with, alternate consenting strategies in consecutive acute ischemic stroke patients receiving emergent endovascular treatment. Methods We divided patients into those treated based on in-person consent and those treated based on alternate strategies. We identified clinical and procedural differences and differences in hospital outcomes: symptomatic ICH and favorable outcome (defined by a modified Rankin Scale score of 0–2 at discharge) based on consenting methodology. Results Of a total of 159 patients treated, 119 were treated based on in-person consent (by the patient in 27 and by a legally authorized representative in 92 procedures). Another 40 patients were treated using alternate strategies (20 telephone consents and 20 two-physician consents based on medical necessity). There was no difference in mean age or proportion of men between the two groups based on consenting methodology. There was a significantly greater time interval between CT scan and initiation of the endovascular procedure in those in whom in-person consent was obtained (117 ± 65 min versus 101 ± 45 min, p = 0.01). There was no significant difference in rates of ICH (9% versus 8%, p = 0.9) or favorable outcome at discharge (28% versus 30%, p = 0.8). Conclusions Consent through alternate strategies does not adversely affect procedural characteristics or patient outcome and may be more time-efficient than the in-person consenting process. PMID:25132906

  4. A seismic optimization procedure for reinforced concrete framed buildings based on eigenfrequency optimization

    NASA Astrophysics Data System (ADS)

    Arroyo, Orlando; Gutiérrez, Sergio

    2017-07-01

    Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, efficient, and simple to implement. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures.
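
    The quantity such a method manipulates is a natural frequency, obtained for a framed building from the generalized eigenproblem Kφ = ω²Mφ. A minimal sketch of that underlying computation for a 10-storey shear-frame idealization (the stiffness and mass values are invented, not the article's buildings):

      # Fundamental frequency of a uniform 10-storey shear frame.
      import numpy as np
      from scipy.linalg import eigh

      n = 10
      k = np.full(n, 2.0e8)            # storey stiffnesses (N/m), hypothetical
      m = np.full(n, 3.0e5)            # storey masses (kg), hypothetical

      K = np.zeros((n, n))             # assemble tridiagonal shear-frame stiffness
      for i in range(n):
          K[i, i] += k[i]
          if i + 1 < n:
              K[i, i] += k[i + 1]
              K[i, i + 1] = K[i + 1, i] = -k[i + 1]
      M = np.diag(m)

      w2, phi = eigh(K, M)             # generalized eigenproblem, ascending order
      print(f"fundamental frequency: {np.sqrt(w2[0]) / (2 * np.pi):.2f} Hz")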

  5. New evidence on the efficacy of a boronic acid-based derivatization method to identify sugars in plant material by gas chromatography-mass spectrometry.

    PubMed

    Faraco, Marianna; Fico, Daniela; Pennetta, Antonio; De Benedetto, Giuseppe E

    2016-10-01

    This work presents an analytical procedure based on gas chromatography-mass spectrometry which allows the determination of aldoses (glucose, mannose, galactose, arabinose, xylose, fucose, rhamnose) and ketoses (fructose) in plant material. One peak for each target carbohydrate was obtained by using an efficient derivatization employing methylboronic acid and acetic anhydride sequentially, whereas baseline separation of the analytes was accomplished using an ionic liquid capillary column. First, the proposed method was optimized and validated. Subsequently, it was applied to identify the carbohydrates present in plant material. Finally, the procedure was successfully applied to samples from a seventeenth-century painting, thus highlighting the occurrence of starch glue and fruit tree gum as polysaccharide materials.

  6. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up training and testing. Because labeled data were scarce, we transferred the convolutional layers of the ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrained the model using the alternating optimization training strategy. The proposed model was tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time and with high accuracy, performing much better than traditional methods.

  7. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  8. 40 CFR 63.749 - Compliance dates and determinations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... control system efficiency, Ek, as determined using the procedures specified in § 63.750(g) for control systems containing carbon adsorbers and in § 63.750(h) for control systems with other control devices, is... device is used, (A) The overall control system efficiency, Ek, as determined using the procedures...

  9. 40 CFR 63.749 - Compliance dates and determinations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... control system efficiency, Ek, as determined using the procedures specified in § 63.750(g) for control systems containing carbon adsorbers and in § 63.750(h) for control systems with other control devices, is... device is used, (A) The overall control system efficiency, Ek, as determined using the procedures...

  10. 40 CFR 63.749 - Compliance dates and determinations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... control system efficiency, Ek, as determined using the procedures specified in § 63.750(g) for control systems containing carbon adsorbers and in § 63.750(h) for control systems with other control devices, is... device is used, (A) The overall control system efficiency, Ek, as determined using the procedures...

  11. 40 CFR 63.749 - Compliance dates and determinations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... control system efficiency, Ek, as determined using the procedures specified in § 63.750(g) for control systems containing carbon adsorbers and in § 63.750(h) for control systems with other control devices, is... device is used, (A) The overall control system efficiency, Ek, as determined using the procedures...

  12. 40 CFR 63.749 - Compliance dates and determinations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... control system efficiency, Ek, as determined using the procedures specified in § 63.750(g) for control systems containing carbon adsorbers and in § 63.750(h) for control systems with other control devices, is... device is used, (A) The overall control system efficiency, Ek, as determined using the procedures...

  13. An Interactive Multiobjective Programming Approach to Combinatorial Data Analysis.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2001-01-01

    Describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems that uses a dynamic-programming algorithm to partially generate the efficient set of sequences for small to medium-sized problems and a multioperational heuristic to estimate the efficient set for larger problems. Applies the procedure to an…

  14. An Investigation of the Efficiency of Various Observational Procedures.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann; Yeager, John L.

    Several sampling procedures for collecting observational data on student activities were studied in an effort to determine their relative efficiency. The setting was a fifth grade Individually Prescribed Instruction (IPI) mathematics class of thirty-three pupils. A criterion measure was obtained by cumulating the measurements obtained on the…

  15. Numerical Procedures for Inlet/Diffuser/Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.

    1998-01-01

    Two primitive-variable, pressure-based, flux-split RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. Considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as demonstrated herein.
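
    The efficiency of the second method rests on a standard numerical pattern: factor the inflow cross-plane matrix once, then only back-substitute at every subsequent station. A generic sketch of that pattern (the matrix below is a random stand-in, not the RNS operator):

      # Factor once, back-substitute many times.
      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)  # well conditioned

      lu, piv = lu_factor(A)                   # one expensive LU factorization

      for station in range(50):                # march downstream, reusing the LU
          residual = rng.standard_normal(200)  # stand-in for the local residual
          x = lu_solve((lu, piv), residual)    # cheap back-substitution only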

  16. Fluorometric Method for Determining the Efficiency of Spun-Glass Air Filtration Media

    PubMed Central

    Sullivan, James F.; Songer, Joseph R.; Mathis, Raymond G.

    1967-01-01

    The procedures and equipment needed to measure filtration efficiency by means of fluorescent aerosols are described. The filtration efficiency of individual lots of spun-glass air filtration medium or of entire air filtration systems employing such media was determined. Data relating to the comparative evaluation of spun-glass filter media by means of the fluorometric method described, as well as by conventional biological procedures, are presented. PMID:6031433

  17. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    NASA Technical Reports Server (NTRS)

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.

  18. Automation photometer of Hitachi U–2000 spectrophotometer with RS–232C–based computer

    PubMed Central

    Kumar, K. Senthil; Lakshmi, B. S.; Pennathur, Gautam

    1998-01-01

    The interfacing of a commonly used spectrophotometer, the Hitachi U-2000, through its RS-232C port to an IBM-compatible computer is described. The hardware for data acquisition was designed by suitably modifying readily available materials, and the software was written in the C programming language. The various steps involved in these procedures are elucidated in detail. The efficacy of the procedure was tested experimentally by running the visible spectrum of a cyanine dye. The spectrum was plotted on a printer connected to the computer. The spectrum was also plotted by transforming the abscissa to the wavenumber scale; this was carried out using another module written in C. The efficiency of the whole set-up has been calculated using standard procedures. PMID:18924834

  19. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
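
    A minimal sketch of the kind of FFT periodogram such a procedure rests on, applied to a synthetic monthly incidence series with a built-in 12-month cycle (real surveillance counts would replace it):

      # Locate the dominant period of an incidence series with an FFT periodogram.
      import numpy as np

      months = np.arange(84)                                 # 7 years, monthly
      rng = np.random.default_rng(2)
      series = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 84)

      detrended = series - series.mean()
      power = np.abs(np.fft.rfft(detrended)) ** 2            # periodogram
      freqs = np.fft.rfftfreq(len(detrended), d=1.0)         # cycles per month

      dominant = freqs[1:][np.argmax(power[1:])]             # skip zero frequency
      print(f"dominant period: {1.0 / dominant:.1f} months") # prints 12.0 here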

  20. Development of a data management front end for use with a LANDSAT based information system. [assessing gypsy moth defoliation damage in Pennsylvania]

    NASA Technical Reports Server (NTRS)

    Turner, B. J. (Principal Investigator)

    1982-01-01

    A user-friendly front end was constructed to facilitate access to the LANDSAT mosaic data base supplied by JPL and to process both LANDSAT and ancillary data. Archival and retrieval techniques were developed to efficiently handle this data base and make it compatible with the requirements of the Pennsylvania Bureau of Forestry. Procedures are ready for: (1) forming the forest/nonforest mask in ORSER compressed map format using GSFC-supplied classification procedures; (2) registering data from a new (defoliated) scene to the mask (which may involve mosaicking if the area encompasses two LANDSAT scenes); (3) producing a masked new data set using the MASK program; (4) analyzing this data set to produce a map showing degrees of defoliation, output on the Versatec plotter; and (5) producing color composite maps by a diazo-type process.

  1. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  2. Determination of the performance of the Kaplan hydraulic turbines through simplified procedure

    NASA Astrophysics Data System (ADS)

    Pădureanu, I.; Jurcu, M.; Campian, C. V.; Haţiegan, C.

    2018-01-01

    A simplified procedure has been developed, compared to the complex one recommended by IEC 60041 (i.e. index tests), for measuring the performance of hydraulic turbines. The simplified procedure determines the minimum and maximum powers, the efficiency at maximum power, and the evolution of power with head and flow, and establishes the correct relationship between runner/impeller blade angle and guide vane opening for the most efficient operation of double-regulated machines. The simplified procedure can be used for a rapid, partial estimation of the performance of hydraulic turbines for repair and maintenance work.
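
    At each operating point, the basic quantity behind such a procedure is the ratio of generated power to available hydraulic power. A one-screen illustration with invented measurements (not values from the paper):

      # Point efficiency of a hydraulic turbine: eta = P / (rho * g * Q * H).
      rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
      Q = 45.0                # measured discharge (m^3/s), hypothetical
      H = 22.0                # net head (m), hypothetical
      P = 8.6e6               # measured electrical power (W), hypothetical

      eta = P / (rho * g * Q * H)
      print(f"efficiency at this operating point: {eta:.3f}")   # ~0.885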

  3. Required number of records for ASCE/SEI 7 ground-motion scaling procedure

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2011-01-01

    The procedures and criteria in the 2006 IBC (International Council of Building Officials, 2006) and 2007 CBC (International Council of Building Officials, 2007) for the selection and scaling of ground motions for use in nonlinear response history analysis (RHA) of structures are based on ASCE/SEI 7 provisions (ASCE, 2005, 2010). According to ASCE/SEI 7, earthquake records should be selected from events whose magnitudes, fault distances, and source mechanisms comply with the maximum considered earthquake, and then scaled so that the average value of the 5-percent-damped response spectra for the set of scaled records is not less than the design response spectrum over the period range from 0.2Tn to 1.5Tn (where Tn is the fundamental vibration period of the structure). If at least seven ground motions are analyzed, the design values of engineering demand parameters (EDPs) are taken as the average of the EDPs determined from the analyses; if fewer than seven are analyzed, the design values are taken as the maximum values of the EDPs. ASCE/SEI 7 requires a minimum of three ground motions. These limits on the number of records are based on engineering experience rather than on a comprehensive evaluation. This study statistically examines the number of records required for the ASCE/SEI 7 procedure to provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic-perfectly-plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI 7 scaling procedure is applied to 480 sets of ground motions; the number of records in these sets varies from three to ten. The records in each set were selected either (i) randomly, (ii) considering their spectral shapes, or (iii) considering their spectral shapes and design spectral-acceleration value, A(Tn). As compared to benchmark (that is, "true") responses from unscaled records using a larger catalog of ground motions, it is demonstrated that the ASCE/SEI 7 scaling procedure is overly conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the EDPs, accompanied by reduced record-to-record variability of the responses. Consistency in accuracy and efficiency is achieved only if records are selected on the basis of their spectral shape and A(Tn).
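
    A sketch of the scaling check at the heart of that provision: choose a factor so that the average 5-percent-damped spectrum of the record set is not less than the design spectrum over 0.2Tn to 1.5Tn. As a simplification, one common factor is applied to the whole set, and both spectra are synthetic stand-ins for real response spectra:

      # Smallest common factor satisfying the ASCE/SEI 7-style spectrum check.
      import numpy as np

      periods = np.linspace(0.05, 5.0, 200)                 # period grid (s)
      design = 1.2 * np.exp(-periods / 2.0)                 # placeholder design spectrum
      rng = np.random.default_rng(3)
      records = design * rng.uniform(0.5, 1.5, (7, 200))    # 7 record spectra

      Tn = 1.0                                              # fundamental period (s)
      band = (periods >= 0.2 * Tn) & (periods <= 1.5 * Tn)

      avg = records[:, band].mean(axis=0)                   # set-average spectrum
      factor = np.max(design[band] / avg)                   # smallest factor that works
      assert np.all(factor * avg >= design[band] - 1e-12)
      print(f"common scale factor: {factor:.3f}")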

  4. Effectiveness and Efficiency of Reading Error Correction Procedures on the Reading Accuracy, Reading Fluency, and Reading Comprehension of Fourth Grade Students

    ERIC Educational Resources Information Center

    Wallace, Jennifer N.

    2013-01-01

    As education law evolves, educators are faced with difficult decisions regarding curriculum, prevention programs, and intervention strategies to use with their students. The use of evidence-based strategies for all academic skill areas, including reading, has become increasingly common in schools. Twenty-four 4th grade students participated in an…

  5. CAT Procedures for Passage-Based Tests.

    ERIC Educational Resources Information Center

    Thompson, Tony D.; Davey, Tim

    Methods to control the test construct and the efficiency of a computerized adaptive test (CAT) were studied in the context of a reading comprehension test given as a part of a battery of tests for college admission. A goal of the study was to create test scores that were interchangeable with those from a fixed form paper and pencil test. The first…

  6. An integrated model-based software for FUS in moving abdominal organs.

    PubMed

    Schwenke, Michael; Strehlow, Jan; Haase, Sabrina; Jenne, Juergen; Tanner, Christine; Langø, Thomas; Loeve, Arjo J; Karakitsios, Ioannis; Xiao, Xu; Levy, Yoav; Sat, Giora; Bezzi, Mario; Braunewell, Stefan; Guenther, Matthias; Melzer, Andreas; Preusser, Tobias

    2015-05-01

    Focused ultrasound surgery (FUS) is a non-invasive method for tissue ablation that has the potential for complete and controlled local tumour destruction with minimal side effects. The treatment of abdominal organs such as the liver, however, requires particular technological support in order to enable a safe, efficient and effective treatment. As FUS is applied from outside the patient's body, suitable imaging methods, such as magnetic resonance imaging or diagnostic ultrasound, are needed to guide and track the procedure. To facilitate an efficient FUS procedure in the liver, the organ motion during breathing and the partial occlusion by the rib cage need to be taken into account in real time, demanding a continuous patient-specific adaptation of the treatment configuration. Modelling the patient's respiratory motion and combining this with tracking data improves the accuracy of motion predictions. Modelling and simulation of the FUS effects within the body allows the use of treatment planning and has the potential to be used within therapy to increase knowledge about the patient status. This article describes integrated model-based software for patient-specific modelling and prediction for FUS treatments of moving abdominal organs.

  7. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
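
    The "predicted flow" device above is a first-order Taylor expansion about the last full analysis. A toy sketch in which a cheap algebraic function stands in for the flow solver (all names and values here are illustrative):

      # First-order Taylor prediction replacing repeated "solver" calls in a 1-D search.
      import numpy as np

      def flow_solver(x):                     # stand-in for an expensive analysis
          return np.array([x[0] ** 2 + x[1], x[0] * x[1]])

      def sensitivities(x):                   # quasi-analytical in the paper;
          return np.array([[2 * x[0], 1.0],   # exact Jacobian for this stand-in
                           [x[1], x[0]]])

      x0 = np.array([1.0, 2.0])
      f0, J0 = flow_solver(x0), sensitivities(x0)

      direction = np.array([1.0, -1.0])
      for step in (0.05, 0.10, 0.20):         # cheap predictions along the search
          dx = step * direction
          f_pred = f0 + J0 @ dx               # no new flow solution needed
          print(step, f_pred, flow_solver(x0 + dx))  # prediction vs. exact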

  8. Efficient preparation of shuffled DNA libraries through recombination (Gateway) cloning.

    PubMed

    Lehtonen, Soili I; Taskinen, Barbara; Ojala, Elina; Kukkurainen, Sampo; Rahikainen, Rolle; Riihimäki, Tiina A; Laitinen, Olli H; Kulomaa, Markku S; Hytönen, Vesa P

    2015-01-01

    Efficient and robust subcloning is essential for the construction of high-diversity DNA libraries in the field of directed evolution. We have developed a more efficient method for the subcloning of DNA-shuffled libraries by employing recombination cloning (Gateway). The Gateway cloning procedure was performed directly after the gene reassembly reaction, without additional purification and amplification steps, thus simplifying the conventional DNA shuffling protocols. Recombination-based cloning, directly from the heterologous reassembly reaction, conserved the high quality of the library and reduced the time required for the library construction. The described method is generally compatible for the construction of DNA-shuffled gene libraries.

  9. High-efficiency indium tin oxide/indium phosphide solar cells

    NASA Technical Reports Server (NTRS)

    Li, X.; Wanlass, M. W.; Gessert, T. A.; Emery, K. A.; Coutts, T. J.

    1989-01-01

    Improvements in the performance of indium tin oxide (ITO)/indium phosphide solar cells have been realized by the dc magnetron sputter deposition of n-ITO onto an epitaxial p/p(+) structure grown on commercial p(+) bulk substrates. The highest efficiency cells were achieved when the surface of the epilayer was exposed to an Ar/H2 plasma before depositing the bulk of the ITO in a more typical Ar/O2 plasma. With H2 processing, global efficiencies of 18.9 percent were achieved. It is suggested that the excellent performance of these solar cells results from the optimization of the doping, thickness, transport, and surface properties of the p-type base, as well as from better control over the ITO deposition procedure.

  10. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
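
    A sketch of the two-stage idea, with a plain FFT standing in for the RTFI: build a pitch-energy spectrum by harmonic grouping, then peak-pick preliminary estimates. Subharmonics also peak (110 Hz collects the harmonics of both notes below), which is precisely the kind of spurious estimate the second stage prunes:

      # Harmonic grouping plus peak picking for a two-note synthetic signal.
      import numpy as np

      sr = 8000
      t = np.arange(sr) / sr                            # one second of audio
      signal = sum(np.sin(2 * np.pi * f0 * h * t) / h   # notes at 220 and 330 Hz,
                   for f0 in (220.0, 330.0)             # four harmonics each
                   for h in (1, 2, 3, 4))

      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)

      candidates = np.arange(80.0, 500.0, 1.0)          # candidate fundamentals (Hz)

      def harmonic_energy(f0):                          # group energy at harmonics
          idx = [np.argmin(np.abs(freqs - h * f0)) for h in (1, 2, 3, 4)]
          return spectrum[idx].sum()

      energy = np.array([harmonic_energy(f0) for f0 in candidates])
      peaks = [f for f, a, b, c in zip(candidates[1:-1], energy[:-2],
                                       energy[1:-1], energy[2:])
               if b > a and b > c and b > 0.5 * energy.max()]
      print(peaks)    # preliminary estimates; stage two would prune subharmonics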

  11. Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger

    NASA Astrophysics Data System (ADS)

    Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun

    2011-04-01

    This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.

  12. High-throughput single-molecule force spectroscopy for membrane proteins

    NASA Astrophysics Data System (ADS)

    Bosshart, Patrick D.; Casagrande, Fabio; Frederix, Patrick L. T. M.; Ratera, Merce; Bippes, Christian A.; Müller, Daniel J.; Palacin, Manuel; Engel, Andreas; Fotiadis, Dimitrios

    2008-09-01

    Atomic force microscopy-based single-molecule force spectroscopy (SMFS) is a powerful tool for studying the mechanical properties, intermolecular and intramolecular interactions, unfolding pathways, and energy landscapes of membrane proteins. One limiting factor for the large-scale applicability of SMFS on membrane proteins is its low efficiency in data acquisition. We have developed a semi-automated high-throughput SMFS (HT-SMFS) procedure for efficient data acquisition. In addition, we present a coarse filter to efficiently extract protein unfolding events from large data sets. The HT-SMFS procedure and the coarse filter were validated using the proton pump bacteriorhodopsin (BR) from Halobacterium salinarum and the L-arginine/agmatine antiporter AdiC from the bacterium Escherichia coli. To screen for molecular interactions between AdiC and its substrates, we recorded data sets in the absence and in the presence of L-arginine, D-arginine, and agmatine. Altogether ~400 000 force-distance curves were recorded. Application of coarse filtering to this wealth of data yielded six data sets with ~200 (AdiC) and ~400 (BR) force-distance spectra in each. Importantly, the raw data for most of these data sets were acquired in one to two days, opening new perspectives for HT-SMFS applications.

  13. A novel procedure on next generation sequencing data analysis using text mining algorithm.

    PubMed

    Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen

    2016-05-13

    Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large-scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The output topics of the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
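
    A sketch of the topic-modeling step under one common convention for sequence data: treat the k-mers of each sequence as the "words" of a document, so that LDA topic mixtures become comparable features across strains. The sequences and parameter choices below are toys, not the paper's pipeline:

      # LDA on k-mer "documents" built from toy DNA sequences.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      sequences = ["ATGGCTAGCTAGGATC", "ATGGCTAGCTAGGTTC",
                   "TTACGGACCGGATCAA", "TTACGGACCGTATCAA"]

      def kmer_doc(seq, k=4):                    # overlapping 4-mer "words"
          return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

      X = CountVectorizer().fit_transform([kmer_doc(s) for s in sequences])

      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      theta = lda.fit_transform(X)               # per-sequence topic mixtures
      print(theta.round(2))                      # features for clustering strains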

  14. Development and Applications of a Stage Stacking Procedure

    NASA Technical Reports Server (NTRS)

    Kulkarni, Sameer; Celestina, Mark L.; Adamczyk, John J.

    2012-01-01

    The preliminary design of multistage axial compressors in gas turbine engines is typically accomplished with mean-line methods. These methods, which rely on empirical correlations, estimate compressor performance well near the design point, but may become less reliable off-design. For land-based applications of gas turbine engines, off-design performance estimates are becoming increasingly important, as turbine plant operators desire peaking or load-following capabilities and hot-day operability. The current work develops a one-dimensional stage stacking procedure, including a newly defined blockage term, which is used to estimate the off-design performance and operability range of a 13-stage axial compressor used in a power generating gas turbine engine. The new blockage term is defined to give mathematical closure on static pressure, and values of blockage are shown to collapse to curves as a function of stage inlet flow coefficient and corrected shaft speed. In addition to these blockage curves, the stage stacking procedure utilizes stage characteristics of ideal work coefficient and adiabatic efficiency. These curves are constructed using flow information extracted from computational fluid dynamics (CFD) simulations of groups of stages within the compressor. Performance estimates resulting from the stage stacking procedure are shown to match the results of CFD simulations of the entire compressor to within 1.6% in overall total pressure ratio and within 0.3 points in overall adiabatic efficiency. Utility of the stage stacking procedure is demonstrated by estimation of the minimum corrected speed which allows stable operation of the compressor. Further utility is demonstrated with a bleed sensitivity study, which estimates a bleed schedule to expand the compressor's operating range.
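
    A sketch of the stacking loop itself, under heavy simplifications: placeholder characteristic curves, a perfect gas, constant annulus area, and no blockage term. It shows only the structure of the march (read the stage characteristics from the inlet flow coefficient, apply the stage work, pass the outlet state to the next stage), not the paper's calibrated model:

      # One-dimensional stage stacking with placeholder stage characteristics.
      GAMMA, CP, R = 1.4, 1004.5, 287.0      # perfect-gas constants (air)

      def psi_of_phi(phi):                   # ideal work coefficient curve
          return 0.6 - 0.5 * phi             # invented linear characteristic

      def eta_of_phi(phi):                   # adiabatic efficiency curve
          return 0.90 - 2.0 * (phi - 0.5) ** 2   # invented parabola

      U = 250.0                              # mean blade speed (m/s), hypothetical
      mdot, area = 100.0, 0.5                # mass flow (kg/s), annulus area (m^2)
      T, P = 288.15, 101325.0                # stage-inlet conditions

      for stage in range(13):                # 13 stages, as in the compressor above
          rho = P / (R * T)
          phi = mdot / (rho * area * U)      # stage-inlet flow coefficient
          dh = psi_of_phi(phi) * U ** 2      # stage work per unit mass
          P *= (1.0 + eta_of_phi(phi) * dh / (CP * T)) ** (GAMMA / (GAMMA - 1.0))
          T += dh / CP                       # outlet state feeds the next stage
      print(f"overall pressure ratio: {P / 101325.0:.1f}")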

  15. Determination of Thermoelectric Module Efficiency: A Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; McCarty, Robin; Salvador, James R.

    2014-01-01

    The development of thermoelectrics (TE) for energy conversion is in the transition phase from laboratory research to device development. There is an increasing demand to accurately determine the module efficiency, especially for the power generation mode. For many thermoelectrics, the figure of merit, ZT, of the material sometimes cannot be fully realized at the device level. Reliable efficiency testing of thermoelectric modules is important to assess the device ZT and provide the end-users with realistic values on how much power can be generated under specific conditions. We conducted a general survey of efficiency testing devices and their performance. The results indicated the lack of industry standards and test procedures. This study included a commercial test system and several laboratory systems. Most systems are based on the heat flow meter method and some are based on the Harman method. They are usually reproducible in evaluating thermoelectric modules. However, cross-checking among different systems often showed large errors that are likely caused by unaccounted heat loss and thermal resistance. Efficiency testing is an important area for the thermoelectric community to focus on. A follow-up international standardization effort is planned.

  16. Efficient electroporation of DNA and protein into confluent and differentiated epithelial cells in culture.

    PubMed

    Deora, Ami A; Diaz, Fernando; Schreiner, Ryan; Rodriguez-Boulan, Enrique

    2007-10-01

    Electroporation-mediated delivery of molecules is a procedure widely used for transfecting complementary DNA into bacterial, mammalian and plant cells. This technique has proven very efficient for the introduction of macromolecules into cells in suspension culture and even into cells in their native tissue environment, e.g. retina and embryonic tissues. However, in spite of several attempts to date, there are no well-established procedures to electroporate polarized epithelial cells adhering to a tissue culture substrate (glass, plastic or filter). We report here the development of a simple procedure that uses available commercial equipment and works efficiently and reproducibly for a variety of epithelial cell lines in culture.

  17. Genome-wide RNAi screening identifies host restriction factors critical for in vivo AAV transduction

    PubMed Central

    Mano, Miguel; Ippodrino, Rudy; Zentilin, Lorena; Zacchigna, Serena; Giacca, Mauro

    2015-01-01

    Viral vectors based on the adeno-associated virus (AAV) hold great promise for in vivo gene transfer; several unknowns, however, still limit the vectors' broader and more efficient application. Here, we report the results of a high-throughput, whole-genome siRNA screening aimed at identifying cellular factors regulating AAV transduction. We identified 1,483 genes affecting vector efficiency more than 4-fold and up to 50-fold, either negatively or positively. Most of these factors have not previously been associated with AAV infection. The most effective siRNAs were independent of the virus serotype or analyzed cell type and were equally evident for single-stranded and self-complementary AAV vectors. A common characteristic of the most effective siRNAs was the induction of cellular DNA damage and activation of a cell cycle checkpoint. This information can be exploited for the development of more efficient AAV-based gene delivery procedures. Administration of the most effective siRNAs identified by the screening to the liver significantly improved in vivo AAV transduction efficiency. PMID:26305933

  18. Innovative use of global navigation satellite systems for flight inspection

    NASA Astrophysics Data System (ADS)

    Kim, Eui-Ho

    The International Civil Aviation Organization (ICAO) mandates flight inspection in every country to provide safety during flight operations. Among many criteria of flight inspection, airborne inspection of Instrument Landing Systems (ILS) is very important because the ILS is the primary landing guidance system worldwide. During flight inspection of the ILS, accuracy in ILS landing guidance is checked by using a Flight Inspection System (FIS). Therefore, a flight inspection system must have high accuracy in its positioning capability to detect any deviation so that accurate guidance of the ILS can be maintained. Currently, there are two Automated Flight Inspection Systems (AFIS). One is called Inertial-based AFIS, and the other one is called Differential GPS-based (DGPS-based) AFIS. The Inertial-based AFIS enables efficient flight inspection procedures, but its drawback is high cost because it requires a navigation-grade Inertial Navigation System (INS). On the other hand, the DGPS-based AFIS has relatively low cost, but flight inspection procedures require landing and setting up a reference receiver. Most countries use either one of the systems based on their own preferences. There are around 1200 ILS in the U.S., and each ILS must be inspected every 6 to 9 months. Therefore, it is important to manage the airborne inspection of the ILS in a very efficient manner. For this reason, the Federal Aviation Administration (FAA) mainly uses the Inertial-based AFIS, which has better efficiency than the DGPS-based AFIS in spite of its high cost. Obviously, the FAA spends tremendous resources on flight inspection. This thesis investigates the value of GPS and the FAA's augmentation to GPS for civil aviation called the Wide Area Augmentation System (or WAAS) for flight inspection. Because standard GPS or WAAS position outputs cannot meet the required accuracy for flight inspection, in this thesis, various algorithms are developed to improve the positioning ability of Flight Inspection Systems (FIS) by using GPS and WAAS in novel manners. The algorithms include Adaptive Carrier Smoothing (ACS), optimizing WAAS accuracy and stability, and reference point-based precise relative positioning for real-time and near-real-time applications. The developed systems are WAAS-aided FIS, WAAS-based FIS, and stand-alone GPS-based FIS. These systems offer both high efficiency and low cost, and they have different advantages over one another in terms of accuracy, integrity, and worldwide availability. The performance of each system is tested with experimental flight test data and shown to have accuracy that is sufficient for flight inspection and superior to the current Inertial-based AFIS.
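
    The Adaptive Carrier Smoothing algorithm itself is not spelled out above; the classic Hatch filter it builds on is standard, though: smooth the noisy code pseudorange with the low-noise carrier-phase increments. A sketch on simulated data (noise levels and the window length N are arbitrary choices):

      # Carrier smoothing (Hatch filter) of a simulated pseudorange.
      import numpy as np

      rng = np.random.default_rng(4)
      true_range = 2.0e7 + 800.0 * np.arange(600)       # receiver-satellite range (m)
      code = true_range + rng.normal(0.0, 3.0, 600)     # code pseudorange, noisy
      phase = true_range + rng.normal(0.0, 0.003, 600)  # carrier phase, low noise
                                                        # (integer ambiguity ignored)
      N = 100                                           # smoothing window (epochs)
      smoothed = np.empty(600)
      smoothed[0] = code[0]
      for k in range(1, 600):
          w = min(k + 1, N)
          smoothed[k] = (code[k] / w
                         + (w - 1) / w * (smoothed[k - 1] + phase[k] - phase[k - 1]))

      print(np.std(code - true_range), np.std(smoothed[N:] - true_range[N:]))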

  19. Electric and hybrid vehicles charge efficiency tests of ESB EV-106 lead acid batteries

    NASA Technical Reports Server (NTRS)

    Rowlette, J. J.

    1981-01-01

    Charge efficiencies were determined by measurements made under widely differing conditions of temperature, charge procedure, and battery age. The measurements were used to optimize charge procedures and to evaluate the concept of a modified, coulometric state of charge indicator. Charge efficiency determinations were made by measuring gassing rates and oxygen fractions. A novel, positive displacement gas flow meter which proved to be both simple and highly accurate is described and illustrated.

  20. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    PubMed

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
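
    An input-oriented, variable-returns-to-scale DEA score of the kind estimated above is one small linear program per nursing home: shrink the home's inputs by a factor theta while a convex combination of peer homes still produces at least its outputs. A sketch with invented data (the paper additionally bootstraps these scores to correct bias):

      # Input-oriented DEA under variable returns to scale via linear programming.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[40.0, 55.0, 30.0, 70.0, 45.0],        # inputs (rows) x units:
                    [10.0, 14.0,  8.0, 20.0, 11.0]])       # staff hours, beds
      Y = np.array([[300.0, 360.0, 200.0, 420.0, 330.0]])  # output: resident-days

      n = X.shape[1]
      for o in range(n):
          c = np.r_[1.0, np.zeros(n)]                       # minimize theta
          A_ub = np.vstack([np.hstack([-X[:, [o]], X]),     # X lam <= theta * x_o
                            np.hstack([np.zeros((1, 1)), -Y])])  # Y lam >= y_o
          b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
          A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)      # VRS: sum(lam) = 1
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(None, None)] + [(0.0, None)] * n)
          print(f"unit {o}: technical efficiency = {res.x[0]:.3f}")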

  1. Task analysis method for procedural training curriculum development.

    PubMed

    Riggle, Jakeb D; Wadman, Michael C; McCrory, Bernadette; Lowndes, Bethany R; Heald, Elizabeth A; Carstens, Patricia K; Hallbeck, M Susan

    2014-06-01

    A central venous catheter (CVC) is an important medical tool used in critical care and emergent situations. Integral to proper care in many circumstances, insertion of a CVC introduces the risk of central line-associated blood stream infections and mechanical adverse events; proper training is important for safe CVC insertion. Cognitive task analysis (CTA) methods have been successfully implemented in the medical field to improve the training of postgraduate medical trainees, but can be very time-consuming to complete and require a significant time commitment from many subject matter experts (SMEs). Many medical procedures such as CVC insertion are linear processes with well-documented procedural steps. These linear procedures may not require a traditional CTA to gather the information necessary to create a training curriculum. Accordingly, a novel, streamlined CTA method designed primarily to collect cognitive cues for linear procedures was developed to be used by medical professionals with minimal CTA training. This new CTA methodology required fewer trained personnel, fewer interview sessions, and less time commitment from SMEs than a traditional CTA. Based on this study, a streamlined CTA methodology can be used to efficiently gather cognitive information on linear medical procedures for the creation of resident training curricula and procedural skills assessments.

  2. Validated measurements of microbial loads on environmental surfaces in intensive care units before and after disinfecting cleaning.

    PubMed

    Frickmann, H; Bachert, S; Warnke, P; Podbielski, A

    2018-03-01

    Preanalytic aspects can make results of hygiene studies difficult to compare. Efficacy of surface disinfection was assessed with an evaluated swabbing procedure. A validated microbial screening of surfaces was performed in the patients' environment and from hands of healthcare workers on two intensive care units (ICUs) prior to and after a standardized disinfection procedure. From a pure culture, the recovery rate of the swabs for Staphylococcus aureus was 35%-64% and dropped to 0%-22% from a mixed culture with 10-times more Staphylococcus epidermidis than S. aureus. Microbial surface loads 30 min before and after the cleaning procedures were indistinguishable. The quality-ensured screening procedure proved that adequate hygiene procedures are associated with a low overall colonization of surfaces and skin of healthcare workers. Unchanged microbial loads before and after surface disinfection demonstrated the low additional impact of this procedure in the endemic situation when the pathogen load prior to surface disinfection is already low. Based on a validated screening system ensuring the interpretability and reliability of the results, the study confirms the efficiency of combined hand and surface hygiene procedures to guarantee low rates of bacterial colonization.

  3. Post-procedural Care in Interventional Radiology: What Every Interventional Radiologist Should Know—Part I: Standard Post-procedural Instructions and Follow-Up Care

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taslakian, Bedros, E-mail: Bedros.Taslakian@nyumc.org; Sridhar, Divya

    Interventional radiology (IR) has evolved into a full-fledged clinical specialty with attendant patient care responsibilities. Success in IR now requires development of a full clinical practice, including consultations, inpatient admitting privileges, and an outpatient clinic. In addition to technical excellence and innovation, maintaining a comprehensive practice is imperative for interventional radiologists to compete successfully for patients and referral bases. A structured approach to periprocedural care, including routine follow-up and early identification and management of complications, facilitates efficient and thorough management with an emphasis on quality and patient safety.

  4. LabVIEW-based control and data acquisition system for cathodoluminescence experiments.

    PubMed

    Bok, J; Schauer, P

    2011-11-01

    Computer automation of cathodoluminescence (CL) experiments using equipment developed in our laboratory is described. The equipment provides various experiments for CL efficiency, CL spectra, and CL time response studies. The automation was realized utilizing the graphical programming environment LabVIEW. The developed application software with procedures for equipment control and data acquisition during various CL experiments is presented. As the measured CL data are distorted by technical limitations of the equipment, such as equipment spectral sensitivity and time response, data correction algorithms were incorporated into the procedures. Some examples of measured data corrections are presented. © 2011 American Institute of Physics.
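
    One of the corrections mentioned above is easy to picture: dividing the raw counts by the relative spectral sensitivity of the detection chain. The following Python sketch is illustrative only (the original software is LabVIEW-based, and the sensitivity curve and guard threshold here are invented):

      import numpy as np

      def correct_spectrum(wavelengths_nm, measured_counts, sensitivity):
          """Divide raw CL counts by the relative spectral sensitivity of
          the detection chain, guarding against near-zero sensitivity."""
          counts = np.asarray(measured_counts, dtype=float)
          sens = np.asarray(sensitivity, dtype=float)
          corrected = np.where(sens > 1e-3, counts / np.maximum(sens, 1e-3), 0.0)
          return np.asarray(wavelengths_nm), corrected

      wl = np.linspace(350, 700, 8)
      raw = np.array([5, 40, 120, 200, 180, 90, 30, 4], dtype=float)
      sens = np.array([0.2, 0.5, 0.9, 1.0, 0.95, 0.7, 0.4, 0.1])
      print(correct_spectrum(wl, raw, sens)[1].round(1))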

  5. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.
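
    As an illustration of the solver family the abstract describes, the following Python sketch applies a multivariate Newton-Raphson iteration with a simple trust-region step limit to a toy two-equation balance system; the function names and the test equations are hypothetical and are not taken from the TIP code.

      import numpy as np

      def newton_trust_region(residual, jacobian, x0, delta=1.0, tol=1e-10, max_iter=50):
          """Multivariate Newton iteration with the step clipped to a
          trust-region radius delta, a crude guard for off-nominal starts."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              f = residual(x)
              if np.linalg.norm(f) < tol:
                  break
              step = np.linalg.solve(jacobian(x), -f)   # full Newton step
              norm = np.linalg.norm(step)
              if norm > delta:                          # clip to the trust region
                  step *= delta / norm
              x = x + step
          return x

      # Toy balance system: x0^2 + x1 = 3 and x0 + x1^2 = 5.
      F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
      J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
      print(newton_trust_region(F, J, [1.0, 1.0]))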

  6. Post-procedural Care in Interventional Radiology: What Every Interventional Radiologist Should Know-Part I: Standard Post-procedural Instructions and Follow-Up Care.

    PubMed

    Taslakian, Bedros; Sridhar, Divya

    2017-04-01

    Interventional radiology (IR) has evolved into a full-fledged clinical specialty with attendant patient care responsibilities. Success in IR now requires development of a full clinical practice, including consultations, inpatient admitting privileges, and an outpatient clinic. In addition to technical excellence and innovation, maintaining a comprehensive practice is imperative for interventional radiologists to compete successfully for patients and referral bases. A structured approach to periprocedural care, including routine follow-up and early identification and management of complications, facilitates efficient and thorough management with an emphasis on quality and patient safety.

  7. Efficiency Analysis: Enhancing the Statistical and Evaluative Power of the Regression-Discontinuity Design.

    ERIC Educational Resources Information Center

    Madhere, Serge

    An analytic procedure, efficiency analysis, is proposed for improving the utility of quantitative program evaluation for decision making. The three features of the procedure are explained: (1) for statistical control, it adopts and extends the regression-discontinuity design; (2) for statistical inferences, it de-emphasizes hypothesis testing in…

  8. 75 FR 34734 - Improving Market and Planning Efficiency Through Improved Software; Notice of Agenda and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ... Market and Planning Efficiency Through Improved Software; Notice of Agenda and Procedures for Staff Technical Conference, June 10, 2010. This notice establishes the agenda and procedures for the staff technical conference. Kimberly D. Bose, Secretary. Agenda for AD10-12 Staff Technical Conference on Enhanced Power...

  9. Local flaps: a real-time finite element based solution to the plastic surgery defect puzzle.

    PubMed

    Sifakis, Eftychios; Hellrung, Jeffrey; Teran, Joseph; Oliker, Aaron; Cutting, Court

    2009-01-01

    One of the most fundamental challenges in plastic surgery is the alteration of the geometry and topology of the skin. The specific decisions made by the surgeon concerning the size and shape of the tissue to be removed, and the subsequent closure of the resulting wound, may have a dramatic effect on the quality of life of the patient after the procedure is completed. The plastic surgeon must look at the defect created as an organic puzzle, designing the optimal pattern to close the hole aesthetically and efficiently. In the past, such skills were the distillation of years of hands-on practice on live patients, while relevant reference material was limited to two-dimensional illustrations. Practicing this procedure on a personal computer [1] has been largely impractical to date, but recent technological advances may come to challenge this limitation. We present a comprehensive real-time virtual surgical environment, based on finite element modeling and simulation of tissue cutting and manipulation. Our system demonstrates the fundamental building blocks of plastic surgery procedures on a localized tissue flap, and provides a proof of concept for larger simulation systems usable in the authoring of complex procedures on elaborate subject geometry.

  10. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
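
    The estimator itself is not reproduced here, but the core idea of imposing adaptive weights on both the loss and the penalty can be sketched loosely in Python: a robust (Huber) initial fit supplies the penalty weights, and the standard column-rescaling trick reduces the weighted lasso to an ordinary one. Every function and constant below is an illustrative assumption, not the authors' procedure.

      import numpy as np
      from sklearn.linear_model import HuberRegressor, Lasso

      def adaptive_robust_lasso(X, y, lam=0.1, gamma=1.0, eps=1e-6):
          """Adaptive lasso with weights w_j = 1/|beta_init_j|^gamma taken
          from a robust initial fit; outliers are down-weighted in the
          initial loss, noise variables are penalized more heavily."""
          init = HuberRegressor().fit(X, y)
          w = 1.0 / (np.abs(init.coef_) ** gamma + eps)
          lasso = Lasso(alpha=lam).fit(X / w, y)   # column-rescaling trick
          return lasso.coef_ / w                   # map back to original scale

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 10))
      beta = np.array([3.0, -2.0] + [0.0] * 8)
      y = X @ beta + rng.standard_normal(100)
      y[::17] += 15.0                              # a few gross outliers
      print(np.round(adaptive_robust_lasso(X, y), 2))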

  11. Combination of magnetic dispersive micro solid-phase extraction and supramolecular solvent-based microextraction followed by high-performance liquid chromatography for determination of trace amounts of cholesterol-lowering drugs in complicated matrices.

    PubMed

    Arghavani-Beydokhti, Somayeh; Rajabi, Maryam; Asghari, Alireza

    2017-07-01

    A novel, efficient, rapid, simple, sensitive, selective, and environmentally friendly method termed magnetic dispersive micro solid-phase extraction combined with supramolecular solvent-based microextraction (Mdμ-SPE-SSME) followed by high-performance liquid chromatography (HPLC) with UV detection is introduced for the simultaneous microextraction of cholesterol-lowering drugs in complicated matrices. In the first microextraction procedure, using layered double hydroxide (LDH)-coated Fe3O4 magnetic nanoparticles, an efficient sample cleanup is provided simply and rapidly, without the need for time-consuming centrifugation and elution steps. Desorption of the target analytes is then easily performed through dissolution of the LDH-coated magnetic nanoparticles containing the target analytes in an acidic solution. In the next step, an emulsification microextraction method based on a supramolecular solvent is used for preconcentration, ultimately enabling an appropriate determination of the target analytes in real samples. Under the optimal experimental conditions, the Mdμ-SPE-SSME-HPLC-UV procedure provides good linearity in the ranges of 1.0-1500 ng/mL, 1.5-2000 ng/mL, and 2.0-2000 ng/mL with coefficients of determination of at least 0.995, low limits of detection (0.3, 0.5, and 0.5 ng/mL), and good extraction repeatability (relative standard deviations below 7.8%, n = 5) in deionized water for rosuvastatin, atorvastatin, and gemfibrozil, respectively. Finally, the proposed method is successfully applied to the determination of the target analytes in complicated matrices.

  12. Measurement of filter paper activities of cellulase with microplate-based assay.

    PubMed

    Yu, Xiaoxiao; Liu, Yan; Cui, Yuxiao; Cheng, Qiyue; Zhang, Zaixiao; Lu, Jia Hui; Meng, Qingfan; Teng, Lirong; Ren, Xiaodong

    2016-01-01

    It is always a challenge to determine the total cellulase activity efficiently without reducing accuracy. The most common total cellulase activity assay is the filter paper assay (FPA) established by the International Union of Pure and Applied Chemistry (IUPAC). A new procedure to measure the FPA with a microplate-based assay was studied in this work, which followed the main idea of IUPAC of diluting the cellulase preparation to obtain a fixed glucose release. FPAs of six cellulase preparations were determined with the microplate-based assay. It is shown that the FPAs of the cellulases Youtell, RCconc, R-10, Lerkam, Yishui and Sinopharm were 67.9, 46.0, 46.1, 27.4, 7.6 and 8.0 IU/ml, respectively. There was no significant difference at the 95% confidence level between the FPA determined with the IUPAC procedure and the microplate-based assay. It can be concluded that the FPA can be determined by the microplate-based assay with the same accuracy as, and much greater efficiency than, the IUPAC procedure.

  13. Measurement of filter paper activities of cellulase with microplate-based assay

    PubMed Central

    Yu, Xiaoxiao; Liu, Yan; Cui, Yuxiao; Cheng, Qiyue; Zhang, Zaixiao; Lu, Jia Hui; Meng, Qingfan; Teng, Lirong; Ren, Xiaodong

    2015-01-01

    It is always a challenge to determine the total cellulase activity efficiently without reducing accuracy. The most common total cellulase activity assay is the filter paper assay (FPA) established by the International Union of Pure and Applied Chemistry (IUPAC). A new procedure to measure the FPA with a microplate-based assay was studied in this work, which followed the main idea of IUPAC of diluting the cellulase preparation to obtain a fixed glucose release. FPAs of six cellulase preparations were determined with the microplate-based assay. It is shown that the FPAs of the cellulases Youtell, RCconc, R-10, Lerkam, Yishui and Sinopharm were 67.9, 46.0, 46.1, 27.4, 7.6 and 8.0 IU/ml, respectively. There was no significant difference at the 95% confidence level between the FPA determined with the IUPAC procedure and the microplate-based assay. It can be concluded that the FPA can be determined by the microplate-based assay with the same accuracy as, and much greater efficiency than, the IUPAC procedure. PMID:26858572

  14. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
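
    The permutation-diffusion architecture discussed above can be illustrated with a deliberately simplified Python toy: one logistic-map orbit supplies both the permutation and the diffusion key stream, and each cipher pixel is chained to its predecessor. This sketch omits the paper's dynamic state-variable selection and plaintext dependence, the key and map parameters are invented, and it is not a secure cipher.

      import numpy as np

      def logistic_stream(x0, r, n):
          """n states of the logistic map x <- r * x * (1 - x)."""
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              xs[i] = x
          return xs

      def toy_encrypt(img, key=(0.3567, 3.99)):
          flat = img.astype(np.uint8).ravel()
          n = flat.size
          states = logistic_stream(key[0], key[1], 2 * n)
          perm = np.argsort(states[:n])                # permutation stage
          shuffled = flat[perm]
          ks = (states[n:] * 256).astype(np.uint8)     # diffusion key stream
          cipher = np.empty_like(shuffled)
          prev = 0
          for i in range(n):                           # chain pixels for diffusion
              cipher[i] = (int(shuffled[i]) + int(ks[i]) + prev) % 256
              prev = int(cipher[i])
          return cipher.reshape(img.shape), perm

      img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
      print(toy_encrypt(img)[0])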

  15. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
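
    Under independence, the Lancaster procedure has a compact closed form: each p-value is pushed through the inverse chi-square CDF with its own weight as the degrees of freedom, and the sum is referred to a chi-square with the summed weights (with all weights equal to 2 it reduces to Fisher's method). The correlated variant used in the paper needs an additional adjustment not shown here, and the gene weights below are hypothetical.

      import numpy as np
      from scipy.stats import chi2

      def lancaster_combined_p(p_values, weights):
          """Lancaster combination of independent p-values: T ~ chi2(sum w)."""
          p = np.asarray(p_values, dtype=float)
          w = np.asarray(weights, dtype=float)
          t = np.sum(chi2.ppf(1.0 - p, df=w))
          return chi2.sf(t, df=w.sum())

      # Combine gene-level SKAT p-values within a pathway, weighting each
      # gene (hypothetically) by its number of variants.
      print(lancaster_combined_p([0.01, 0.20, 0.03], weights=[4, 2, 6]))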

  16. Ontology-based vector space model and fuzzy query expansion to retrieve knowledge on medical computational problem solutions.

    PubMed

    Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos

    2007-01-01

    Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.

  17. Road landslide information management and forecasting system based on GIS.

    PubMed

    Wang, Wei Dong; Du, Xiang Gang; Xie, Cui Ming

    2009-09-01

    Given the characteristics of road geological hazards and their supervision, it is very important to develop a Road Landslide Information Management and Forecasting System based on Geographic Information Systems (GIS). The paper presents the system objectives, functions, component modules, and key techniques in the system development procedure. The system, based on the spatial and attribute information of road geological hazards, was developed and applied in Guizhou, a province of China where there are numerous and typical landslides. Using the system, communication managers can visually query all road landslide information based on the regional road network or on the monitoring network of an individual landslide. Furthermore, the system, integrating mathematical prediction models with the spatial analysis strengths of GIS, can assess and predict landslide development according to field monitoring data. Thus, it can efficiently assist road construction or management units in making decisions to control landslides and reduce human vulnerability.

  18. Improving Operating Room Efficiency via Reduction and Standardization of Video-Assisted Thoracoscopic Surgery Instrumentation.

    PubMed

    Friend, Tynan H; Paula, Ashley; Klemm, Jason; Rosa, Mark; Levine, Wilton

    2018-05-28

    Being the economic powerhouses of most large medical centers, operating rooms (ORs) require the highest levels of teamwork, communication, and efficiency in order to optimize patient safety and reduce hospital waste. A major component of OR waste comes from unused surgical instrumentation; instruments that are frequently prepared for procedures but never touched by the surgical team still require a full reprocessing cycle at the conclusion of the case. Based on our own previous successes in the perioperative domain, in this work we detail an initiative that reduces the surgical instrumentation waste of video-assisted thoracoscopic surgery (VATS) procedures by placing thoracotomy conversion instrumentation in a standby location and designing a specific instrument kit to be used solely for VATS cases. Our estimates suggest that this initiative will keep at least 91,800 pounds of unnecessary surgical instrumentation per year from cycling through our ORs and reprocessing department, while increasing OR team communication without sacrificing the highest standard of patient safety.

  19. Toward a better guard of coastal water safety-Microbial distribution in coastal water and their facile detection.

    PubMed

    Xie, Yunxuan; Qiu, Ning; Wang, Guangyi

    2017-05-15

    Prosperous development in marine-based tourism has raised increasing concerns over the sanitary quality of coastal waters with potential microbial contamination. The World Health Organization has set stringent standards for a list of pathogenic microorganisms posing potential threats to people with frequent coastal water exposure and has called for efficient detection procedures for facile pathogen identification. Inspection of surveys of the occurrence of marine pathogens at recreational beaches in recent years has reinforced the need for the development of a rapid identification procedure. In this review, we examine the possibility of recruiting uniform molecular assays to identify different marine pathogens and the feasibility of appropriate biomarkers, including enterochelin biosynthetic genes, for general toxicity assays. The focus is not only on bacterial pathogens but also on other groups of infectious pathogens. The ultimate goal is the development of a handy method to detect marine pathogens more efficiently and rapidly. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Acoustic scattering by arbitrary distributions of disjoint, homogeneous cylinders or spheres.

    PubMed

    Hesford, Andrew J; Astheimer, Jeffrey P; Waag, Robert C

    2010-05-01

    A T-matrix formulation is presented to compute acoustic scattering from arbitrary, disjoint distributions of cylinders or spheres, each with arbitrary, uniform acoustic properties. The generalized approach exploits the similarities in these scattering problems to present a single system of equations that is easily specialized to cylindrical or spherical scatterers. By employing field expansions based on orthogonal harmonic functions, continuity of pressure and normal particle velocity are directly enforced at each scatterer using diagonal, analytic expressions to eliminate the need for integral equations. The effect of a cylinder or sphere that encloses all other scatterers is simulated with an outer iterative procedure that decouples the inner-object solution from the effect of the enclosing object to improve computational efficiency when interactions among the interior objects are significant. Numerical results establish the validity and efficiency of the outer iteration procedure for nested objects. Two- and three-dimensional methods that employ this outer iteration are used to measure and characterize the accuracy of two-dimensional approximations to three-dimensional scattering of elevation-focused beams.

  1. FAST SIMULATION OF SOLID TUMORS THERMAL ABLATION TREATMENTS WITH A 3D REACTION DIFFUSION MODEL *

    PubMed Central

    BERTACCINI, DANIELE; CALVETTI, DANIELA

    2007-01-01

    An efficient computational method for near real-time simulation of thermal ablation of tumors via radio frequencies is proposed. Model simulations of the temperature field in a 3D portion of tissue containing the tumoral mass for different patterns of source heating can be used to design the ablation procedure. The availability of a very efficient computational scheme makes it possible to update the predicted outcome of the procedure in real time. In the algorithms proposed here, a discretization in space of the governing equations is followed by an adaptive time integration based on implicit multistep formulas. A modification of the ode15s MATLAB function, which uses Krylov-space iterative methods for the solution of the linear systems arising at each integration step, makes it possible to perform the simulations on a standard desktop computer for much finer grids than with the built-in ode15s. The proposed algorithm can be applied to a wide class of nonlinear parabolic differential equations. PMID:17173888
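
    To make the structure concrete, here is a hedged Python sketch of a one-dimensional analogue of the scheme (the paper treats the full 3D model in MATLAB): the governing equation is discretized in space, and an implicit multistep (BDF) integrator is given the banded Jacobian sparsity pattern so that the linear algebra at each step stays cheap. The grid size, diffusivity, and heating profile are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.sparse import diags

      nx = 200
      x = np.linspace(0.0, 0.05, nx)                     # 5 cm tissue segment
      dx = x[1] - x[0]
      D = 1.4e-7                                         # thermal diffusivity, m^2/s
      heat = 3.0 * np.exp(-((x - 0.025) / 0.004) ** 2)   # RF source pattern, K/s

      def rhs(t, u):                                     # u_t = D u_xx + heat(x)
          du = np.zeros_like(u)
          du[1:-1] = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 + heat[1:-1]
          return du                                      # ends held fixed (du = 0)

      pattern = diags([1, 1, 1], [-1, 0, 1], shape=(nx, nx))
      sol = solve_ivp(rhs, (0.0, 60.0), 37.0 * np.ones(nx),
                      method="BDF", jac_sparsity=pattern)
      print(sol.y[nx // 2, -1])                          # mid-tissue temperature after 60 s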

  2. Successful generation of structural information for fragment-based drug discovery.

    PubMed

    Öster, Linda; Tapani, Sofia; Xue, Yafeng; Käck, Helena

    2015-09-01

    Fragment-based drug discovery relies upon structural information for efficient compound progression, yet it is often challenging to generate structures with bound fragments. A summary of recent literature reveals that a wide repertoire of experimental procedures is employed to generate ligand-bound crystal structures successfully. We share in-house experience from setting up and executing fragment crystallography in a project that resulted in 55 complex structures. The ligands span five orders of magnitude in affinity and the resulting structures are made available to be of use, for example, for development of computational methods. Analysis of the results revealed that ligand properties such as potency, ligand efficiency (LE) and, to some degree, clogP influence the success of complex structure generation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Oceanic Flights and Airspace: Improving Efficiency by Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Fernandes, Alicia Borgman; Rebollo, Juan; Koch, Michael

    2016-01-01

    Oceanic operations suffer from multiple inefficiencies, including pre-departure planning that does not adequately consider uncertainty in the proposed trajectory, restrictions on the routes that a flight operator can choose for an oceanic crossing, time-consuming processes and procedures for amending en route trajectories, and difficulties exchanging data between Flight Information Regions (FIRs). These inefficiencies cause aircraft to fly suboptimal trajectories, burning fuel and time that could be conserved. A concept to support integration of existing and emerging capabilities and concepts is needed to transition to an airspace system that employs Trajectory Based Operations (TBO) to improve efficiency and safety in oceanic operations. This paper describes such a concept and the results of preliminary activities to evaluate the concept, including a stakeholder feedback activity, user needs analysis, and high level benefits analysis.

  4. Cell optoporation with a sub-15 fs and a 250-fs laser

    NASA Astrophysics Data System (ADS)

    Breunig, Hans Georg; Batista, Ana; Uchugonova, Aisada; König, Karsten

    2016-06-01

    We employed two commercially available femtosecond lasers, a Ti:sapphire and a ytterbium-based oscillator, to directly compare, from a user's practical point of view and within one common experimental setup, the efficiencies of transient laser-induced cell membrane permeabilization, i.e., of so-called optoporation. The experimental setup consisted of a modified multiphoton laser-scanning microscope employing high-NA focusing optics. An automatic cell irradiation procedure was realized with custom-made software that identified cell positions and controlled the relevant hardware components. The Ti:sapphire and ytterbium-based oscillators generated broadband sub-15-fs pulses around 800 nm and 250-fs pulses at 1044 nm, respectively. A higher optoporation rate and posttreatment viability were observed for the shorter fs pulses, confirming the importance of multiphoton effects for efficient optoporation.

  5. Efficiency and Safety of One-Step Procedure Combined Laparoscopic Cholecystectomy and Endoscopic Retrograde Cholangiopancreatography for Treatment of Cholecysto-Choledocholithiasis: A Randomized Controlled Trial.

    PubMed

    Liu, Zhiyi; Zhang, Luyao; Liu, Yanling; Gu, Yang; Sun, Tieliang

    2017-11-01

    We aimed to evaluate the efficiency and safety of a one-step procedure combining endoscopic retrograde cholangiopancreatography (ERCP) and laparoscopic cholecystectomy (LC) for the treatment of patients with cholecysto-choledocholithiasis. A prospective randomized study was performed on 63 consecutive cholecysto-choledocholithiasis patients between 2008 and 2011. The efficiency and safety of the one-step procedure were assessed by comparison with the two-step procedure of LC following ERCP + endoscopic sphincterotomy (EST). Outcomes including intraoperative features and postoperative features (length of stay and postoperative complications) were evaluated. The one- or two-step procedure of LC with ERCP + EST was successfully performed in all patients, and common bile duct stones were completely removed. Statistical analyses showed that length of stay and pulmonary infection rate were significantly lower in the test group than in the control group (P < 0.05), whereas no statistical difference in the other outcomes was found between the two groups (all P > 0.05). The one-step procedure of LC with ERCP + EST is therefore superior to the two-step procedure for the treatment of patients with cholecysto-choledocholithiasis with regard to hospital stay and the occurrence of pulmonary infections.

  6. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    PubMed

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
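
    The comparison reduces to building response vectors in two ways and estimating the stimulus information each carries. The sketch below shows the multineuronal variant with a naive plug-in mutual-information estimate on simulated Poisson spike counts; the firing-rate model is invented for illustration, and the plug-in estimator is biased upward for small samples.

      import numpy as np
      from collections import Counter

      def plugin_mutual_information(stimuli, responses):
          """Plug-in estimate of I(stimulus; response) in bits; responses
          are hashable tuples, e.g. spike counts of several neurons."""
          n = len(stimuli)
          cs, cr = Counter(stimuli), Counter(responses)
          mi = 0.0
          for (s, r), c in Counter(zip(stimuli, responses)).items():
              mi += (c / n) * np.log2(c * n / (cs[s] * cr[r]))
          return mi

      rng = np.random.default_rng(0)
      stimuli = list(rng.integers(0, 4, size=500))             # 4 visual stimuli
      # Multineuronal vectorization: spike counts of 3 neurons over 1 s.
      multi = [tuple(rng.poisson(2 + s, size=3)) for s in stimuli]
      print(round(plugin_mutual_information(stimuli, multi), 3))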

  7. Design and Field Procedures in the US National Comorbidity Survey Replication Adolescent Supplement (NCS-A)

    PubMed Central

    Kessler, Ronald C.; Avenevoli, Shelli; Costello, E. Jane; Green, Jennifer Greif; Gruber, Michael J.; Heeringa, Steven; Merikangas, Kathleen R.; Pennell, Beth-Ellen; Sampson, Nancy A.; Zaslavsky, Alan M.

    2009-01-01

    An overview is presented of the design and field procedures of the US National Comorbidity Survey Replication Adolescent Supplement (NCS-A), a US face-to-face household survey of the prevalence and correlates of DSM-IV mental disorders. The survey was based on a dual-frame design that included 904 adolescent residents of the households that participated in the US National Comorbidity Survey Replication (85.9% response rate) and 9,244 adolescent students selected from a nationally representative sample of 320 schools (74.7% response rate). After expositing the logic of dual-frame designs, comparisons are presented of sample and population distributions on Census socio-demographic variables and, in the school sample, school characteristics. These document only minor differences between the samples and the population. The results of statistical analysis of the bias-efficiency trade-off in weight trimming are then presented. These show that modest trimming meaningfully reduces mean squared error. Analysis of comparative sample efficiency shows that the household sample is more efficient than the school sample, leading to the household sample getting a higher weight relative to its size in the consolidated sample relative to the school sample. Taken together, these results show that the NCS-A is an efficient sample of the target population with good representativeness on a range of socio-demographic and geographic variables. PMID:19507169

  8. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  9. Effect of various pretreatment methods on anaerobic mixed microflora to enhance biohydrogen production utilizing dairy wastewater as substrate.

    PubMed

    Venkata Mohan, S; Lalit Babu, V; Sarma, P N

    2008-01-01

    The influence of different pretreatment methods applied to an anaerobic mixed inoculum was evaluated for selectively enriching a hydrogen (H2)-producing mixed culture using dairy wastewater as substrate. The experimental data showed the feasibility of molecular biohydrogen generation utilizing dairy wastewater as the primary carbon source through metabolic participation. However, the efficiency of H2 evolution and the substrate removal efficiency were found to depend on the type of pretreatment procedure applied to the parent inoculum. Among the studied pretreatment methods, the chemical pretreatment procedure (2-bromoethane sulphonic acid sodium salt (0.2 g/l); 24 h) enabled a higher H2 yield along with concurrent substrate removal efficiency. On the contrary, the heat-shock pretreatment procedure (100 degrees C; 1 h) resulted in a relatively low H2 yield. Compared to control experiments, all the adopted pretreatment methods showed higher H2 generation efficiency. In the combination experiments, the integration of pH (pH 3; adjusted with ortho-phosphoric acid; 24 h) and chemical pretreatment evidenced higher H2 production. Data envelopment analysis (DEA), a frontier analysis technique, was successfully applied to enumerate the relative efficiency of the different pretreatment methods studied, taking the pretreatment procedures as the input and the cumulative H2 production rate and substrate degradation rate as the two corresponding outputs.
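
    The DEA step at the end can be reproduced in outline with a standard input-oriented CCR model, solved as one linear program per decision-making unit. The sketch below uses scipy and invented input/output numbers purely for illustration; the paper's actual inputs and measured rates differ.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_efficiency(X, Y, o):
          """Input-oriented CCR efficiency (multiplier form) of unit o.
          X: (units, inputs); Y: (units, outputs). Maximize u.y_o subject
          to v.x_o = 1 and u.y_j - v.x_j <= 0 for every unit j."""
          n, m_out = X.shape[0], Y.shape[1]
          c = np.concatenate([-Y[o], np.zeros(X.shape[1])])
          A_eq = np.concatenate([np.zeros(m_out), X[o]])[None, :]
          res = linprog(c, A_ub=np.hstack([Y, -X]), b_ub=np.zeros(n),
                        A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
          return -res.fun

      # Hypothetical data: one input (unit procedure cost) and two outputs
      # (H2 production rate, substrate degradation rate) per pretreatment.
      X = np.ones((4, 1))
      Y = np.array([[7.2, 60.0], [5.1, 55.0], [3.8, 40.0], [6.0, 52.0]])
      for o in range(4):
          print(o, round(dea_ccr_efficiency(X, Y, o), 3))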

  10. An Efficient Multistrategy DNA Decontamination Procedure of PCR Reagents for Hypersensitive PCR Applications

    PubMed Central

    Pruvost, Mélanie; Bennett, E. Andrew; Grange, Thierry; Geigl, Eva-Maria

    2010-01-01

    Background PCR amplification of minute quantities of degraded DNA for ancient DNA research, forensic analyses, wildlife studies and ultrasensitive diagnostics is often hampered by contamination problems. The extent of these problems is inversely related to DNA concentration and target fragment size and concern (i) sample contamination, (ii) laboratory surface contamination, (iii) carry-over contamination, and (iv) contamination of reagents. Methodology/Principal Findings Here we performed a quantitative evaluation of current decontamination methods for these last three sources of contamination, and developed a new procedure to eliminate contaminating DNA contained in PCR reagents. We observed that most current decontamination methods are either not efficient enough to degrade short contaminating DNA molecules, rendered inefficient by the reagents themselves, or interfere with the PCR when used at doses high enough to eliminate these molecules. We also show that efficient reagent decontamination can be achieved by using a combination of treatments adapted to different reagent categories. Our procedure involves γ- and UV-irradiation and treatment with a mutant recombinant heat-labile double-strand specific DNase from the Antarctic shrimp Pandalus borealis. Optimal performance of these treatments is achieved in narrow experimental conditions that have been precisely analyzed and defined herein. Conclusions/Significance There is not a single decontamination method valid for all possible contamination sources occurring in PCR reagents and in the molecular biology laboratory and most common decontamination methods are not efficient enough to decontaminate short DNA fragments of low concentration. We developed a versatile multistrategy decontamination procedure for PCR reagents. We demonstrate that this procedure allows efficient reagent decontamination while preserving the efficiency of PCR amplification of minute quantities of DNA. PMID:20927390

  11. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  12. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed ... technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet

  13. Planning of electroporation-based treatments using Web-based treatment-planning software.

    PubMed

    Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan

    2013-11-01

    Electroporation-based treatment combining high-voltage electric pulses and poorly permeant cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient, and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size, and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, the electric field distribution is computed, the position of the electrodes and the voltage to be applied are optimized using the 3D model, and a downloadable treatment plan is made available to the user.

  14. An Improved Single-Step Cloning Strategy Simplifies the Agrobacterium tumefaciens-Mediated Transformation (ATMT)-Based Gene-Disruption Method for Verticillium dahliae.

    PubMed

    Wang, Sheng; Xing, Haiying; Hua, Chenlei; Guo, Hui-Shan; Zhang, Jie

    2016-06-01

    The soilborne fungal pathogen Verticillium dahliae infects a broad range of plant species to cause severe diseases. The availability of Verticillium genome sequences has provided opportunities for large-scale investigations of individual gene function in Verticillium strains using Agrobacterium tumefaciens-mediated transformation (ATMT)-based gene-disruption strategies. Traditional ATMT vectors require multiple cloning steps and elaborate characterization procedures to achieve successful gene replacement; thus, these vectors are not suitable for high-throughput ATMT-based gene deletion. Several advancements have been made that either involve simplification of the steps required for gene-deletion vector construction or increase the efficiency of the technique for rapid recombinant characterization. However, an ATMT binary vector that is both simple and efficient is still lacking. Here, we generated a USER-ATMT dual-selection (DS) binary vector, which combines both the advantages of the USER single-step cloning technique and the efficiency of the herpes simplex virus thymidine kinase negative-selection marker. Highly efficient deletion of three different genes in V. dahliae using the USER-ATMT-DS vector enabled verification that this newly-generated vector not only facilitates the cloning process but also simplifies the subsequent identification of fungal homologous recombinants. The results suggest that the USER-ATMT-DS vector is applicable for efficient gene deletion and suitable for large-scale gene deletion in V. dahliae.

  15. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective, and efficient for parametric uncertainty analysis, and its results provide useful information that helps to understand model behaviors and improve model simulations.
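
    The screening stage of such a framework can be pictured with a deliberately simplified one-at-a-time (OAT) scheme. The LH-OAT method used in the paper combines Latin hypercube base points with OAT perturbations; the Python sketch below substitutes plain random base points and a toy model, so it is an illustration of the idea rather than the method itself.

      import numpy as np

      def oat_screening(model, lower, upper, n_points=8, rel_step=0.05, seed=0):
          """Mean absolute elementary effect of each parameter, averaged
          over random base points inside the box [lower, upper]."""
          rng = np.random.default_rng(seed)
          lower = np.asarray(lower, dtype=float)
          upper = np.asarray(upper, dtype=float)
          k = lower.size
          effects = np.zeros(k)
          for _ in range(n_points):
              base = lower + rng.random(k) * (upper - lower)
              f0 = model(base)
              for j in range(k):
                  pert = base.copy()
                  pert[j] += rel_step * (upper[j] - lower[j])
                  effects[j] += abs(model(pert) - f0)
          return effects / n_points

      # Toy response with two influential parameters and one inert one.
      f = lambda p: p[0] ** 2 + 0.5 * p[1] + 0.0 * p[2]
      print(oat_screening(f, [0, 0, 0], [1, 1, 1]).round(3))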

  16. Newly Elaborated Multipurpose Polymer Electrolyte Encompassing RTILs for Smart Energy-Efficient Devices.

    PubMed

    Nair, Jijeesh R; Porcarelli, Luca; Bella, Federico; Gerbaldi, Claudio

    2015-06-17

    Profoundly ion-conducting, self-standing, and tack-free ethylene oxide-based polymer electrolytes encompassing a room-temperature ionic liquid (RTIL) with specific amounts of lithium salt are successfully prepared via a rapid and easily upscalable process including a UV irradiation step. All prepared materials are thoroughly characterized in terms of their physical, chemical, and morphological properties and eventually galvanostatically cycled in lab-scale lithium batteries (LIBs), exploiting a novel direct polymerization procedure to obtain intimate electrode/electrolyte interfacial characteristics. The promising multipurpose characteristics of the newly elaborated materials are demonstrated by testing them in dye-sensitized solar cells (DSSCs), where the introduction of the iodine/iodide-based redox mediator into the polymer matrix assured the functioning of a lab-scale test cell with conversion efficiency exceeding 6% at 1 sun. The reported results highlight the promising prospects of the material to be successfully implemented as a stable, durable, and efficient electrolyte in next-generation energy conversion and storage devices.

  17. Efficient extraction and preparative separation of four main isoflavonoids from Dalbergia odorifera T. Chen leaves by deep eutectic solvents-based negative pressure cavitation extraction followed by macroporous resin column chromatography.

    PubMed

    Li, Lu; Liu, Ju-Zhao; Luo, Meng; Wang, Wei; Huang, Yu-Yan; Efferth, Thomas; Wang, Hui-Mei; Fu, Yu-Jie

    2016-10-15

    In this study, a green and efficient deep eutectic solvent-based negative pressure cavitation-assisted extraction (DES-NPCE) followed by macroporous resin column chromatography was developed to extract and separate four main isoflavonoids, i.e., prunetin, tectorigenin, genistein and biochanin A, from Dalbergia odorifera T. Chen leaves. The extraction procedure was optimized systematically by single-factor experiments and a Box-Behnken experimental design combined with response surface methodology. The maximum extraction yields of prunetin, tectorigenin, genistein and biochanin A reached 1.204, 1.057, 0.911 and 2.448 mg/g dry weight, respectively. Moreover, the direct enrichment and separation of the four isoflavonoids in the DES extraction solution was successfully achieved by macroporous resin AB-8 with recovery yields of more than 80%. The present study provides a convenient and efficient method for the green extraction and preparative separation of active compounds from plants. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795
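
    One widely used form of the conic scalarizing function augments a weighted sum of deviations from a reference point with a weighted l1 term. The Python sketch below minimizes that scalar objective for a toy bi-objective problem and shows how moving the reference point steers the solution, mimicking a single interaction step; the exact functional form and the admissible (w, alpha) conditions in the paper may differ from this simplified version.

      import numpy as np
      from scipy.optimize import minimize

      def conic_scalarization(fs, r, w, alpha):
          """Scalar objective sum_i w_i*(f_i - r_i) + alpha*sum_i |f_i - r_i|,
          one common form of the conic scalarizing function (alpha < min w)."""
          def g(x):
              d = np.array([f(x) for f in fs]) - r
              return float(w @ d + alpha * np.abs(d).sum())
          return g

      # Bi-objective toy: f1 = (x-1)^2, f2 = (x+1)^2.
      fs = [lambda x: (x[0] - 1.0) ** 2, lambda x: (x[0] + 1.0) ** 2]
      for r in (np.array([0.0, 4.0]), np.array([4.0, 0.0])):
          res = minimize(conic_scalarization(fs, r, np.ones(2), 0.5),
                         x0=np.zeros(1), method="Nelder-Mead")
          print(r, np.round(res.x, 3))   # different reference points steer differently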

  19. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.

  20. Navier-Stokes and viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.; Vandalsem, William R.

    1989-01-01

    Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.

  1. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  2. Efficient Procedure for the Numerical Calculation of Harmonic Vibrational Frequencies Based on Internal Coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90, and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm-1 from those obtained from Cartesian coordinates.
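
    The double-differentiation bookkeeping is easy to picture: a full central-difference Hessian over k coordinates costs 2k^2 + 1 single-point energies for the standard stencil, so working with the k = 3N - 6 internal coordinates instead of all 3N Cartesian ones saves energy evaluations at a rate that grows with N (the 36N - 30 figure quoted above reflects the paper's own stencil and bookkeeping, and the GF step that turns the Hessian into frequencies is not shown). A minimal Python sketch over a hypothetical three-coordinate, Z-matrix-style energy function:

      import numpy as np

      def hessian_internal(energy, q0, h=1e-3):
          """Central-difference Hessian of energy(q) in k internal coordinates;
          uses 1 + 2k + 2k(k-1) = 2k^2 + 1 single-point energies."""
          q0 = np.asarray(q0, dtype=float)
          k = q0.size
          H = np.zeros((k, k))
          e0 = energy(q0)
          for i in range(k):
              ei = np.zeros(k)
              ei[i] = h
              H[i, i] = (energy(q0 + ei) - 2 * e0 + energy(q0 - ei)) / h**2
              for j in range(i):
                  ej = np.zeros(k)
                  ej[j] = h
                  H[i, j] = H[j, i] = (energy(q0 + ei + ej) - energy(q0 + ei - ej)
                                       - energy(q0 - ei + ej) + energy(q0 - ei - ej)) / (4 * h * h)
          return H

      # Toy triatomic in internal coordinates: two bond lengths and one angle.
      E = lambda q: 0.5 * (q[0] - 1.0) ** 2 + 0.5 * (q[1] - 1.0) ** 2 + 0.2 * (q[2] - 1.9) ** 2
      print(np.round(hessian_internal(E, [1.0, 1.0, 1.9]), 4))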

  3. PHYSICO: An UNIX based Standalone Procedure for Computation of Individual and Group Properties of Protein Sequences.

    PubMed

    Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K

    2014-01-01

    In the genomic and proteomic era, efficient and automated analyses of the sequence properties of proteins have become an important task in bioinformatics. There are general public licensed (GPL) software tools to perform a part of the job. However, computations of the mean properties of a large number of orthologous sequences are not possible with the above-mentioned GPL sets. Further, there is no GPL software or server which can calculate window-dependent sequence properties for a large number of sequences in a single run. With a view to overcoming the above limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run based on the type of input provided, either in RAW-FASTA or BLOCK-FASTA format, and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index, and GRAVY; b) column-based compositions, variability, and difference matrix; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free, and user-friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in.
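
    A few of the per-sequence properties listed above are simple enough to sketch directly. The Python fragment below is not PHYSICO itself, just an illustration of the quantities involved: amino acid composition, the Kyte-Doolittle GRAVY score, and a window-dependent GRAVY profile.

      # Kyte-Doolittle hydropathy values, used for the GRAVY score.
      KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
            'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
            'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
            'Y': -1.3, 'V': 4.2}

      def gravy(seq):
          """Grand average of hydropathy: mean Kyte-Doolittle value."""
          seq = seq.upper()
          return sum(KD.get(aa, 0.0) for aa in seq) / len(seq)

      def composition(seq):
          """Percent composition of each amino acid in the sequence."""
          seq = seq.upper()
          return {aa: 100.0 * seq.count(aa) / len(seq) for aa in sorted(set(seq))}

      def windowed_gravy(seq, window=9):
          """Window-dependent property profile along the sequence."""
          return [gravy(seq[i:i + window]) for i in range(len(seq) - window + 1)]

      seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
      print(round(gravy(seq), 3))
      print(composition(seq))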

  4. PHYSICO: An UNIX based Standalone Procedure for Computation of Individual and Group Properties of Protein Sequences

    PubMed Central

    Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K

    2014-01-01

    In the genomic and proteomic era, efficient and automated analyses of the sequence properties of proteins have become an important task in bioinformatics. There are general public licensed (GPL) software tools to perform a part of the job. However, computations of the mean properties of a large number of orthologous sequences are not possible with the above-mentioned GPL sets. Further, there is no GPL software or server which can calculate window-dependent sequence properties for a large number of sequences in a single run. With a view to overcoming the above limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run based on the type of input provided, either in RAW-FASTA or BLOCK-FASTA format, and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index, and GRAVY; b) column-based compositions, variability, and difference matrix; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free, and user-friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. Availability: PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in. PMID:24616564

  5. Hybrid Particle Swarm Optimization for Hybrid Flowshop Scheduling Problem with Maintenance Activities

    PubMed Central

    Li, Jun-qing; Pan, Quan-ke; Mao, Kun

    2014-01-01

    A hybrid algorithm which combines particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover operators and mutation operators are investigated. In addition, an efficient multiple-insert mutation operator is developed for enhancing the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The experimental parameters of the canonical PSO are tuned in detail. The proposed algorithm is tested on variations of the 77 benchmark problems of Carlier and Néron. Detailed comparisons with the present efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm. PMID:24883414
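
    The hybrid can be sketched in continuous form: a canonical PSO velocity update plus an ILS-style perturb-and-accept refinement of the incumbent best solution. This is a schematic analogue only; the paper's algorithm operates on discrete HFS schedules with crossover and insert-mutation operators, and all constants below are invented.

      import numpy as np

      def pso_ils(f, dim, n_particles=20, iters=100, seed=1):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
          gbest = pbest[pbest_f.argmin()].copy()
          gbest_f = pbest_f.min()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = x + v
              fx = np.apply_along_axis(f, 1, x)
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              if pbest_f.min() < gbest_f:
                  gbest, gbest_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
              trial = gbest + rng.normal(0.0, 0.1, dim)   # ILS-style perturbation
              ft = f(trial)
              if ft < gbest_f:                            # accept only improvements
                  gbest, gbest_f = trial, ft
          return gbest, gbest_f

      sphere = lambda z: float(np.sum(z ** 2))
      print(pso_ils(sphere, dim=5))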

  6. Efficiency and multifractality analysis of CSI 300 based on multifractal detrending moving average algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao

    2013-03-01

    We apply multifractal detrending moving average (MFDMA) analysis to investigate and compare the efficiency and multifractality of the 5-min high-frequency China Securities Index 300 (CSI 300). The results show that the CSI 300 market moved closer to weak-form efficiency after the introduction of the CSI 300 index future. We find that the CSI 300 is characterized by multifractality, and that there is less complexity and risk after the index future was introduced. With shuffling, surrogating and extreme-value-removal procedures, we show that extreme events and fat-tailed distributions are the main origins of the multifractality. Besides, we discuss the knotting phenomenon in multifractality, and find that the scaling range and the irregular fluctuations at large scales in the Fq(s) versus s plot can cause a knot.
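
    For readers unfamiliar with the fluctuation function mentioned above, the following is a compact sketch of backward MFDMA (moving-average detrending), written for this summary from the standard formulation; it makes no attempt to reproduce the paper's parameter choices. The shuffling test then compares h(q) for the original series against a shuffled copy (np.random.permutation), whose multifractal width should collapse if fat tails drive the multifractality.

    ```python
    import numpy as np

    def mfdma_fq(x, scales, qs):
        """Backward MFDMA fluctuation function F_q(s); the generalized Hurst
        exponent h(q) is the slope of log F_q(s) against log s.
        Scales must be much shorter than len(x)."""
        y = np.cumsum(x - np.mean(x))                    # profile
        F = np.empty((len(qs), len(scales)))
        for j, s in enumerate(scales):
            ma = np.convolve(y, np.ones(s) / s, mode="valid")  # backward MA
            resid = y[s - 1:] - ma                       # detrended residual
            nseg = len(resid) // s
            f2 = np.array([np.mean(resid[v*s:(v+1)*s] ** 2) for v in range(nseg)])
            for i, q in enumerate(qs):
                F[i, j] = (np.exp(0.5 * np.mean(np.log(f2))) if abs(q) < 1e-9
                           else np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
        return F

    scales = np.unique(np.logspace(1, 3, 20).astype(int))
    qs = np.linspace(-4, 4, 17)
    ```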

  7. Conversion of Fibroblasts to Parvalbumin Neurons by One Transcription Factor, Ascl1, and the Chemical Compound Forskolin*

    PubMed Central

    Shi, Zixiao; Zhang, Juan; Chen, Shuangquan; Li, Yanxin; Lei, Xuepei; Qiao, Huimin; Zhu, Qianwen; Hu, Baoyang; Zhou, Qi; Jiao, Jianwei

    2016-01-01

    Abnormalities in parvalbumin (PV)-expressing interneurons cause neurodevelopmental disorders such as epilepsy, autism, and schizophrenia. Unlike other types of neurons that can be efficiently differentiated from pluripotent stem cells, PV neurons are only minimally generated using conventional differentiation strategies. In this study, we developed an adenovirus-based transdifferentiation strategy that incorporates an additional chemical compound for the efficient generation of induced PV (iPV) neurons. The chemical compound forskolin combined with Ascl1 converted ∼80% of mouse fibroblasts into iPV neurons. The iPV neurons generated by this procedure matured 5–7 days post-infection and were characterized by electrophysiological properties and known neuronal markers, such as PV and GABA. Our studies therefore identify an efficient approach for generating PV neurons. PMID:27137935

  8. Nonlocal continuum analysis of a nonlinear uniaxial elastic lattice system under non-uniform axial load

    NASA Astrophysics Data System (ADS)

    Hérisson, Benjamin; Challamel, Noël; Picandet, Vincent; Perrot, Arnaud

    2016-09-01

    The static behavior of the Fermi-Pasta-Ulam (FPU) axial chain under distributed loading is examined. The FPU system examined in the paper is a nonlinear elastic lattice with linear and quadratic spring interactions. A dimensionless parameter controls the possible loss of convexity of the associated quadratic and cubic energy. Exact analytical solutions based on Hurwitz zeta functions are developed in the presence of linear static loading. It is shown that this nonlinear lattice possesses scale effects and possible localization properties in the absence of energy convexity. A continuous approach is then developed to capture the main phenomena observed in the discrete axial problem. The associated continuum is built from a continualization procedure based principally on the asymptotic expansion of the difference operators involved in the lattice problem, and it is an enriched gradient-based or nonlocal axial medium. A Taylor-based and a rational differential method are both considered in the continualization procedures to approximate the FPU lattice response. The Padé approximant used in the continualization procedure fits the response of the discrete system efficiently, even in the vicinity of the limit load when the non-convex FPU energy is examined. It is concluded that the FPU lattice system behaves as a nonlocal axial system under dynamic as well as static loading.
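
    The continualization step referred to above rests on a standard operator identity; the following is a minimal sketch in our own notation (lattice spacing $a$, displacement field $u$), not the paper's full enriched equations. Expanding the central difference,

    $$\frac{u_{i+1} - 2u_i + u_{i-1}}{a^2} = u'' + \frac{a^2}{12}\,u'''' + O(a^4),$$

    a Taylor-based continuum truncates this series, while the rational (Padé) variant replaces it by

    $$\frac{u_{i+1} - 2u_i + u_{i-1}}{a^2} \;\approx\; \frac{\partial_x^2}{1 - \dfrac{a^2}{12}\,\partial_x^2}\,u,$$

    whose inverted form, the operator $\bigl(1 - \tfrac{a^2}{12}\partial_x^2\bigr)$ applied to the continuum equation, is what makes the resulting axial medium nonlocal.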

  9. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    PubMed

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in high-purity germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-source detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point-source detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of uncertainty in their real magnitude on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated efficiency-transfer methods. This agreement shows that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second efficiency-transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which full-energy peak efficiencies were evaluated in the energy range 60-2000 keV for a typical coaxial p-type HPGe detector and several source configurations: point sources located at various distances from the detector and a cylindrical box containing three matrices.
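
    In the notation used here (ours, not the paper's), the transfer procedures can be read off one relation: the transferred full-energy-peak efficiency for configuration $C$ at energy $E$ is estimated from a measured reference efficiency scaled by the ratio of Monte Carlo efficiencies,

    $$\varepsilon(C, E) \;\approx\; \varepsilon_{\mathrm{exp}}(C_{\mathrm{ref}}, E_{\mathrm{ref}})\,\frac{\varepsilon_{\mathrm{MC}}(C, E)}{\varepsilon_{\mathrm{MC}}(C_{\mathrm{ref}}, E_{\mathrm{ref}})}.$$

    Taking $E = E_{\mathrm{ref}}$ recovers the usual constant-energy transfer (the second procedure), while $E \neq E_{\mathrm{ref}}$ with a point reference source gives the third, "virtual reference" procedure; the first procedure is the direct Monte Carlo calculation without a reference.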

  10. 77 FR 13887 - Energy Conservation Program: Test Procedures for Residential Clothes Washers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... Energy Conservation Program: Test Procedures for Residential Clothes Washers; Final Rule. Federal... Conservation Program: Test Procedures for Residential Clothes Washers AGENCY: Office of Energy Efficiency and...) establishes new test procedures for residential clothes washers under the Energy Policy and Conservation Act...

  11. 77 FR 24341 - Energy Conservation Program: Test Procedures for Residential Clothes Washers; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-24

    ... Program: Test Procedures for Residential Clothes Washers; Correction AGENCY: Office of Energy Efficiency... final rule establishing new and amended test procedures for residential clothes washers, published in... Energy (DOE) erroneously referenced the new test procedure, rather than the currently effective test...

  12. Simultaneous Inference For The Mean Function Based on Dense Functional Data

    PubMed Central

    Cao, Guanqun; Yang, Lijian; Todem, David

    2012-01-01

    A polynomial spline estimator is proposed for the mean function of dense functional data, together with a simultaneous confidence band that is asymptotically correct. In addition, the spline estimator and its accompanying confidence band enjoy oracle efficiency in the sense that they are asymptotically the same as if all random trajectories were observed entirely and without error. The confidence band is also extended to the difference of the mean functions of two populations of functional data. Simulation experiments provide strong evidence corroborating the asymptotic theory, and the computation is efficient. The confidence-band procedure is illustrated by analyzing near-infrared spectroscopy data. PMID:22665964

  13. Dermatology practice management enhancement: implications for dermatology in the age of managed care.

    PubMed

    Nestor, M S

    2000-09-01

    Health care delivery in the United States has changed dramatically during the past 10 years. Dermatologists are faced with challenging changes in the way they learn new skills, practice, and provide dermatologic care. Dermatologists can survive and flourish in this environment if they learn proper management and enhancement skills, including proper coding and documentation, regulatory compliance, and new levels of practice effectiveness and efficiency. Dermatologists can also offer their patients the benefit of cosmetic procedures and ethical office-based dispensing. Greater future unification of the specialty will allow dermatology to flourish and to demonstrate its necessity, efficiency, and cost-effectiveness.

  14. A route for efficient non-resonance cloaking by using multilayer dielectric coating

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Semouchkina, Elena

    2013-03-01

    An approach for designing transmission cloaks using ordinary dielectrics instead of meta- and plasmonic materials is proposed and demonstrated through the development of a multi-layer cloak for hiding cylindrical objects larger than the wavelengths of the incident radiation. The parameters of the cloak layers were found using a Genetic Algorithm-based optimization procedure, which employed as its fitness function the reciprocal of the total scattering cross-width of the cloaked target, derived from the solution of the Helmholtz equation. The proposed cloak demonstrated better cloaking efficiency than a similarly sized metamaterial cloak designed using the transformation optics relations.

  15. NASA satellite communications application research, phase 2 addendum. Efficient high power, solid state amplifier for EHF communications

    NASA Technical Reports Server (NTRS)

    Benet, James

    1994-01-01

    This document is an addendum to the NASA Satellite Communications Application Research (SCAR) Phase 2 Final Report, 'Efficient High Power, Solid State Amplifier for EHF Communications.' This report describes the work performed from 1 August 1993 to 11 March 1994, under contract number NASW-4513. During this reporting period an array of transistor amplifiers was repaired by replacing all MMIC amplifier chips. The amplifier array was then tested using three different feedhorn configurations. Descriptions, procedures, and results of this testing are presented in this report, and conclusions are drawn based on the test results obtained.

  16. Burst switching without guard interval in all-optical software-defined star intra-data center network

    NASA Astrophysics Data System (ADS)

    Ji, Philip N.; Wang, Ting

    2014-02-01

    Optical switching has been introduced in intra-data center networks (DCNs) to increase capacity and to reduce power consumption. Recently we proposed a star MIMO OFDM-based all-optical DCN with burst switching and software-defined networking. Here, we introduce the control procedure for the star DCN in detail for the first time. The timing, signaling, and operation are described for each step to achieve efficient bandwidth resource utilization. Furthermore, the guidelines for the burst assembling period selection that allows burst switching without guard interval are discussed. The star all-optical DCN offers flexible and efficient control for next-generation data center application.

  17. Payment reform to finance a medical home: comment on "Achieving cost control, care coordination, and quality improvement through incremental payment system reform".

    PubMed

    McGuire, Thomas G

    2010-01-01

    This commentary on R. F. Averill et al. (2010) addresses their idea of risk- and quality-adjusting fee-for-service payments to primary care physicians in order to improve the efficiency of primary care and take a step toward financing a "medical home" for patients. I show how their idea can create incentives for efficient practice styles. Pairing it with an active beneficiary choice of primary care physician, together with an enrollment fee, would make the idea easier to implement and would provide an incentive and the financing for elements of service not covered by procedure-based fees.

  18. Efficient genome engineering of a virulent Klebsiella bacteriophage using CRISPR-Cas9.

    PubMed

    Shen, Juntao; Zhou, Jinjie; Chen, Guo-Qiang; Xiu, Zhi-Long

    2018-06-13

    Klebsiella pneumoniae is one of the most common nosocomial opportunistic pathogens, usually with multiple drug resistance. Phage therapy, a potential new therapeutic to replace or supplement antibiotics, has attracted much attention. However, very few Klebsiella phages have been well characterized, owing to the lack of efficient genome-editing tools. Here, Cas9 from Streptococcus pyogenes and a single guide RNA (sgRNA) were used to modify a virulent Klebsiella bacteriophage, phiKpS2. We first evaluated the distribution of sgRNA activity in phages and showed that it is largely inconsistent with the activity predicted by current models trained on eukaryotic cell datasets. A simple CRISPR-based phage genome editing procedure was developed, based on the discovery that homologous arms as short as 30-60 bp are sufficient to introduce point mutations, gene deletions, and swaps. We also demonstrated that weak sgRNAs can be used for precise phage genome editing but fail to select random recombinants, possibly because inefficient cleavage can be tolerated through continuous repair by homologous recombination with uncut genomes. Small frameshift deletions proved to be an efficient way to evaluate the essentiality of phage genes. Using the above strategies, a putative promoter and nine genes of phiKpS2 were successfully deleted. Interestingly, the holin gene could be deleted with little effect on phiKpS2 infection, though the reason is not yet clear. This study established an efficient, time-saving, and cost-effective procedure for phage genome editing, which is expected to significantly promote the development of bacteriophage therapy. IMPORTANCE In the present study, we have established an efficient, time-saving and cost-effective CRISPR-based genome-editing procedure for a Klebsiella phage, which has the potential to significantly expand our knowledge of phage-host interactions and to promote the applications of phage therapy. The distribution of sgRNA activity was evaluated in phages for the first time. Short homologous arms were shown to be enough to introduce point mutations, small frameshift deletions, gene deletions and swaps into phages, and weak sgRNAs proved useful for precise phage genome editing though unable to select random recombinants, all of which makes CRISPR-based phage genome editing easier to use.
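
    To make the short-homology-arm idea concrete, here is a minimal sketch of how a repair template for a clean deletion could be assembled from the phage genome sequence. The function and the 50-bp default are illustrative, not taken from the paper.

    ```python
    def deletion_template(genome: str, start: int, end: int, arm: int = 50) -> str:
        """Donor for deleting genome[start:end] by homologous recombination:
        a left arm and a right arm flanking the deletion (the study found
        arms of 30-60 bp sufficient)."""
        if not (0 <= start <= end <= len(genome)):
            raise ValueError("deletion out of range")
        return genome[max(0, start - arm):start] + genome[end:end + arm]
    ```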

  19. Improved adjoin-list for quality-guided phase unwrapping based on red-black trees

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Rueda-Paz, Juvenal; Redondo-Galvan, Arturo

    2016-08-01

    Quality-guided phase unwrapping is an important technique based on quality maps that guide the unwrapping process. The efficiency of this technique depends on the implementation of the adjoin-list data structure. Several proposals improve the adjoin-list: Ming Zhao et al. proposed an Indexed Interwoven Linked List (I2L2), based on dividing the quality values into intervals of equal size and inserting into a linked list those pixels with quality values within a certain interval. Ming Zhao and Qian Kemao proposed an improved I2L2 that replaces the linked list in each interval by a heap data structure, which allows efficient insertion and deletion. In this paper, we propose an improved I2L2 that uses red-black tree (RBT) data structures for each interval. The main goal of our proposal is to avoid the unbalancing of the heap and thus reduce the time complexity of insertion. In order to maintain the same deletion efficiency as the heap, we provide an efficient way to remove the pixel with the highest quality value in the RBT using a pointer to the rightmost element of the tree. We also provide a new partition strategy of the phase values based on a density criterion. Experimental results applied to phase-shifting profilometry are shown for large images.
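
    The data-structure idea reads as follows: partition quality into intervals, keep each interval's pixels in an ordered structure, and always pop the rightmost (highest-quality) element of the highest non-empty interval. The sketch below mimics that behavior; a sorted Python list stands in for the red-black tree (so insertion here is not O(log n)), and the class name and interval count are our own.

    ```python
    import bisect

    class IndexedAdjoinList:
        """Quality-indexed adjoin list: k equal intervals over [0, 1]; each
        interval keeps its pixels ordered by quality, so the maximum is the
        last element (the paper keeps a pointer to the rightmost RBT node)."""
        def __init__(self, k=256):
            self.k = k
            self.bins = [[] for _ in range(k)]
            self.top = -1                       # highest non-empty interval

        def push(self, quality, pixel):
            i = min(int(quality * self.k), self.k - 1)
            bisect.insort(self.bins[i], (quality, pixel))
            self.top = max(self.top, i)

        def pop_max(self):
            while self.top >= 0 and not self.bins[self.top]:
                self.top -= 1                   # skip emptied intervals
            if self.top < 0:
                raise IndexError("empty adjoin list")
            return self.bins[self.top].pop()    # rightmost = highest quality
    ```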

  20. Mechanical Energy Harvesting Performance of Ferroelectric Polymer Nanowires Grown via Template‐Wetting

    PubMed Central

    Whiter, Richard A.; Boughey, Chess; Smith, Michael

    2018-01-01

    Nanowires of the ferroelectric co-polymer poly(vinylidenefluoride-co-trifluoroethylene) [P(VDF-TrFE)] are fabricated from solution within nanoporous templates of both "hard" anodic aluminium oxide (AAO) and "soft" polyimide (PI) through a facile and scalable template-wetting process. The confined geometry afforded by the pores of the templates leads directly to highly crystalline P(VDF-TrFE) nanowires in a macroscopic "poled" state that precludes the need for the external electrical poling procedure typically required for piezoelectric performance. The energy-harvesting performance of nanogenerators based on these template-grown nanowires is extensively studied and analyzed in combination with finite element modelling. Both experimental results and computational models probing the role of the templates in determining overall nanogenerator performance, including both materials and device efficiencies, are presented. It is found that although P(VDF-TrFE) nanowires grown in PI templates exhibit a lower material efficiency, due to lower crystallinity, than nanowires grown in AAO templates, the overall device efficiency is higher for the PI-template-based nanogenerator because of the lower stiffness of the PI template compared to the AAO template. This work provides a clear framework for assessing the energy conversion efficiency of template-grown piezoelectric nanowires and paves the way towards optimization of template-based nanogenerator devices.

  1. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost-function exponents, and exposure variance components. Results Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that part or all of the optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion for each of as many subjects as the budget allowed. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
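
    A minimal numerical sketch of the allocation problem, under simplifying assumptions of our own: a two-stage version of the paper's three-stage model (subjects and measurement occasions only), power-function costs with positive exponents, and a brute-force search in place of the paper's analytical and numerical machinery.

    ```python
    import numpy as np

    def optimal_allocation(budget, c_subj, c_occ, a_subj, a_occ,
                           var_between, var_within, n_max=1000):
        """Minimize Var(mean) = var_between/n + var_within/(n*m) over integer
        (n subjects, m occasions per subject), subject to the power-law budget
        c_subj * n**a_subj + c_occ * (n*m)**a_occ <= budget."""
        best_var, best_nm = np.inf, None
        for n in range(1, n_max + 1):
            if c_subj * n**a_subj > budget:
                break                            # no budget left for measuring
            for m in range(1, n_max + 1):
                cost = c_subj * n**a_subj + c_occ * (n * m)**a_occ
                if cost > budget:
                    break
                var = var_between / n + var_within / (n * m)
                if var < best_var:
                    best_var, best_nm = var, (n, m)
        return best_nm, best_var

    print(optimal_allocation(1000, c_subj=10, c_occ=2, a_subj=1.2, a_occ=0.9,
                             var_between=1.0, var_within=4.0))
    ```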

  2. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost-function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that part or all of the optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion for each of as many subjects as the budget allowed. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  3. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment; however, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle, and the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, performing function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges above, function estimation and variable selection are performed by data-driven modeling methods that generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  4. European Union centralised procedure for marketing authorisation of oncology drugs: an in-depth review of its efficiency.

    PubMed

    Netzer, Tilo

    2006-03-01

    In the European Union (EU), 20 anticancer agents have been successfully authorised via the Centralised Procedure since its implementation in 1995. Public information on these 20 agents has been reviewed in order to evaluate the effectiveness of the available regulatory mechanisms in facilitating the marketing authorisation of such drugs in the EU. These mechanisms include the orphan drug legislation, the exceptional circumstances provision and the accelerated evaluation procedure. Because the EU orphan drug legislation was not implemented until 2000, no conclusions on its effectiveness in facilitating oncology drug development can be drawn today. Much more data are available on the effects of the exceptional circumstances provision, which was used in 6 out of 10 cases over the past four years. An analysis of the clinical data packages indicates that this provision allows authorisation of innovative oncology drugs based on smaller clinical data sets than required for full approval. The accelerated evaluation procedure was used in only one case and significantly reduced the scientific review time at the EU agencies. However, this mechanism does not influence the administrative time at the authorities, which accounted for almost one-third of the overall duration of the EU marketing authorisation procedures for oncology drugs. Revision of the EU drug legislation brings some changes to the above-described provisions, with the potential to improve the current situation. Thus, its implementation offers the chance to reduce the time that innovative oncology agents take to reach the market, although -- based on experience with the current procedures -- more effort is likely to be required to achieve this goal.

  5. Development of coring procedures applied to Si, CdTe, and CIGS solar panels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moutinho, H. R.; Johnston, S.; To, B.

    Most of the research on the performance and degradation of photovoltaic modules is based on macroscale measurements of device parameters such as efficiency, fill factor, open-circuit voltage, and short-circuit current. Our goal is to develop the capabilities to allow us to study the degradation of these parameters at the micro- and nanometer scale and to relate our results to performance parameters. To achieve this objective, the first step is to be able to access small samples from specific areas of the solar panels without changing the properties of the material. In this paper, we describe two coring procedures that we developed and applied to Si, CIGS, and CdTe solar panels. In the first procedure, we cored full samples, whereas in the second we performed a partial coring that keeps the tempered glass intact. The cored samples were analyzed by different analytical techniques before and after coring, at the same locations, and no damage during the coring procedure was observed.

  6. Development and applications of algorithms for calculating the transonic flow about harmonically oscillating wings

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Weatherill, W. H.; Yip, E. L.

    1984-01-01

    A finite difference method for solving the unsteady transonic flow about harmonically oscillating wings was investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients, and the time variable is eliminated by assuming harmonic motion. An alternating direction implicit procedure was investigated, and a pilot program was developed for both two- and three-dimensional wings. This program provides a relatively efficient relaxation solution without the solution instability problems encountered previously. Pressure distributions for two rectangular wings are calculated. Conjugate gradient techniques were developed for the asymmetric, indefinite problem, and the conjugate gradient procedure is evaluated for application to the unsteady transonic problem. Difference equations for the alternating direction procedure are derived using a coordinate transformation for swept and tapered wing planforms. Pressure distributions for swept, untapered wings of vanishing thickness are correlated with linear results for sweep angles up to 45 degrees.
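
    In our notation (not the report's), the separation and linearization step reads: write the velocity potential as a steady mean part plus a small time-harmonic disturbance,

    $$\phi(x, y, z, t) \;=\; \phi_0(x, y, z) \;+\; \tilde{\phi}(x, y, z)\, e^{i\omega t}, \qquad |\tilde{\phi}| \ll |\phi_0|,$$

    substitute into the unsteady transonic small-disturbance equation, and drop terms of second order in $\tilde{\phi}$. What remains is a linear equation for the complex amplitude $\tilde{\phi}$ whose coefficients vary in space through $\phi_0$, with the time variable eliminated by the assumed harmonic factor $e^{i\omega t}$.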

  7. Development of coring procedures applied to Si, CdTe, and CIGS solar panels

    DOE PAGES

    Moutinho, H. R.; Johnston, S.; To, B.; ...

    2018-01-04

    Most of the research on the performance and degradation of photovoltaic modules is based on macroscale measurements of device parameters such as efficiency, fill factor, open-circuit voltage, and short-circuit current. Our goal is to develop the capabilities to allow us to study the degradation of these parameters at the micro- and nanometer scale and to relate our results to performance parameters. To achieve this objective, the first step is to be able to access small samples from specific areas of the solar panels without changing the properties of the material. In this paper, we describe two coring procedures that we developed and applied to Si, CIGS, and CdTe solar panels. In the first procedure, we cored full samples, whereas in the second we performed a partial coring that keeps the tempered glass intact. The cored samples were analyzed by different analytical techniques before and after coring, at the same locations, and no damage during the coring procedure was observed.

  8. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  9. Characterization of the olfactory impact around a wastewater treatment plant: optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll

    2012-08-01

    A monitoring program based on an indirect method was conducted to assess an approximation of the olfactory impact of several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling with Palmes-type diffusion tubes impregnated with silver nitrate, followed by fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while the recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.

  10. Characterization of the olfactory impact around a wastewater treatment plant: Optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Coll Lozano, Caterina

    2012-08-01

    A monitoring program based on an indirect method was conducted to assess an approximation of the olfactory impact of several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling with Palmes-type diffusion tubes impregnated with silver nitrate, followed by fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while the recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.

  11. Mapping proteins to disease terminologies: from UniProt to MeSH

    PubMed Central

    Mottaz, Anaïs; Yip, Yum L; Ruch, Patrick; Veuthey, Anne-Lise

    2008-01-01

    Background Although the UniProt KnowledgeBase is not a medically oriented database, it contains information on more than 2,000 human proteins involved in pathologies. However, these annotations are not standardized, which impairs interoperability between biological and clinical resources. In order to make these data easily accessible to clinical researchers, we have developed a procedure to link the diseases described in UniProtKB/Swiss-Prot entries to the MeSH disease terminology. Results We mapped disease names extracted either from the UniProtKB/Swiss-Prot entry comment lines or from the corresponding OMIM entry to MeSH. Different methods were assessed on a benchmark set of 200 disease names manually mapped to MeSH terms. The performance of the retained procedure in terms of precision and recall was 86% and 64%, respectively. Using the same procedure, more than 3,000 disease names in Swiss-Prot were mapped to MeSH with comparable efficiency. Conclusions This study is a first attempt to link proteins in UniProtKB to medical resources. The indexing provided will help clinicians and researchers navigate from diseases to genes and from genes to diseases in an efficient way. The mapping is available at: . PMID:18460185

  12. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function.

    PubMed

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
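
    The "most informative stimulus" selection that both the standard procedure and HADO rely on can be written in a few lines: score each candidate stimulus by the mutual information between the parameter and the next response under the current prior. The sketch below does this for a hypothetical one-parameter 2AFC psychometric function on a grid; HADO's contribution, per the abstract, is to replace the diffuse prior with an informative one built from earlier observers.

    ```python
    import numpy as np

    def entropy(p):
        """Binary entropy, elementwise."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def p_correct(theta, x, slope=2.0, guess=0.5):
        """Hypothetical 2AFC psychometric function with threshold theta."""
        return guess + (1 - guess) / (1 + np.exp(-slope * (x - theta)))

    thetas = np.linspace(-2, 2, 201)             # parameter grid
    stims = np.linspace(-3, 3, 61)               # candidate stimuli
    prior = np.ones_like(thetas) / thetas.size   # diffuse; HADO would supply an
                                                 # informative prior instead

    def most_informative_stimulus(prior):
        """Pick the stimulus maximizing mutual information I(theta; response)."""
        best_x, best_ig = None, -np.inf
        for x in stims:
            p = p_correct(thetas, x)
            p_marg = float(np.sum(prior * p))                    # predictive P(correct)
            ig = entropy(p_marg) - np.sum(prior * entropy(p))    # information gain
            if ig > best_ig:
                best_x, best_ig = x, ig
        return best_x

    def posterior(prior, x, correct):
        like = p_correct(thetas, x) if correct else 1 - p_correct(thetas, x)
        post = prior * like
        return post / post.sum()
    ```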

  13. Procedures to develop a computerized adaptive test to assess patient-reported physical functioning.

    PubMed

    McCabe, Erin; Gross, Douglas P; Bulut, Okan

    2018-06-07

    The purpose of this paper is to demonstrate the procedures for developing and implementing a computerized adaptive patient-reported outcome (PRO) measure using secondary analysis of a dataset and items from fixed-format legacy measures. We conducted a secondary analysis of a dataset of responses from 1429 persons with work-related lower extremity impairment. We calibrated three measures of physical functioning on the same metric, based on item response theory (IRT). We evaluated the efficiency and measurement precision of various computerized adaptive test (CAT) designs using computer simulations. IRT and confirmatory factor analyses support combining the items from the three scales into a CAT item bank of 31 items. The IRT item parameters were calculated using the generalized partial credit model. CAT simulations show that reducing the test length from the full 31 items to a maximum of 8 or 20 items is possible without a significant loss of information (95% and 99% correlation with legacy measure scores, respectively). We demonstrated the feasibility and efficiency of using CAT for PRO measurement of physical functioning. The procedures we outlined are straightforward and can be applied to other PRO measures. Additionally, we have included all the information necessary to implement the CAT of physical functioning in the electronic supplementary material of this paper.
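
    The core CAT loop picks, at each step, the unadministered item most informative at the current ability estimate. The sketch below uses the two-parameter logistic (2PL) information function as a simplification of our own; the paper itself calibrates with the generalized partial credit model, whose information function is more involved.

    ```python
    import numpy as np

    def info_2pl(theta, a, b):
        """Fisher information of a 2PL item at ability theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    def next_item(theta_hat, bank, used):
        """Maximum-information selection from a bank of (a, b) item parameters;
        stop the CAT once a maximum test length (e.g. 8 or 20 items) is reached."""
        candidates = [(info_2pl(theta_hat, a, b), i)
                      for i, (a, b) in enumerate(bank) if i not in used]
        return max(candidates)[1] if candidates else None

    bank = [(1.2, -0.5), (0.8, 0.3), (1.5, 1.1), (1.0, -1.2)]
    print(next_item(0.0, bank, used={2}))
    ```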

  14. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061

  15. Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport

    NASA Astrophysics Data System (ADS)

    Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.

    2017-10-01

    UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer, due in part to its efficient, fully implicit solution of coupled fluid-neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to "extract" a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful "intermediate" algorithm. The JFNK method also requires calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when an MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.
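
    For context, a minimal sketch of a JFNK step in the deterministic setting, with the Jacobian-vector product formed by a forward difference of the residual; the abstract's point is precisely that this difference breaks down when F contains Monte-Carlo noise, motivating the semi-analytical product (not shown). The function names are ours.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_step(F, u, eps=1e-7):
        """One Jacobian-free Newton-Krylov step: solve J(u) du = -F(u) with
        GMRES, approximating J v by a forward difference of the residual F."""
        r = F(u)
        def jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = eps * max(1.0, np.linalg.norm(u)) / nv
            return (F(u + h * v) - r) / h
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, info = gmres(J, -r)          # a UEDGE-derived preconditioner would
                                         # be passed via the M= argument
        if info != 0:
            raise RuntimeError("GMRES did not converge")
        return u + du
    ```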

  16. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high-gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is given, and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  17. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  18. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
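
    A compact sketch of the thresholding idea as described here: pool the wavelet coefficient energy "vertically" across the multiple curves, and keep for every curve the same few positions where that shared energy is largest. The keep_ratio knob stands in for the paper's balance between reconstruction error and reduction efficiency; PyWavelets supplies the transform, and the function name is ours.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def vet_reduce(curves, wavelet="db4", level=4, keep_ratio=0.05):
        """curves: (n_curves, n_points) array of equally sampled signals.
        Returns the shared coefficient positions and the reduced-size data."""
        coeffs = [np.concatenate(pywt.wavedec(c, wavelet, level=level))
                  for c in curves]
        flat = np.vstack(coeffs)                   # (n_curves, n_positions)
        v_energy = (flat ** 2).sum(axis=0)         # vertical energy per position
        k = max(1, int(keep_ratio * flat.shape[1]))
        keep = np.sort(np.argsort(v_energy)[-k:])  # largest shared-energy positions
        return keep, flat[:, keep]
    ```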

  19. On the design of flight-deck procedures

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Wiener, Earl L.

    1994-01-01

    In complex human-machine systems, operations, training, and standardization depend on a elaborate set of procedures which are specified and mandated by the operational management of the organization. The intent is to provide guidance to the pilots, to ensure a logical, efficient, safe, and predictable means of carrying out the mission objectives. In this report the authors examine the issue of procedure use and design from a broad viewpoint. The authors recommend a process which we call 'The Four P's:' philosophy, policies, procedures, and practices. We believe that if an organization commits to this process, it can create a set of procedures that are more internally consistent, less confusing, better respected by the flight crews, and that will lead to greater conformity. The 'Four-P' model, and the guidelines for procedural development in appendix 1, resulted from cockpit observations, extensive interviews with airline management and pilots, interviews and discussion at one major airframe manufacturer, and an examination of accident and incident reports. Although this report is based on airline operations, we believe that the principles may be applicable to other complex, high-risk systems, such as nuclear power production, manufacturing process control, space flight, and military operations.

  20. 76 FR 65631 - Energy Conservation Program: Test Procedures for Microwave Ovens

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-24

    ... Conservation Program: Test Procedures for Microwave Ovens AGENCY: Office of Energy Efficiency and Renewable... (DOE) has initiated a test procedure rulemaking to develop active mode testing methodologies for... Federal Register a final rule for the microwave oven test procedure rulemaking (July TP repeal final rule...

  1. 78 FR 63823 - Energy Conservation Program: Test Procedures for Television Sets

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-25

    ... Conservation Program: Test Procedures for Television Sets AGENCY: Office of Energy Efficiency and Renewable... Energy (DOE) issued a notice of proposed rulemaking (NOPR) to establish a new test procedure for... additional testing and proposed amendments to the TV test procedure in its March 12, 2013 supplemental notice...

  2. 78 FR 4015 - Energy Conservation Program: Test Procedures for Microwave Ovens

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-18

    ... Conservation Program: Test Procedures for Microwave Ovens AGENCY: Office of Energy Efficiency and Renewable... Energy (DOE) issued a supplemental notice of proposed rulemaking (SNOPR) to amend the test procedures for microwave ovens. That SNOPR proposed amendments to the DOE test procedure to incorporate provisions from the...

  3. 78 FR 62488 - Energy Conservation Program: Compliance Date for the Dehumidifier Test Procedure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... Conservation Program: Compliance Date for the Dehumidifier Test Procedure AGENCY: Office of Energy Efficiency.... Department of Energy (DOE) proposes to revise the compliance date for the dehumidifier test procedures... manufacturers to test using only the active mode provisions in the test procedure for dehumidifiers currently...

  4. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-based smoothed finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh-partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems on a graphics processing unit (GPU) using a special edge-smoothed triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are developed to efficiently solve the ES-FEM-based shell element formulations, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this explicit parallel algorithm. Finally, the program flow is carefully designed, and a GPU-based simulation system is developed using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  5. Evaluation of Superparamagnetic Silica Nanoparticles for Extraction of Triazines in Magnetic in-Tube Solid Phase Microextraction Coupled to Capillary Liquid Chromatography

    PubMed Central

    González-Fuenzalida, R. A.; Moliner-Martínez, Y.; Prima-Garcia, Helena; Ribera, Antonio; Campins-Falcó, P.; Zaragozá, Ramon J.

    2014-01-01

    The use of magnetic nanomaterials for analytical applications has increased in recent years. In particular, magnetic nanomaterials have shown great potential as adsorbent phases in several extraction procedures, owing to significant advantages over conventional methods. In the present work, the influence of magnetic forces on the extraction efficiency of triazines using superparamagnetic silica nanoparticles (NPs) in magnetic in-tube solid phase microextraction (Magnetic-IT-SPME) coupled to capillary liquid chromatography (CapLC) has been evaluated. Atrazine, terbutylazine and simazine were selected as target analytes. The superparamagnetic silica nanomaterial (SiO2-Fe3O4) deposited onto the surface of a capillary column gave rise to a magnetic extraction phase for IT-SPME that enhanced the extraction efficiency for triazines. This improvement is based on two phenomena: the superparamagnetic behavior of the Fe3O4 NPs and the diamagnetic repulsions that take place in a microfluidic device such as a capillary column. A systematic study of analyte adsorption and desorption was conducted as a function of the magnetic field and of its relationship with the magnetic susceptibility of the triazines. The positive influence of magnetism on the extraction procedure was demonstrated. The analytical characteristics of the optimized procedure were established, and the method was applied to the determination of the target analytes in water samples with satisfactory results. When coupling Magnetic-IT-SPME with CapLC, improved adsorption efficiencies (60%-63%) were achieved compared with conventional adsorption materials (0.8%-3%). PMID:28344221

  6. [The most effective dosage in the administration of PGF2-alpha for interruption of pregnancy during the 2d trimester].

    PubMed

    Herczeg, J; Szontágh, F

    1974-06-23

    Artificial interruption of pregnancy carries substantial risks from the 12th week of pregnancy onward. The authors have worked to find the most suitable and effective dosage of prostaglandin for the interruption of pregnancy during the 2nd trimester. The new dosage tested was 25 mg of prostaglandin F2alpha, followed by another 25 mg 6 hours later, and its clinical efficiency was evaluated. This procedure was used in 45 cases. The efficiency of the method was compared with that of the previously used dosage, 25 mg of prostaglandin F2alpha followed by 25 mg 24 hours later. The new dosage was found to be 91% effective, while the previous dosage was 75% effective. The side effects were rated as acceptable by the patients, and there was no case of infection. Two clear advantages were found with the new dosage: the duration of the actual procedure is considerably reduced, and the method appears to be much safer. The authors conclude that this new procedure offers numerous clinical advantages.

  7. Invert biopanning: A novel method for efficient and rapid isolation of scFvs by phage display technology.

    PubMed

    Rahbarnia, Leila; Farajnia, Safar; Babaei, Hossein; Majidi, Jafar; Veisi, Kamal; Tanomand, Asghar; Akbari, Bahman

    2016-11-01

    Phage display is a prominent screening technique for the development of novel high-affinity antibodies against almost any antigen. However, removing false-positive clones during the screening process remains a challenge. The aim of this study was to develop an efficient and rapid method for the isolation of high-affinity scFvs that removes non-specific binders (NSBs) without losing rare specific clones. A novel two-round strategy called invert biopanning was therefore developed for isolating high-affinity scFvs against the EGFRvIII antigen from a human scFv library. The efficiency of the invert biopanning method (procedure III) was analyzed by comparison with the results of conventional biopanning methods (procedures I and II). According to the results of polyclonal ELISA, the second round of procedure III displayed the highest binding affinity against the EGFRvIII peptide, accompanied by the lowest NSB, compared with the other two procedures. Several positive clones displaying high affinity to the EGFRvIII antigen were identified among the output phages of procedure III by monoclonal phage ELISA. In conclusion, the results of our study indicate that invert biopanning is an efficient method for avoiding NSBs and conserving rare specific clones during the screening of an scFv phage library. The novel anti-EGFRvIII scFv isolated here could be a promising candidate for potential use in the treatment of EGFRvIII-expressing cancers. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  8. A new design approach to innovative spectrometers. Case study: TROPOLITE

    NASA Astrophysics Data System (ADS)

    Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob

    2014-05-01

    Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to a final system, is already mostly automated. However, this loop is part of a wider loop which is not: it starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design task extremely efficient. The optical designer's job is then not to design optical systems but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.

  9. Evaluating the Relationship between Productivity and Quality in Emergency Departments

    PubMed Central

    Bastian, Nathaniel D.; Riordan, John P.

    2017-01-01

    Background In the United States, emergency departments (EDs) are constantly pressured to improve operational efficiency and quality in order to gain financial benefits and maintain a positive reputation. Objectives The first objective is to evaluate how efficiently EDs transform their input resources into quality outputs. The second objective is to investigate the relationship between the efficiency and quality performance of EDs and the factors affecting this relationship. Methods Using two data sources, we develop a data envelopment analysis (DEA) model to evaluate the relative efficiency of EDs. Based on the DEA results, we perform multinomial logistic regression to investigate the relationship between ED efficiency and quality performance. Results The DEA results indicated that the main source of inefficiency was the working hours of technicians. The multinomial logistic regression results indicated that the number of electrocardiograms and X-ray procedures conducted in the ED and the length of stay were significantly associated with the trade-offs between relative efficiency and quality. Structural ED characteristics did not influence the relationship between efficiency and quality. Conclusions Depending on the structural and operational characteristics of EDs, different factors can affect the relationship between efficiency and quality. PMID:29065673
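
    For concreteness, the relative-efficiency score DEA assigns to each ED can be computed as one small linear program per unit. The sketch below implements the standard input-oriented CCR model with SciPy on invented data; the two-input/one-output layout and the numbers are assumptions for illustration, not the paper's dataset.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m, n) inputs and Y: (s, n) outputs for n units.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0               # decision vars: [theta, lam_1..n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                       # X lam - theta x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                             # -Y lam <= -y0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy example: 4 EDs, 2 inputs (e.g. staff hours), 1 output (e.g. visits)
X = np.array([[8., 6., 9., 5.], [4., 3., 5., 2.]])
Y = np.array([[100., 90., 95., 80.]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(4)])
```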

  10. Non-parametric diffeomorphic image registration with the demons algorithm.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2007-01-01

    We propose a non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. The demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. The main idea of our algorithm is to adapt this procedure to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since, in practice, it only replaces an addition of free-form deformations with a few compositions. Our experiments show that, in addition to being diffeomorphic, our algorithm provides results similar to those of the demons algorithm but with transformations that are much smoother and closer to the true ones in terms of Jacobians.
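
    A minimal sketch of one such iteration, assuming 2D images, unit spacing, and Gaussian regularization, is shown below; it uses the classic demons force and swaps the additive update for a composition of displacement fields. It is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, s, sigma=2.0):
    """One simplified demons iteration; s is the displacement field (2, H, W)."""
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    warped = map_coordinates(moving, grid + s, order=1)      # moving o (id + s)
    diff = warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gy**2 + gx**2 + diff**2 + 1e-9
    u = -np.stack([diff * gy, diff * gx]) / denom            # demons force
    # Diffeomorphic flavor: compose s with the update instead of adding it,
    # i.e. s_new(x) = s(x + u(x)) + u(x); plain demons would use s + u.
    s_new = np.stack([map_coordinates(s[i], grid + u, order=1)
                      for i in range(2)]) + u
    return gaussian_filter(s_new, sigma=(0, sigma, sigma))   # regularize
```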

  11. Interdisciplinary analysis procedures in the modeling and control of large space-based structures

    NASA Technical Reports Server (NTRS)

    Cooper, Paul A.; Stockwell, Alan E.; Kim, Zeen C.

    1987-01-01

    The paper describes a computer software system, the Integrated Multidisciplinary Analysis Tool (IMAT), that has been developed at NASA Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven interactive executive program, IMAT links a relational database to commercial structural and controls analysis codes. The paper describes the procedures followed to analyze a complex satellite structure and control system. The codes used to accomplish the analysis are described, and an example is provided of an application of IMAT to the analysis of a reference space station subject to a rectangular pulse loading at its docking port.

  12. Materials with periodic internal structure: Computation based on homogenization and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Jansson, S.; Leckie, F. A.; Onat, E. T.; Ranaweera, M. P.

    1990-01-01

    The combination of thermal and mechanical loading expected in practice means that constitutive equations of metal matrix composites must be developed which deal with both time-independent and time-dependent irreversible deformation. In addition, the internal state of composites is extremely complicated, which underlines the need to formulate macroscopic constitutive equations with a limited number of state variables that represent the internal state at the micro level. One available method for calculating the macroscopic properties of composites in terms of the distribution and properties of the constituent materials is homogenization, whose formulation is based on the periodicity of the substructure of the composite. A homogenization procedure was developed that lends itself to the finite element method. The efficiency of these procedures in determining the macroscopic properties of a composite system from its constituent properties was demonstrated using an aluminum plate perforated by directionally oriented slits. This problem was selected because extensive experimental results exist, the macroscopic response is highly anisotropic, and the slits produce very high stress gradients that severely test the effectiveness of the computational procedures. Furthermore, both elastic and plastic properties were investigated, so that application to practical systems with inelastic deformation should proceed without difficulty. Using the computational results, it is illustrated how macroscopic constitutive equations can be expressed in terms of the elastic and limit-load behavior.

  13. Silacyclobutane-based diblock copolymers with vinylferrocene, ferrocenylmethyl methacrylate, and [1]dimethylsilaferrocenophane.

    PubMed

    Gallei, Markus; Tockner, Stefan; Klein, Roland; Rehahn, Matthias

    2010-05-12

    Well-defined diblock copolymers have been prepared in which three different ferrocene-based monomers are combined with 1,1-dimethylsilacyclobutane (DMSB) and 1-methylsilacyclobutane, respectively, as their carbosilane counterparts. Optimized procedures are reported for the living anionic chain growth following sequential monomer addition protocols, ensuring narrow polydispersities and high blocking efficiencies. The DMSB-containing copolymers show phase segregation in the bulk state, leading to micromorphologies composed of crystalline DMSB phases and amorphous polymetallocene phases. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Pyrrole based Schiff bases as colorimetric and fluorescent chemosensors for fluoride and hydroxide anions.

    PubMed

    Velmathi, Sivan; Reena, Vijayaraghavan; Suganya, Sivalingam; Anandan, Sambandam

    2012-01-01

    An efficient colorimetric sensor with a pyrrole-NH moiety as the binding site and a nitro group as the signaling unit has been synthesized by a one-step procedure and characterized by spectroscopic techniques; it displays excellent selectivity and sensitivity for fluoride and hydroxide ions. Hydrogen bonding with these anions produces remarkable colorimetric responses. (1)H NMR and FT-IR studies have been carried out to confirm the hydrogen bonding. The UV-vis and fluorescence spectral changes can be exploited for real-time, on-site application.

  15. One-Pot Conversion of Epoxidized Soybean Oil (ESO) into Soy-Based Polyurethanes by MoCl₂O₂ Catalysis.

    PubMed

    Pantone, Vincenzo; Annese, Cosimo; Fusco, Caterina; Fini, Paola; Nacci, Angelo; Russo, Antonella; D'Accolti, Lucia

    2017-02-21

    An innovative and eco-friendly one-pot synthesis of bio-based polyurethanes is proposed via the epoxy-ring opening of epoxidized soybean oil (ESO) with methanol, followed by the reaction of the methoxylated bio-polyol intermediates with 2,6-toluene diisocyanate (TDI). Both synthetic steps, methanolysis and polyurethane linkage formation, are promoted by a single catalyst, molybdenum(VI) dichloride dioxide (MoCl₂O₂), which makes this procedure an efficient, cost-effective, and environmentally safer method amenable to industrial scale-up.

  16. An orbital localization criterion based on the theory of "fuzzy" atoms.

    PubMed

    Alcoba, Diego R; Lain, Luis; Torre, Alicia; Bochicchio, Roberto C

    2006-04-15

    This work proposes a new procedure for localizing molecular and natural orbitals. The localization criterion presented here is based on the partitioning of the overlap matrix into atomic contributions within the theory of "fuzzy" atoms. Our approach has several advantages over other schemes: it is computationally inexpensive, preserves the sigma/pi-separability in planar systems and provides a straightforward interpretation of the resulting orbitals in terms of their localization indices and atomic occupancies. The corresponding algorithm has been implemented and its efficiency tested on selected molecular systems. (c) 2006 Wiley Periodicals, Inc.

  17. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the interaction of waves with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computational procedure to capture stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized computing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation give LISA superior computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced, and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation, and the results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
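
    The penalty treatment of crack-face contact is simple enough to sketch. The snippet below is an illustration of the general idea, not the authors' CUDA kernel: the stiffness scaling rule and all numerical values are assumptions for demonstration.

```python
import numpy as np

def penalty_normal_force(gap, k_contact):
    """Penalty contact: gap < 0 means penetration; the restoring force is
    proportional to penetration depth and acts only in compression."""
    return np.where(gap < 0.0, -k_contact * gap, 0.0)

# The contact stiffness is typically scaled from the material stiffness,
# e.g. k = alpha * E * A / h for a local cell size h. Too small a value lets
# the crack faces interpenetrate; too large a value shrinks the stable
# explicit time step, which is why a convergence study over k is needed.
E, A, h, alpha = 70e9, 1e-6, 1e-3, 0.1   # hypothetical values
k = alpha * E * A / h
print(penalty_normal_force(np.array([-1e-6, 0.0, 1e-6]), k))
```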

  18. Low latency messages on distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Saltz, Joel

    1993-01-01

    Many of the issues in developing an efficient interface for communication on distributed memory machines are described, and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be run on these machines. Based on several tests run on the iPSC/860, an interface that better matches current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. The information transmitted is either data or procedures to be executed on the remote processor. The dual-processor system is better suited to handling asynchronous communications efficiently than a single-processor system, and the ability to send either data or procedures offers great flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.

  19. Rapid DNA transformation in Salmonella Typhimurium by the hydrogel exposure method.

    PubMed

    Elabed, Hamouda; Hamza, Rim; Bakhrouf, Amina; Gaddour, Kamel

    2016-07-01

    Even with advances in molecular cloning and DNA transformation, new or alternative methods that permit DNA penetration into Salmonella enterica subspecies enterica serovar Typhimurium are required in order to use this pathogen in biotechnological or medical applications. In this work, an adapted protocol for bacterial transformation with plasmid DNA based on the "Yoshida effect" was applied and optimized on the Salmonella enterica serovar Typhimurium LT2 reference strain. Plasmid transfer relies on sepiolite, an acicular material, to promote cell piercing via the friction forces produced by spreading on the surface of a hydrogel. The transforming mixture, containing sepiolite nanofibers, the bacterial cells to be transformed and plasmid DNA, was plated directly on selective medium containing 2% agar. In order to improve the procedure, three variables were tested, and the transformation of Salmonella cells was accomplished using plasmids pUC19 and pBR322. Using the optimized protocol on the Salmonella LT2 strain, the efficiency was about 10(5) transformed cells per 10(9) cells subjected to transformation with 0.2 μg of plasmid DNA. In summary, the procedure is fast, offers adequate efficiency and promises to become one of the widely used transformation methods in laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Bi2O3 nanoparticles encapsulated in surface mounted metal-organic framework thin films

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Chen, Zhi; Yang, Chengwu; Neumann, Tobias; Kübel, Christian; Wenzel, Wolfgang; Welle, Alexander; Pfleging, Wilhelm; Shekhah, Osama; Wöll, Christof; Redel, Engelbert

    2016-03-01

    We describe a novel procedure to fabricate a recyclable hybrid photocatalyst based on Bi2O3@HKUST-1 MOF porous thin films. Bi2O3 nanoparticles (NPs) were synthesized within HKUST-1 (or Cu3(BTC)2) surface-mounted metal-organic frameworks (SURMOFs) and characterized using X-ray diffraction (XRD), a quartz crystal microbalance (QCM) and transmission electron microscopy (TEM). The Bi2O3 semiconductor NP (diameter 1-3 nm)/SURMOF heterostructures exhibit superior photo-efficiencies compared to NPs synthesized using conventional routes, as demonstrated via the photodegradation of the nuclear fast red (NFR) dye.

  1. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... state-registered professional engineer, who is qualified to perform an evaluation of electric motor... EQUIPMENT Electric Motors Test Procedures, Materials Incorporated and Methods of Determining Efficiency § 431.17 Determination of efficiency. When a party determines the energy efficiency of an electric motor...

  2. An efficient early phase 2 procedure to screen medications for efficacy in smoking cessation.

    PubMed

    Perkins, Kenneth A; Lerman, Caryn

    2014-01-01

    Initial screening of new medications for potential efficacy (i.e., Food and Drug Administration (FDA) early phase 2), such as in aiding smoking cessation, should be efficient in identifying which drugs do, or do not, warrant more extensive (and expensive) clinical testing. This focused review outlines our research on development, evaluation, and validation of an efficient crossover procedure for sensitivity in detecting medication efficacy for smoking cessation. First-line FDA-approved medications of nicotine patch, varenicline, and bupropion were tested as model drugs, in three separate placebo-controlled studies. We also tested specificity of our procedure in identifying a drug that lacks efficacy, using modafinil. This crossover procedure showed sensitivity (increased days of abstinence) during week-long "practice" quit attempts with each of the active cessation medications (positive controls) versus placebo, but not with modafinil (negative control) versus placebo, as hypothesized. Sensitivity to medication efficacy signal was observed only in smokers high in intrinsic quit motivation (i.e., already preparing to quit soon) and not smokers low in intrinsic quit motivation, even if monetarily reinforced for abstinence (i.e., given extrinsic motivation). A crossover procedure requiring less time and fewer subjects than formal trials may provide an efficient strategy for a go/no-go decision whether to advance to subsequent phase 2 randomized clinical trials with a novel drug. Future research is needed to replicate our results and evaluate this procedure with novel compounds, identify factors that may limit its utility, and evaluate its applicability to testing efficacy of compounds for treating other forms of addiction.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, A.; Tsiounis, Y.; Frankel, Y.

    Recently, there has been an interest in making electronic cash protocols more practical for electronic commerce by developing e-cash which is divisible (e.g., a coin which can be spent incrementally but whose total purchases are limited to the monetary value of the coin). In Crypto'95, T. Okamoto presented the first practical divisible, untraceable, off-line e-cash scheme, which requires only O(log N) computations for each of the withdrawal, payment and deposit procedures, where N = (total coin value)/(smallest divisible unit). However, Okamoto's set-up procedure is quite inefficient (on the order of 4,000 multi-exponentiations, depending on the size of the RSA modulus). The authors formalize the notion of range-bounded commitment, originally used in Okamoto's account establishment protocol, and present a very efficient instantiation which allows one to construct the first truly efficient divisible e-cash system. The scheme only requires the equivalent of one (1) exponentiation for set-up, less than 2 exponentiations for withdrawal and around 20 for payment, while the size of the coin remains about 300 bytes. Hence, the withdrawal protocol is 3 orders of magnitude faster than Okamoto's, while the rest of the system remains equally efficient, allowing for implementation in smart cards. Similar to Okamoto's, the scheme is based on proofs whose cryptographic security assumptions are theoretically clarified.

  4. Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation

    PubMed Central

    Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David

    2017-01-01

    The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751
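
    As a rough illustration of the downstream arithmetic (not part of the published protocol), the per-fraction qRT-PCR signal can be converted into an average ribosome load and normalized by coding-sequence length; all names and numbers below are invented.

```python
import numpy as np

ribosomes_per_fraction = np.array([0, 1, 2, 3, 4, 5, 6])   # polysome peaks
qpcr_signal = np.array([0.30, 0.10, 0.08, 0.12, 0.15, 0.15, 0.10])
cds_length_kb = 0.75

frac = qpcr_signal / qpcr_signal.sum()            # relative mRNA abundance per fraction
mean_ribosomes = (frac * ribosomes_per_fraction).sum()
translational_efficiency = mean_ribosomes / cds_length_kb
print(f"{mean_ribosomes:.2f} ribosomes/mRNA, "
      f"{translational_efficiency:.2f} ribosomes per kb of CDS")
```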

  5. Circulating tumor cell detection: A direct comparison between negative and unbiased enrichment in lung cancer.

    PubMed

    Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin

    2017-06-01

    Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It has been demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, detects CTCs more efficiently than positive enrichment techniques that rely solely on antibodies against epithelial cell adhesion molecules. However, negative enrichment incurs significant cell loss during the isolation procedure, and as a method that uses only one type of antibody, it is inherently biased. The detection procedure and the identification of cell types also rely on skilled and experienced technicians. In the present study, the detection sensitivity of negative enrichment was compared with that of a previously described unbiased detection method. The results revealed that the unbiased detection method can efficiently detect >90% of cancer cells in blood samples containing CTCs, whereas only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach achieves a high level of sensitivity and demonstrates potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.

  6. 48 CFR 223.405 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... OF DEFENSE SOCIOECONOMIC PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Use of Recovered Materials 223.405 Procedures...

  7. UAV and GIS Based Tool for Collection and Propagation of Seed Material - First Results

    NASA Astrophysics Data System (ADS)

    Stereńczak, K.; Mroczek, P.; Jastrzębowski, S.; Krok, G.; Lisańczuk, M.; Klisz, M.; Kantorowicz, W.

    2016-06-01

    Seed management carried out by the State Forests National Forest Holding is an integral part of rational forest management in Poland. Seeds are collected mainly from stands belonging to the first category of forest reproductive material (FRM), which constitutes the largest seed base in Poland; smaller amounts are collected in selective objects of the highest FRM category (selected seed stands, seed orchards). Previous methods for estimating the seed crop were based on visual assessment of the cones in the stands before harvest. Following the rules of FRM transfer is an additional difficulty of rational seed management, as it limits where planting material may be used in Poland. Statements forecasting the seed crop and monitoring seed quality are based on annual reports from the State Forest Service, and the Forest Research Institute is responsible for preparing and publishing them. The low degree of automation and optimization is a major disadvantage of this procedure. To make the process more effective, a web-based GIS application was designed; it will make it possible to upload up-to-date information on seed efficiency, its spatial pattern and its availability. The system is currently under preparation. As a result, the project team will be able to increase the share of seed material collected from the selected seed base and to share good practices on this issue more efficiently, which in the future should yield a greater genetic gain from the selection strategy. Additionally, first results reported in the literature show the possible use of an unmanned aerial system/vehicle (UAS/V) to support the seed crop forecasting procedure.

  8. Tolerance assignment in optical design

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard Neil

    2002-09-01

    Tolerance assignment is necessary in any engineering endeavor because fabricated systems, owing to the stochastic nature of manufacturing and assembly processes, necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three components that all play an essential role. The first addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second derives from the design aspect: the development of a cost-based tolerancing procedure. The third addresses the modeling of image quality in a manner that is efficient and conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold: to determine the most critical tolerancing parameters and to better understand the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced by sub-aperture grinding and polishing processes, are modeled. The implication is that improving process control and better understanding the effects of the errors make the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost; consequently, tolerancing approaches tend to focus on image quality alone. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based tolerancing method, which requires efficient and accurate modeling of the effects of errors on the performance of optical systems. It is thus important to be able to compute the gradient and the Hessian of the figure of merit that measures image quality, taken with respect to the parameters to be toleranced. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
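
    The thesis develops an algebraic, perturbation-theory route to these derivatives; as a generic numerical stand-in, central finite differences yield the same quantities for any scalar merit function. The sketch below assumes a smooth merit function and uses a hypothetical two-parameter example.

```python
import numpy as np

def fd_gradient_hessian(merit, p, h=1e-4):
    """Central-difference gradient and Hessian of a scalar merit function
    with respect to the tolerance parameters p."""
    n = len(p)
    g = np.zeros(n)
    H = np.zeros((n, n))
    f0 = merit(p)
    for i in range(n):
        e_i = np.zeros(n); e_i[i] = h
        g[i] = (merit(p + e_i) - merit(p - e_i)) / (2 * h)
        H[i, i] = (merit(p + e_i) - 2 * f0 + merit(p - e_i)) / h**2
        for j in range(i + 1, n):
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = H[j, i] = (merit(p + e_i + e_j) - merit(p + e_i - e_j)
                                 - merit(p - e_i + e_j) + merit(p - e_i - e_j)) / (4 * h**2)
    return g, H

# Toy merit function: wavefront error growing quadratically with two tilt errors
g, H = fd_gradient_hessian(lambda p: 0.01 + 3 * p[0]**2 + p[0] * p[1] + 2 * p[1]**2,
                           np.zeros(2))
print(g, H, sep="\n")
```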

  9. Chapter 17 Sterile Plate-Based Vitrification of Adherent Human Pluripotent Stem Cells and Their Derivatives Using the TWIST Method.

    PubMed

    Neubauer, Julia C; Stracke, Frank; Zimmermann, Heiko

    2017-01-01

    Due to their high biological complexity, e.g., their close cell-to-cell contacts, cryopreservation of human pluripotent stem cells with standard slow-rate protocols is often inefficient and can hardly be standardized. Vitrification, i.e., ultrafast freezing, has already shown very good viability and recovery rates for this sensitive cell system, but it is applicable only to low cell numbers, bears a high risk of contamination, and can hardly be implemented under GxP regulations. In this chapter, a sterile plate-based vitrification method for adherent pluripotent stem cells and their derivatives is presented, based on a procedure and device for human embryonic stem cells developed by Beier et al. (Cryobiology 66:8-16, 2013). This protocol overcomes the limitations of conventional vitrification procedures, resulting in the highly efficient preservation of ready-to-use adherent pluripotent stem cells, with the possibility of vitrifying cells in multi-well formats for direct application in high-throughput screenings.

  10. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps, associated with simpler partition models, in order to finally obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all of the initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and it has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
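
    A compact sketch of the fusion pipeline is given below. It is a plausible reading of the approach rather than the author's implementation: the window size, the number of clusters, and the particular color spaces are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage import color
from scipy.ndimage import uniform_filter

def fuse_segmentations(rgb, k=5, win=7):
    """K-means in several color spaces, then a final K-means on the
    local histograms of the class labels (the fusion step)."""
    spaces = [rgb, color.rgb2hsv(rgb), color.rgb2lab(rgb)]
    h, w = rgb.shape[:2]
    features = []
    for img in spaces:
        labels = KMeans(n_clusters=k, n_init=4).fit_predict(
            img.reshape(-1, 3)).reshape(h, w)
        # Local histogram of class labels around each pixel: one smoothed
        # indicator channel per label value.
        for lab in range(k):
            features.append(uniform_filter((labels == lab).astype(float), size=win))
    feats = np.stack(features, axis=-1).reshape(-1, len(features))
    return KMeans(n_clusters=k, n_init=4).fit_predict(feats).reshape(h, w)

# usage: seg = fuse_segmentations(img)  # img: float RGB in [0, 1], shape (H, W, 3)
```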

  11. A safe, effective, and facility compatible cleaning in place procedure for affinity resin in large-scale monoclonal antibody purification.

    PubMed

    Wang, Lu; Dembecki, Jill; Jaffe, Neil E; O'Mara, Brian W; Cai, Hui; Sparks, Colleen N; Zhang, Jian; Laino, Sarah G; Russell, Reb J; Wang, Michelle

    2013-09-20

    Cleaning-in-place (CIP) for column chromatography plays an important role in therapeutic protein production. A robust and efficient CIP procedure ensures product quality, improves column lifetime and reduces the cost of the purification processes, particularly for those using expensive affinity resins, such as MabSelect protein A resin. Cleaning efficiency, resin compatibility, and facility compatibility are the three major aspects to consider in CIP process design. Cleaning MabSelect resin with 50mM sodium hydroxide (NaOH) along with 1M sodium chloride is one of the most popular cleaning procedures used in biopharmaceutical industries. However, high-concentration sodium chloride is a leading cause of corrosion in the stainless steel containers used in large-scale manufacture. Corroded containers may potentially introduce metal contaminants into purified drug products. Therefore, it is challenging to apply this cleaning procedure in commercial manufacturing due to facility compatibility and drug safety concerns. This paper reports a safe, effective, environment- and facility-friendly cleaning procedure that is suitable for large-scale affinity chromatography. An alternative salt (sodium sulfate) is used to prevent the stainless steel corrosion caused by sodium chloride. Sodium hydroxide and salt concentrations were optimized using a high-throughput screening approach to achieve the best combination of facility compatibility, cleaning efficiency and resin stability. Additionally, benzyl alcohol is applied to achieve more effective microbial control. Based on the findings, the recommended optimum cleaning strategy is cleaning MabSelect resin with 25 mM NaOH, 0.25 M Na2SO4 and 1% benzyl alcohol solution every cycle, followed by a more stringent cleaning using 50 mM NaOH with 0.25 M Na2SO4 and 1% benzyl alcohol at the end of each manufacturing campaign. A resin life cycle study using the MabSelect affinity resin demonstrates that the new cleaning strategy prolongs resin lifetime and consistently delivers high-purity drug products. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Developing a Procedures Manual for Using the Innovacq 100 System Effectively and Efficiently.

    ERIC Educational Resources Information Center

    Cubberley, Carol

    The installation of an automated acquisitions system, the Innovacq 100, demanded a review and revision of procedures in the Acquisitions Department of the University of Central Florida Library. This practicum report describes the process involved in developing the new procedures. Prior to installation, the old departmental procedures manual and…

  13. Enhanced Electron Injection and Exciton Confinement for Pure Blue Quantum-Dot Light-Emitting Diodes by Introducing Partially Oxidized Aluminum Cathode.

    PubMed

    Wang, Zhibin; Cheng, Tai; Wang, Fuzhi; Bai, Yiming; Bian, Xingming; Zhang, Bing; Hayat, Tasawar; Alsaedi, Ahmed; Tan, Zhan'ao

    2018-05-31

    Stable and efficient red (R), green (G), and blue (B) light sources based on solution-processed quantum dots (QDs) play important roles in next-generation displays and solid-state lighting technologies. The brightness and efficiency of blue QD-based light-emitting diodes (LEDs) remain inferior to those of their red and green counterparts, owing to the inherently unfavorable energy levels of the different colors of light. To solve this problem, a device structure should be designed that balances the injection of holes and electrons into the emissive QD layer. Herein, through a simple autoxidation strategy, highly bright and efficient pure blue QD-LEDs are demonstrated, with a structure of ITO/PEDOT:PSS/Poly-TPD/QDs/Al:Al2O3. The autoxidized Al:Al2O3 cathode can effectively balance the injected charges and enhance radiative recombination without introducing an additional electron transport layer (ETL). As a result, highly color-saturated blue QD-LEDs are achieved with a maximum luminance over 13,000 cd m^-2 and a maximum current efficiency of 1.15 cd A^-1. The easily controlled autoxidation procedure paves the way for achieving high-performance blue QD-LEDs.

  14. Evaluation of Hardware and Procedures for Astronaut Assembly and Repair of Large Precision Reflectors

    NASA Technical Reports Server (NTRS)

    Lake, Mark S.; Heard, Walter L., Jr.; Watson, Judith J.; Collins, Timothy J.

    2000-01-01

    A detailed procedure is presented that enables astronauts in extravehicular activity (EVA) to efficiently assemble and repair large (i.e., greater than 10 m diameter) segmented reflectors, supported by a truss, for space-based optical or radio-frequency science instruments. The procedure, estimated timelines, and reflector hardware performance are verified in simulated 0-g (neutral buoyancy) assembly tests of a 14 m diameter, offset-focus reflector test article. The test article includes a near-flight-quality, 315-member, doubly curved support truss and 7 mockup reflector panels (roughly 2 m in diameter) representing a portion of the 37 total panels needed to fully populate the reflector. Data from the tests indicate that a flight version of the design (including all reflector panels) could be assembled in less than 5 hours - less than the 6 hours normally permitted for a single EVA. This assembly rate essentially matches pre-test predictions that were based on a vast amount of historical data on EVA assembly of structures produced by NASA Langley Research Center. Furthermore, procedures and a tool for the removal and replacement of a damaged reflector panel were evaluated, and it was shown that EVA repair of this type of reflector is feasible with the use of appropriate EVA crew aids.

  15. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model.

    PubMed

    Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-Pei

    2016-01-01

    A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, no diagnostic device detects molecular targets with perfect accuracy and precision. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not actually have them, so the efficacy of the targeted therapy may be underestimated in the patients who do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employ the expectation-maximization algorithm in conjunction with the bootstrap technique for estimation of the hazard ratio and its variance. We report the results of simulation studies that empirically investigated the performance of the proposed method, and we illustrate the method with a numerical example.

  16. Optimum structural sizing of conventional cantilever and joined wing configurations using equivalent beam models

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Chen, J. L.

    1986-01-01

    The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
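
    The fully stressed design step lends itself to a very short sketch: each member's section area is rescaled by the ratio of its computed stress to the allowable stress until the sizing stabilizes. The toy two-bar "analysis" below stands in for the equivalent-beam stress recovery and uses invented values; it is an illustration of the resizing rule, not the paper's program.

```python
import numpy as np

def fsd(areas, stress_of, sigma_allow, a_min=1e-6, iters=20):
    """Fully stressed design: resize areas by |stress| / allowable stress."""
    A = np.asarray(areas, dtype=float)
    for _ in range(iters):
        sigma = stress_of(A)                      # stress analysis of current design
        A = np.maximum(A * np.abs(sigma) / sigma_allow, a_min)
    return A

# Toy "analysis": two parallel bars of equal length carry a load P together,
# so both see the same stress P / sum(A).
P = 1e5
stress = lambda A: np.full(len(A), P / A.sum())
print(fsd([1e-3, 2e-3], stress, sigma_allow=250e6))   # converges to sum(A) = P/sigma
```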

  17. Development and evaluation of a prototype in-flight instrument flight rules (IFR) procedures trainer

    NASA Technical Reports Server (NTRS)

    Aaron, J. B., Jr.; Morris, G. G.

    1981-01-01

    An in-flight instrument flight rules (IFR) procedures trainer capable of providing simulated indications of instrument flight in a typical general aviation aircraft, independent of ground-based navigation aids, was developed. The IFR navaid-related instruments and circuits from an ATC 610J table-top simulator were installed in a Cessna 172 aircraft and connected to its electrical power and pitot-static systems. The benefits expected from this hybridization concept include increased safety, by reducing the number of general aviation aircraft conducting IFR training flights in congested terminal areas, and reduced fuel use and instruction costs, by lessening the need to fly to and from navaid-equipped airports and by increasing the efficiency of the required in-flight training. Technical feasibility was demonstrated, and the operational feasibility of the concept was evaluated. Results indicated that the in-flight simulator is an effective training device for teaching IFR procedural skills.

  18. Self-attitude awareness training: An aid to effective performance in microgravity and virtual environments

    NASA Technical Reports Server (NTRS)

    Parker, Donald E.; Harm, D. L.; Florer, Faith L.

    1993-01-01

    This paper describes ongoing development of training procedures to enhance self-attitude awareness in astronaut trainees. The procedures are based on observations regarding self-attitude (perceived self-orientation and self-motion) reported by astronauts. Self-attitude awareness training is implemented on a personal computer system and consists of lesson stacks programmed using Hypertalk with Macromind Director movie imports. Training evaluation will be accomplished by an active search task using the virtual Spacelab environment produced by the Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME-PAT) as well as by assessment of astronauts' performance and sense of well-being during orbital flight. The general purpose of self-attitude awareness training is to use as efficiently as possible the limited DOME-PAT training time available to astronauts prior to a space mission. We suggest that similar training procedures may enhance the performance of virtual environment operators.

  19. ARTHROSCOPIC TREATMENT OF ACROMIOCLAVICULAR JOINT DISLOCATION BY TIGHT ROPE TECHNIQUE (ARTHREX®)

    PubMed Central

    Gómez Vieira, Luis Alfredo; Visco, Adalberto; Daneu Fernandes, Luis Filipe; Gómez Cordero, Nicolas Gerardo

    2015-01-01

    To present the arthroscopic treatment of acute acromioclavicular dislocation with the Tight Rope (Arthrex®) system and to evaluate the results obtained with this procedure. Methods: Between August 2006 and May 2007, 10 shoulders of 10 patients with acute acromioclavicular dislocation underwent arthroscopic repair using the Tight Rope (Arthrex®) system. The minimum follow-up was 12 months, with a mean of 15 months. Age ranged from 26 to 42 years, with a mean of 34 years; all patients were male. Radiological evaluation was performed with trauma-series x-rays. The patients were seen weekly during the first month and again three months after the procedure. Clinical evaluation was based on the University of California at Los Angeles (UCLA) criteria. Results: All patients were satisfied after the arthroscopic procedure, and the mean UCLA score was 32.5. Conclusion: Arthroscopic treatment of acute acromioclavicular dislocation with the Tight Rope (Arthrex®) system proved to be an efficient technique. PMID:26998453

  20. From serological to computer cross-matching in nine hospitals.

    PubMed

    Georgsen, J; Kristensen, T

    1998-01-01

    In 1991 it was decided to reorganise the transfusion service of the County of Funen. The aims were to standardise and improve the quality of blood components, laboratory procedures and the transfusion service, and to reduce the number of outdated blood units. Part of the efficiency gains was reinvested in a dedicated computer system making it possible, among other things, to change the cross-match procedure from serological to computer cross-matching according to the ABCD concept. This communication describes how this transition was performed in terms of laboratory techniques, education of personnel and implementation of the computer system, and indicates the results obtained. The Funen Transfusion Service has by now performed more than 100,000 red cell transfusions based on ABCD cross-matching and has not encountered any problems. Major results are significant reductions in cross-match procedures and blood grouping, as well as in the number of outdated blood components.

  1. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure is developed that clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels either to label the resulting clusters or to perform a stratified estimate using the clusters as strata. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
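
    The cluster-as-strata estimate reduces to a weighted mean: weight each cluster by its pixel share and average the labels of the labeled pixels that fall in it. Below is a minimal sketch with synthetic data; any unsupervised clusterer (CLASSY, AMOEBA, ISOCLS, or otherwise) could supply the cluster assignments, and all variable names are illustrative.

```python
import numpy as np

def stratified_proportion(cluster_id, labeled_idx, labeled_is_crop):
    """Stratified estimate of a crop proportion, using clusters as strata."""
    cluster_id = np.asarray(cluster_id)
    strata, counts = np.unique(cluster_id, return_counts=True)
    weights = counts / counts.sum()               # stratum size as weight
    est = 0.0
    for s, w in zip(strata, weights):
        in_s = cluster_id[labeled_idx] == s
        if in_s.any():                            # mean label among labeled pixels
            est += w * np.asarray(labeled_is_crop)[in_s].mean()
    return est

rng = np.random.default_rng(0)
cid = rng.integers(0, 3, size=1000)               # cluster label per pixel
lab_idx = rng.choice(1000, size=50, replace=False)  # the few labeled pixels
print(stratified_proportion(cid, lab_idx, rng.random(50) < 0.4))
```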

  2. Numerical simulation of the vortical flow around a pitching airfoil

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Li, Gaohua; Wang, Fuxin

    2017-04-01

    In order to study the dynamic behavior of the flapping wing, the vortical flow around a pitching NACA0012 airfoil is investigated. The unsteady flow field is obtained by a very efficient zonal procedure based on the velocity-vorticity formulation, and the Reynolds number based on the chord length of the airfoil is set to 1 million. The zonal procedure divides the whole computational domain into three zones: a potential flow zone, a boundary layer zone and a Navier-Stokes zone. Since vorticity is absent in the potential flow zone, the vorticity transport equation needs to be solved only in the boundary layer and Navier-Stokes zones; in addition, the boundary layer equations are solved in the boundary layer zone. This arrangement drastically reduces the computation time compared with traditional numerical methods. After the flow field computation, the evolution of the vortices around the airfoil is analyzed in detail.

  3. Automated Clean Chemistry for Bulk Analysis of Environmental Swipe Samples - FY17 Year End Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ticknor, Brian W.; Metzger, Shalina C.; McBay, Eddy H.

    Sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment to shorten lengthy and costly manual chemical purification procedures. This development addresses a serious need in the International Atomic Energy Agency's Network of Analytical Laboratories (IAEA NWAL) to increase efficiency in the Bulk Analysis of Environmental Samples for Safeguards program with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on COTS equipment. It was modified for uranium/plutonium separations using renewable columns packed with Eichrom TEVA and UTEVA resins, with a chemical separation method based on the Oak Ridge National Laboratory (ORNL) NWAL chemical procedure. The newly designed prepFAST-SR has had several upgrades compared with the original prepFAST-MC2. Both systems are currently installed in the Ultra-Trace Forensics Science Center at ORNL.

  4. Development of the manufacture of billets based on high-strength aluminum alloys

    NASA Astrophysics Data System (ADS)

    Korostelev, V. F.; Denisov, M. S.; Bol'shakov, A. E.; Van Khieu, Chan

    2017-09-01

    When pressure is applied during casting as an external action on the melt, the problems related mainly to mold filling are solved; however, some casting defects cannot be avoided this way. The experimental results demonstrate that complete compensation of shrinkage under pressure can be achieved by compressing the casting by 8-10% before solidification begins and by 2-3% during the transition of the metal from the liquid to the solid state. The procedure based on compressing the liquid metal can be applied efficiently to the manufacture of high-strength aluminum alloy castings. The selection of engineering parameters is substantiated, and examples of castings made of V95 alloy according to the developed procedure are given. In addition, the article discusses problems related to the design of the process and special-purpose equipment, software, and control automation.

  5. Procedures for woody vegetation surveys in the Kazgail rural council area, Kordofan, Sudan

    USGS Publications Warehouse

    Falconer, Allan; Cross, Matthew D.; Orr, Donald G.

    1990-01-01

    Efforts to reforest parts of the Kordofan Province of Sudan are receiving support from international development agencies. These efforts include planning and implementing reforestation activities that require the collection of natural resources and socioeconomic data, and the preparation of base maps. A combination of remote sensing, geographic information system and global positioning system procedures is used in this study to meet these requirements. Remote sensing techniques were used to provide base maps and to guide the compilation of vegetation resource maps; these techniques provided a rapid and efficient method for documenting available resources. Pocket-sized global positioning system units were used to establish the location of field data collected for mapping and resource analysis, and a microcomputer data management system tabulated and displayed the field data. The resulting system for data analysis, management, and planning has been adopted for the mapping and inventory of the Gum Belt of Sudan.

  6. PAPR reduction based on tone reservation scheme for DCO-OFDM indoor visible light communications.

    PubMed

    Bai, Jurong; Li, Yong; Yi, Yang; Cheng, Wei; Du, Huimin

    2017-10-02

    High peak-to-average power ratio (PAPR) leads to out-of-band power and in-band distortion in direct current-biased optical orthogonal frequency division multiplexing (DCO-OFDM) systems. In order to reduce the PAPR effectively, with faster convergence and lower complexity, this paper proposes a tone reservation based scheme that combines a signal-to-clipping noise ratio (SCR) procedure with a least squares approximation (LSA) procedure. In the proposed scheme, the transmitter of the DCO-OFDM indoor visible light communication (VLC) system is designed to transform the PAPR-reduced signal into a real-valued positive OFDM signal without doubling the transmission bandwidth. Moreover, the communication distance and the light-emitting diode (LED) irradiance angle are taken into consideration in the evaluation of the system bit error rate (BER). The PAPR reduction efficiency of the proposed scheme is remarkable for DCO-OFDM indoor VLC systems.
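
    For orientation, the sketch below shows the two ingredients such a scheme manipulates: the PAPR of a real-valued DCO-OFDM symbol built with a Hermitian-symmetric IFFT, and one clip-and-project step onto reserved tones. The tone set, clipping level, and constellation are illustrative assumptions; this is the generic tone-reservation idea, not the paper's SCR/LSA procedure itself.

```python
import numpy as np

N = 256
reserved = np.arange(1, 17)                    # hypothetical reserved tone set
data = np.zeros(N, complex)
used = np.setdiff1d(np.arange(1, N // 2), reserved)
data[used] = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], used.size)  # QPSK
data[-used[::-1]] = np.conj(data[used])[::-1]  # Hermitian symmetry -> real signal

x = np.fft.ifft(data).real
papr = lambda s: 10 * np.log10(np.max(s**2) / np.mean(s**2))
print("PAPR before:", round(papr(x), 2), "dB")

clip = np.clip(x, -2 * x.std(), 2 * x.std())   # soft amplitude limit
noise = np.fft.fft(x - clip)                   # spectrum of the clipped-off peaks
keep = np.zeros(N, complex)
keep[reserved] = noise[reserved]               # project clipping noise onto
keep[-reserved[::-1]] = np.conj(noise[reserved])[::-1]  # reserved tones only
x_tr = x - np.fft.ifft(keep).real              # peak-canceling correction
print("PAPR after one TR step:", round(papr(x_tr), 2), "dB")
```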

  7. Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui

    2015-06-01

    Conservation voltage reduction (CVR) and distributed generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumption. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively, and a multiple replications procedure is developed to test the stability of the solution and to calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
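
    The multiple replications procedure essentially reduces to a t-based confidence interval on the gap between the candidate solution's cost and the replication-wise optimal costs. A minimal sketch with invented gap samples (the sampling and re-solving steps are assumed to have happened upstream):

```python
import numpy as np
from scipy import stats

def mrp_gap_ci(gap_samples, alpha=0.05):
    """Mean optimality gap and a (1 - alpha) t-confidence interval from
    independent replications of the sampled problem."""
    g = np.asarray(gap_samples, dtype=float)
    n = g.size
    half = stats.t.ppf(1 - alpha / 2, n - 1) * g.std(ddof=1) / np.sqrt(n)
    return g.mean(), (g.mean() - half, g.mean() + half)

# gap_k = candidate-solution cost minus replication-k optimal cost (toy numbers)
mean_gap, ci = mrp_gap_ci([0.8, 1.1, 0.6, 0.9, 1.3, 0.7, 1.0])
print(f"mean gap {mean_gap:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```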

  8. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity with reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computational scalability. We further investigate the trade-off between network throughput and computational complexity in the routing table update procedure through a simulation study.
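
    A hedged sketch of the kind of policy described, shortest-path routing with first-fit wavelength assignment and hottest-request-first ordering, is given below. The graph, request format, and intensity field are assumptions for illustration, not the authors' testbed code.

```python
import networkx as nx

def rwa_hottest_first(g, requests, n_wavelengths):
    """Route requests on shortest paths, hottest first, and assign the first
    wavelength that is free on every link of the chosen path (first-fit)."""
    used = {tuple(sorted(e)): set() for e in g.edges}     # wavelengths per link
    assignment = {}
    for r in sorted(requests, key=lambda r: -r["intensity"]):
        path = nx.shortest_path(g, r["src"], r["dst"])
        links = [tuple(sorted(e)) for e in zip(path, path[1:])]
        for w in range(n_wavelengths):                     # first-fit scan
            if all(w not in used[l] for l in links):
                for l in links:
                    used[l].add(w)
                assignment[(r["src"], r["dst"])] = (path, w)
                break                                      # else: request blocked
    return assignment

g = nx.cycle_graph(6)
reqs = [{"src": 0, "dst": 3, "intensity": 5}, {"src": 1, "dst": 4, "intensity": 2}]
print(rwa_hottest_first(g, reqs, n_wavelengths=4))
```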

  9. Dynamic metabolic modeling for a MAB bioprocess.

    PubMed

    Gao, Jianying; Gorenflo, Volker M; Scharer, Jeno M; Budman, Hector M

    2007-01-01

    Production of monoclonal antibodies (MAb) for diagnostic or therapeutic applications has become an important task in the pharmaceutical industry. The efficiency of high-density reactor systems can be potentially increased by model-based design and control strategies. Therefore, a reliable kinetic model for cell metabolism is required. A systematic procedure based on metabolic modeling is used to model nutrient uptake and key product formation in a MAb bioprocess during both the growth and post-growth phases. The approach combines the key advantages of stoichiometric and kinetic models into a complete metabolic network while integrating the regulation and control of cellular activity. This modeling procedure can be easily applied to any cell line during both the cell growth and post-growth phases. Quadratic programming (QP) has been identified as a suitable method to solve the underdetermined constrained problem related to model parameter identification. The approach is illustrated for the case of murine hybridoma cells cultivated in stirred spinners.
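
    The quadratic programming step can be illustrated as a bounded least-squares fit of fluxes to measured rates under the stoichiometric constraints, which is a small QP. The toy network, rates, and bounds below are invented for illustration and are not the paper's hybridoma model.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Pick fluxes v that best reproduce the measured net rates b under S v ~ b
# while respecting irreversibility bounds: min ||S v - b||^2, lb <= v <= ub.
S = np.array([[ 1., -1.,  0.,  0.],   # stoichiometric matrix (metabolites x fluxes)
              [ 0.,  1., -1., -1.],
              [ 0.,  0.,  1., -1.]])
b = np.array([1.0, 0.0, 0.2])          # measured uptake/production rates
lb = np.zeros(4)                       # irreversible reactions
ub = np.full(4, 10.0)

res = lsq_linear(S, b, bounds=(lb, ub))
print("fluxes:", np.round(res.x, 3), " residual:", round(res.cost, 6))
```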

  10. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of 3D full-color building models. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles such that each model face appears in at least two, and no more than three, of the pictures taken at those angles. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All test cases show exactly the same topology and a reasonably low dimensional error ratio, proving the applicability of the algorithm.

  11. Progress in integrated-circuit horn antennas for receiver applications. Part 1: Antenna design

    NASA Technical Reports Server (NTRS)

    Eleftheriades, George V.; Ali-Ahmad, Walid Y.; Rebeiz, Gabriel M.

    1992-01-01

    The purpose of this work is to present a systematic method for the design of multimode quasi-integrated horn antennas. The design methodology is based on the Gaussian beam approach, and the structures are optimized for maximum fundamental Gaussian coupling efficiency. For this purpose, a hybrid technique is employed in which the integrated part of the antennas is treated using full-wave analysis, whereas the machined part is treated using an approximate method. This results in a simple and efficient design process. The developed design procedure has been applied to the design of 20, 23, and 25 dB quasi-integrated horn antennas, all with a Gaussian coupling efficiency exceeding 97 percent. The designed antennas have been tested and characterized using both full-wave analysis and 90 GHz/370 GHz measurements.

  12. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective techniques with complex analysis procedures, such as computational fluid dynamics (CFD) codes. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to pose the optimization problem. Techniques that enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process, giving the designer the capability to emphasize specific design objectives during optimization. The demonstration of the procedure utilizes a CFD code that solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
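
    The K-S aggregation itself is compact enough to sketch. The Python illustration below folds two weighted toy objectives into the smooth envelope KS = f_max + (1/rho) * ln(sum_i exp(rho * (f_i - f_max))) and minimizes it with BFGS; the objectives, weights, and draw-down factor rho are invented stand-ins for the aerodynamic and sonic-boom metrics.

    ```python
    # Kreisselmeier-Steinhauser aggregation sketch: one smooth envelope over
    # weighted objectives, minimized by BFGS.
    import numpy as np
    from scipy.optimize import minimize

    RHO = 50.0                       # draw-down factor: larger -> closer to max()
    W = np.array([1.0, 0.5])         # designer-chosen objective weights

    def objectives(x):
        f1 = (x[0] - 1.0)**2 + x[1]**2        # e.g. aerodynamic penalty (toy)
        f2 = x[0]**2 + (x[1] + 2.0)**2        # e.g. acoustic penalty (toy)
        return W * np.array([f1, f2])

    def ks(x):
        f = objectives(x)
        fmax = f.max()               # subtract the max for numerical stability
        return fmax + np.log(np.exp(RHO * (f - fmax)).sum()) / RHO

    res = minimize(ks, x0=np.zeros(2), method="BFGS")
    print("compromise design:", res.x)
    ```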

  13. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach

    PubMed Central

    Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-01

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR=6 and PRD=1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring. PMID:29337892

  14. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.

    PubMed

    Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-16

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
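
    The two figures of merit quoted in both records are easy to make concrete. A short Python sketch with a synthetic stand-in signal and a naive decimate-and-interpolate codec (deliberately not the paper's adaptive-predictor algorithm):

    ```python
    # CR and PRD metrics for a toy codec on a synthetic stand-in signal.
    import numpy as np

    def prd(x, x_rec):
        """Percentage root-mean-square difference, original vs reconstruction."""
        return 100.0 * np.sqrt(np.sum((x - x_rec)**2) / np.sum(x**2))

    def compression_ratio(n_original, n_compressed):
        return n_original / n_compressed

    t = np.linspace(0, 1, 360)
    ecg = np.sin(2 * np.pi * 1.2 * t)          # stand-in for an ECG trace
    ecg_ds = ecg[::6]                          # decimate by 6 (toy "B/K")
    ecg_rec = np.interp(t, t[::6], ecg_ds)     # naive reconstruction
    print("CR ~", compression_ratio(ecg.size, ecg_ds.size))
    print("PRD =", round(prd(ecg, ecg_rec), 4))
    ```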

  15. High-throughput process development of an alternative platform for the production of virus-like particles in Escherichia coli.

    PubMed

    Ladd Effio, Christopher; Baumann, Pascal; Weigel, Claudia; Vormittag, Philipp; Middelberg, Anton; Hubbuch, Jürgen

    2016-02-10

    The production of safe vaccines against untreatable or new diseases has pushed research in the field of virus-like particles (VLPs). Currently, a large number of commercial VLP-based human vaccines and vaccine candidates are available or under development. A promising VLP production route is the controlled in vitro assembly of virus proteins into capsids. In the study reported here, a high-throughput screening (HTS) procedure was implemented for the upstream process development of a VLP platform in bacterial cell systems. Miniaturized cultivations were carried out in 48-well format in the BioLector system (m2p-Labs, Germany) using an Escherichia coli strain with a tac promoter producing the murine polyomavirus capsid protein (VP1). The screening procedure incorporated micro-scale cultivations, HTS cell disruption by sonication and HTS-compatible analytics by capillary gel electrophoresis. Cultivation temperatures, shaking speeds, induction and medium conditions were varied to optimize product expression in E. coli. The most efficient system was selected based on an evaluation of soluble and insoluble product concentrations as well as on the percentage of product in the total soluble protein fraction. The optimized system was scaled up to the 2.5 L shaker flask scale and purified using an anion exchange chromatography membrane adsorber, followed by a size exclusion chromatography polishing procedure. For proof of concept, purified VP1 capsomeres were assembled under defined buffer conditions into empty capsids and characterized using transmission electron microscopy (TEM). The presented HTS procedure allowed for fast development of an efficient production process for VLPs in E. coli. Under optimized cultivation conditions, the VP1 product accounted for up to 43% of the total soluble protein fraction, yielding 1.63 mg VP1 per mL of applied cultivation medium. The developed production process strongly promotes the murine polyoma-VLP platform, moving towards an industrially feasible technology for new chimeric vaccines. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Energy Saving Melting and Revert Reduction Technology: Melting Efficiency in Die Casting Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Schwam

    2012-12-15

    This project addressed multiple aspects of the aluminum melting and handling in die casting operations, with the objective of increasing the energy efficiency while improving the quality of the molten metal. The efficiency of melting has always played an important role in the profitability of aluminum die casting operations. Consequently, die casters need to make careful choices in selecting and operating melting equipment and procedures. The capital cost of new melting equipment with higher efficiency can sometimes be recovered relatively fast when it replaces old melting equipment with lower efficiency. Upgrades designed to improve energy efficiency of existing equipment may be well justified. Energy efficiency is however not the only factor in optimizing melting operations. Melt losses and metal quality are also very important. Selection of melting equipment has to take into consideration the specific conditions at the die casting shop such as availability of floor space, average quantity of metal used as well as the ability to supply more metal during peaks in demand. In all these cases, it is essential to make informed decisions based on the best available data.

  17. The effects of hoechst 33342 staining and the male sample donor on the sorting efficiency of canine spermatozoa.

    PubMed

    Rodenas, C; Lucas, X; Tarantini, T; Del Olmo, D; Roca, J; Vazquez, J M; Martinez, E A; Parrilla, I

    2014-02-01

    The aim of this study was to evaluate the influence of Hoechst 33342 (H-42) concentration and of the male donor on the efficiency of the sex-sorting procedure in canine spermatozoa. Semen samples from six dogs (three ejaculates/dog) were diluted to 100 × 10⁶ sperm/ml, split into four aliquots, stained with increasing H-42 concentrations (5, 7.5, 10 and 12.5 μl, respectively) and sorted by flow cytometry. The rates of non-viable (FDA+), oriented (OS) and selected spermatozoa (SS), as well as the average sorting rates (SR, sorted spermatozoa/s), were used to determine the sorting efficiency. The effects of the sorting procedure on the quality of sorted spermatozoa were evaluated in terms of total motility (TM), percentage of viable spermatozoa (spermatozoa with membrane and acrosomal integrity) and percentage of spermatozoa with reacted/damaged acrosomes. X- and Y-chromosome-bearing sperm populations were identified in all of the samples stained with 7.5, 10 and 12.5 μl of H-42, while these two populations were only identified in 77.5% of samples stained with 5 μl. The values of OS, SS and SR were influenced by the male donor (p < 0.01) but not by the H-42 concentration used. The quality of sorted sperm samples immediately after sorting was similar to that of fresh samples, while centrifugation resulted in a significant reduction (p < 0.05) in TM and in the percentage of viable spermatozoa and a significant increase (p < 0.01) in the percentage of spermatozoa with damaged/reacted acrosomes. In conclusion, the sex-sorting of canine spermatozoa by flow cytometry can be performed successfully using H-42 concentrations between 7.5 and 12.5 μl. The efficiency of the sorting procedure varies based on the dog from which the sperm sample derives. © 2013 Blackwell Verlag GmbH.

  18. Coupling of metal-organic frameworks-containing monolithic capillary-based selective enrichment with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry for efficient analysis of protein phosphorylation.

    PubMed

    Li, Daojin; Yin, Danyang; Chen, Yang; Liu, Zhen

    2017-05-19

    Protein phosphorylation is a major post-translational modification, which plays a vital role in cellular signaling of numerous biological processes. Mass spectrometry (MS) has been an essential tool for the analysis of protein phosphorylation, for which it is a key step to selectively enrich phosphopeptides from complex biological samples. In this study, metal-organic frameworks (MOFs)-based monolithic capillary has been successfully prepared as an effective sorbent for the selective enrichment of phosphopeptides and has been off-line coupled with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) for efficient analysis of phosphopeptides. Using β-casein as a representative phosphoprotein, efficient phosphorylation analysis by this off-line platform was verified. Phosphorylation analysis of a nonfat milk sample was also demonstrated. Through introducing large surface areas and highly ordered pores of MOFs into the monolithic column, the MOFs-based monolithic capillary exhibited several significant advantages, such as excellent selectivity toward phosphopeptides, superb tolerance to interference and a simple operation procedure. Because of these highly desirable properties, the MOFs-based monolithic capillary could be a useful tool for protein phosphorylation analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Qpais: A Web-Based Expert System for Assistedidentification of Quarantine Stored Insect Pests

    NASA Astrophysics Data System (ADS)

    Huang, Han; Rajotte, Edwin G.; Li, Zhihong; Chen, Ke; Zhang, Shengfang

    Stored insect pests can seriously depredate stored products, causing worldwide economic losses. Pests enter countries traveling with transported goods, so inspection and quarantine activities are essential to prevent their invasion and spread. Identification of quarantine stored insect pests is an important component of China's inspection and quarantine procedure; it is necessary not only to identify whether a captured species is invasive, but also to determine control procedures for stored insect pests. With the development of information technologies, many expert systems that aid in the identification of agricultural pests have been developed, but expert systems for the identification of quarantine stored insect pests are rare and have mainly been developed for stand-alone PCs. This paper describes the development of a web-based expert system for identification of quarantine stored insect pests as part of the China 11th Five-Year National Scientific and Technological Support Project (115 Project). Based on user needs, textual knowledge and images were gathered from the literature and expert interviews. ASP.NET, C# and SQL were used to program the system. Improvement of identification efficiency and flexibility was achieved using a new inference method called the characteristic-select-based spatial distance method. The expert system can assist in identifying 150 species of quarantine stored insect pests and provides detailed information for each species. The expert system has been evaluated in two steps: system testing and identification testing. With an 85% rate of correct identification and high efficiency, the evaluation shows that this expert system can be used in identification work for quarantine stored insect pests.
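
    The record does not define the characteristic-select-based spatial distance method, but one plausible reading is a nearest-match score computed over only the characters the user has selected and answered. A hypothetical Python illustration; the species, characters, and encodings are all fabricated:

    ```python
    # Hypothetical distance-based identification step: rank species by
    # Euclidean distance over the subset of observed characters only.
    import math

    species_db = {
        "Sitophilus oryzae":    {"size_mm": 3.0, "snout": 1, "wing_pattern": 2},
        "Tribolium castaneum":  {"size_mm": 3.5, "snout": 0, "wing_pattern": 1},
        "Rhyzopertha dominica": {"size_mm": 2.8, "snout": 0, "wing_pattern": 0},
    }

    def rank(observed):
        """Rank species by distance over the characters the user answered."""
        scores = []
        for name, chars in species_db.items():
            d = math.sqrt(sum((chars[k] - v) ** 2 for k, v in observed.items()))
            scores.append((d, name))
        return sorted(scores)

    print(rank({"size_mm": 2.9, "snout": 1}))   # best match listed first
    ```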

  20. A typology of specialists' clinical roles.

    PubMed

    Forrest, Christopher B

    2009-06-08

    High use of specialist physicians and specialized procedures coupled with low exposure to primary care are distinguishing traits of the US health care system. Although the tasks of the primary care medical home are well established, consensus on the normative clinical roles of specialist physicians has not been achieved, which makes it unlikely that the specialist workforce is being used most effectively and efficiently. This article describes a typology of specialists' clinical roles that is based on the conceptual basis for health care specialism and empirical evaluations of the specialty referral process. The report concludes with a discussion on the implications of the typology for improving the effectiveness and efficiency of the primary-specialty care interface.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton-Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.
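
    As a hedged illustration of the Jacobian-free Newton-Krylov building block (not the HO-LO system itself), the Python sketch below takes one backward-Euler step of a toy nonlinear diffusion problem with a T⁴-style nonlinearity using SciPy's newton_krylov, which never forms the Jacobian explicitly. Grid size, time step, and coefficients are illustrative.

    ```python
    # Jacobian-free Newton-Krylov sketch: implicit step of u_t = d2(u^4)/dx2.
    import numpy as np
    from scipy.optimize import newton_krylov

    N, dt, dx = 50, 1e-3, 1.0 / 50
    u0 = np.linspace(1.0, 2.0, N)          # previous time-step solution

    def residual(u):
        w = u**4                           # strong nonlinearity, as in T^4 physics
        lap = np.zeros_like(u)
        lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dx**2
        return u - u0 - dt * lap           # backward-Euler residual, F(u) = 0

    u_new = newton_krylov(residual, u0.copy(), f_tol=1e-8)
    print("max update:", np.abs(u_new - u0).max())
    ```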

  2. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract the regions of interest (ROIs) of an image and apply different compression ratios to them. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the remaining regions with different compression ratios via the popular JPEG algorithm. Experimental results and the quantitative and qualitative analysis in the paper show strong performance compared with traditional color image compression approaches.
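
    A minimal sketch of the non-uniform encoding stage in Python with Pillow, assuming the attention model has already produced an ROI box (here a fixed stub, and "campus.jpg" is any test image): the background goes through an aggressive JPEG round-trip while the ROI is kept at high quality.

    ```python
    # Non-uniform compression sketch: low-quality background, high-quality ROI.
    from io import BytesIO
    from PIL import Image

    def jpeg_roundtrip(img, quality):
        """Encode to JPEG at the given quality and decode back."""
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    img = Image.open("campus.jpg").convert("RGB")   # any test image
    roi_box = (100, 80, 300, 240)                   # attention-model output (stub)

    background = jpeg_roundtrip(img, quality=15)    # aggressive compression
    roi = jpeg_roundtrip(img.crop(roi_box), quality=90)
    background.paste(roi, roi_box[:2])              # composite ROI back in
    background.save("nonuniform.jpg", quality=90)
    ```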

  3. Operating room data management: improving efficiency and safety in a surgical block.

    PubMed

    Agnoletti, Vanni; Buccioli, Matteo; Padovani, Emanuele; Corso, Ruggero M; Perger, Peter; Piraccini, Emanuele; Orelli, Rebecca Levy; Maitan, Stefano; Dell'amore, Davide; Garcea, Domenico; Vicini, Claudio; Montella, Teresa Maria; Gambale, Giorgio

    2013-03-11

    European healthcare systems are facing a difficult period characterized by increasing costs and spending cuts due to economic problems, so there is an urgent need for new tools to support hospital decision makers' work. This project aimed to develop a data recording system for the surgical process of every patient within the operating theatre. The primary goal was to create a practical and easy data processing tool to give hospital managers, anesthesiologists and surgeons the information basis to increase operating theater efficiency and patient safety. The developed data analysis tool is embedded in an Oracle Business Intelligence environment, which processes data into simple and understandable performance tachometers and tables. The underlying data analysis is based on the scientific literature and the project team's experience with tracked data. The system login is layered, and different users have access to different data outputs depending on their professional needs. The system is divided into the three profile types Manager, Anesthesiologist and Surgeon. Every profile includes subcategories where operators can access more detailed data analyses. The first data output screen shows general information and guides the user towards more detailed data analysis. The data recording system enabled the registration of 14,675 surgical operations performed from 2009 to 2011. Raw utilization increased from 44% in 2009 to 52% in 2011. The number of high-complexity surgical procedures (≥120 minutes) increased in certain units while decreasing in others. The number of unscheduled procedures performed was reduced (from 25% in 2009 to 14% in 2011) while maintaining the same percentage of surgical procedures. The number of overtime events decreased in 2010 (23%) and 2011 (21%) compared with 2009 (28%), and the delays expressed in minutes are almost the same (mean 78 min). The direct link found between the complexity of surgical procedures, the number of unscheduled procedures and overtime shows a positive impact of the project on OR management. Despite consistency in the complexity of procedures (19% in 2009 and 21% in 2011), surgical groups succeeded in reducing the number of unscheduled procedures (from 25% in 2009 to 14% in 2011) and overtime (from 28% in 2009 to 21% in 2011). The developed project gives healthcare managers, anesthesiologists and surgeons useful information to increase surgical theater efficiency and patient safety. Even in difficult economic times, it is possible to develop something of real value to patients and the healthcare system.

  4. Operating room data management: improving efficiency and safety in a surgical block

    PubMed Central

    2013-01-01

    Background European healthcare systems are facing a difficult period characterized by increasing costs and spending cuts due to economic problems, so there is an urgent need for new tools to support hospital decision makers' work. This project aimed to develop a data recording system for the surgical process of every patient within the operating theatre. The primary goal was to create a practical and easy data processing tool to give hospital managers, anesthesiologists and surgeons the information basis to increase operating theater efficiency and patient safety. Methods The developed data analysis tool is embedded in an Oracle Business Intelligence environment, which processes data into simple and understandable performance tachometers and tables. The underlying data analysis is based on the scientific literature and the project team's experience with tracked data. The system login is layered, and different users have access to different data outputs depending on their professional needs. The system is divided into the three profile types Manager, Anesthesiologist and Surgeon. Every profile includes subcategories where operators can access more detailed data analyses. The first data output screen shows general information and guides the user towards more detailed data analysis. The data recording system enabled the registration of 14,675 surgical operations performed from 2009 to 2011. Results Raw utilization increased from 44% in 2009 to 52% in 2011. The number of high-complexity surgical procedures (≥120 minutes) increased in certain units while decreasing in others. The number of unscheduled procedures performed was reduced (from 25% in 2009 to 14% in 2011) while maintaining the same percentage of surgical procedures. The number of overtime events decreased in 2010 (23%) and 2011 (21%) compared with 2009 (28%), and the delays expressed in minutes are almost the same (mean 78 min). The direct link found between the complexity of surgical procedures, the number of unscheduled procedures and overtime shows a positive impact of the project on OR management. Despite consistency in the complexity of procedures (19% in 2009 and 21% in 2011), surgical groups succeeded in reducing the number of unscheduled procedures (from 25% in 2009 to 14% in 2011) and overtime (from 28% in 2009 to 21% in 2011). Conclusions The developed project gives healthcare managers, anesthesiologists and surgeons useful information to increase surgical theater efficiency and patient safety. Even in difficult economic times, it is possible to develop something of real value to patients and the healthcare system. PMID:23496977
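
    Back-of-envelope versions of the reported indicators can be computed from per-case records; a Python sketch with fabricated records and assumed field names:

    ```python
    # Toy KPI computation: raw utilization, unscheduled share, and the share
    # of high-complexity (>=120 min) procedures. Records are fabricated.
    cases = [
        {"minutes": 145, "scheduled": True},
        {"minutes":  60, "scheduled": False},
        {"minutes":  95, "scheduled": True},
        {"minutes": 180, "scheduled": True},
    ]
    available_minutes = 10 * 60         # one room, one working day (assumed)

    used = sum(c["minutes"] for c in cases)
    print("raw utilization: %.0f%%" % (100 * used / available_minutes))
    print("unscheduled:     %.0f%%" %
          (100 * sum(not c["scheduled"] for c in cases) / len(cases)))
    print("complex >=120':  %.0f%%" %
          (100 * sum(c["minutes"] >= 120 for c in cases) / len(cases)))
    ```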

  5. Numerical study of a novel procedure for installing the tower and Rotor Nacelle Assembly of offshore wind turbines based on the inverted pendulum principle

    NASA Astrophysics Data System (ADS)

    Guachamin Acero, Wilson; Gao, Zhen; Moan, Torgeir

    2017-09-01

    Current installation costs of offshore wind turbines (OWTs) are high and profit margins in the offshore wind energy sector are low; it is thus necessary to develop installation methods that are more efficient and practical. This paper presents a numerical study (based on a global response analysis of marine operations) of a novel procedure for installing the tower and Rotor Nacelle Assemblies (RNAs) on bottom-fixed foundations of OWTs. The installation procedure is based on the inverted pendulum principle. A cargo barge is used to transport the OWT assembly in a horizontal position to the site, and a medium-size Heavy Lift Vessel (HLV) is then employed to lift and up-end the OWT assembly using a special upending frame. The main advantage of this novel procedure is that the need for a huge HLV (in terms of lifting height and capacity) is eliminated. This novel method requires that the cargo barge stays on the leeward side of the HLV (which can be positioned with the best heading) during the entire installation, to benefit from the shielding effects of the HLV on the motions of the cargo barge; the foundations therefore need to be installed with a specific heading based on wave direction statistics of the site and a typical installation season. Following a systematic approach based on numerical simulations of actual operations, potential critical installation activities, corresponding critical events, and limiting (response) parameters are identified. In addition, operational limits for some of the limiting parameters are established in terms of allowable limits of sea states. Following a preliminary assessment of these operational limits, the duration of the entire operation, the equipment used, and weather and water-depth sensitivity, this novel procedure is demonstrated to be viable.

  6. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract caused by breathing or coughing, and secretions, which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
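
    As one hedged example of the kind of frame-level criterion such a summarizer needs, the Python sketch below drops blurry frames using the variance-of-Laplacian sharpness measure in OpenCV; the threshold, filename, and the criterion itself are assumptions, not the paper's exact models.

    ```python
    # Illustrative "non-informative frame" filter: discard blurry frames.
    import cv2

    BLUR_THRESHOLD = 100.0   # tune per camera/endoscope (assumed value)

    def keep_frame(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        return sharpness >= BLUR_THRESHOLD

    cap = cv2.VideoCapture("bronchoscopy.avi")   # hypothetical recording
    kept = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        kept += keep_frame(frame)
    cap.release()
    print(f"kept {kept}/{total} frames for the summary")
    ```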

  7. A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.

    1989-01-01

    A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
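
    A minimal 1-D semi-Lagrangian step makes the characteristic-tracing idea concrete; first-order foot tracing and linear interpolation here are deliberate simplifications of the paper's high-order, mesh-invariant schemes.

    ```python
    # 1-D semi-Lagrangian advection step: trace each grid node back along the
    # velocity, then interpolate the old field at the characteristic foot.
    import numpy as np

    N, dt = 200, 0.01
    x = np.linspace(0.0, 1.0, N)
    u = 0.8 * np.ones(N)                    # advecting velocity field
    phi = np.exp(-200 * (x - 0.3)**2)       # initial scalar field

    foot = (x - dt * u) % 1.0               # characteristic feet (periodic)
    phi_new = np.interp(foot, x, phi, period=1.0)
    print("mass before/after:", phi.sum(), phi_new.sum())
    ```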

  8. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, uses the output of the genetic algorithm as the previous state estimate in the robust filter, and again uses the genetic algorithm to refine the abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
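
    The structure of that hybrid loop can be sketched as follows. A cheap least-squares/blending step stands in for the robust (Kalman) filter and a toy random-search refinement stands in for the genetic algorithm, so only the warm-start pattern, not the actual algorithms, is represented; the endmember matrix is fabricated.

    ```python
    # Structural sketch of the filter-seeds-GA unmixing loop.
    import numpy as np

    rng = np.random.default_rng(0)
    E = rng.random((5, 3))                  # 5 bands x 3 endmembers (fabricated)

    def project(a):                         # enforce a >= 0, sum(a) = 1
        a = np.clip(a, 0, None)
        return a / a.sum() if a.sum() > 0 else np.full_like(a, 1 / a.size)

    def cheap_estimate(y, a_prev):          # stand-in for the robust/Kalman step
        a, _, _, _ = np.linalg.lstsq(E, y, rcond=None)
        return project(0.5 * a_prev + 0.5 * a)

    def refine(y, a0, iters=200):           # stand-in for the genetic algorithm
        best, best_err = a0, np.linalg.norm(E @ a0 - y)
        for _ in range(iters):
            cand = project(best + rng.normal(0, 0.05, best.shape))
            err = np.linalg.norm(E @ cand - y)
            if err < best_err:
                best, best_err = cand, err
        return best

    pixels = [E @ project(rng.random(3)) for _ in range(4)]   # synthetic cube
    a = np.full(3, 1 / 3)
    for y in pixels:
        a = refine(y, cheap_estimate(y, a))   # refined estimate seeds next pixel
        print(np.round(a, 3))
    ```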

  9. Mass spectrometry compatible surfactant for optimized in-gel protein digestion.

    PubMed

    Saveliev, Sergei V; Woodroofe, Carolyn C; Sabat, Grzegorz; Adams, Christopher M; Klaubert, Dieter; Wood, Keith; Urh, Marjeta

    2013-01-15

    Identification of proteins resolved by SDS-PAGE depends on robust in-gel protein digestion and efficient peptide extraction, requirements that are often difficult to achieve. The lengthy and laborious procedure is an additional challenge of in-gel protein identification. We show here that with the use of the mass spectrometry compatible surfactant sodium 3-((1-(furan-2-yl)undecyloxy)carbonylamino)propane-1-sulfonate, the challenges of in-gel protein digestion are effectively addressed. Peptide quantitation based on stable isotope labeling showed that the surfactant induced a 1.5-2-fold increase in peptide recovery. Consequently, protein sequence coverage was increased by 20-30% on average, and the number of identified proteins saw a substantial boost. The surfactant also accelerated the digestion process: maximal in-gel digestion was achieved in as little as one hour, depending on incubation temperature, and peptides were readily recovered from the gel, eliminating the need for post-digestion extraction. This study shows that the surfactant provides an efficient means of improving protein identification in gel and streamlining the in-gel digestion procedure, requiring no extra handling steps or special equipment.

  10. Metallophytes for organic synthesis: towards new bio-based selective protection/deprotection procedures.

    PubMed

    Grison, Claire M; Velati, Alicia; Escande, Vincent; Grison, Claude

    2015-04-01

    We propose for the first time using metal-hyperaccumulating plants for the construction of a repertoire of protection and deprotection conditions in a concept of orthogonal sets. Protection of alcohol, carbonyl, carboxyl, and amino groups is considered. The ecocatalysts derived from metal-rich plants allow selective, mild, eco-friendly, and efficient protection or deprotection reactions. The selectivity is controlled by the choice of the metal hyperaccumulated by the metallophyte.

  11. Optimal generalized multistep integration formulae for real-time digital simulation

    NASA Technical Reports Server (NTRS)

    Moerder, D. D.; Halyo, N.

    1985-01-01

    The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
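
    As a plain scalar instance of the explicit linear multistep formulas the procedure generalizes, here is a two-step Adams-Bashforth integrator in Python on an illustrative plant, the kind of fixed-step update a real-time simulator loop would run:

    ```python
    # Two-step Adams-Bashforth (AB2) on x' = -x; plant and step are illustrative.
    import math

    f = lambda x: -x                 # plant dynamics (toy)
    h, x = 0.05, 1.0
    f_prev = f(x)                    # bootstrap the multistep with one Euler step
    x = x + h * f_prev
    for k in range(2, 41):
        f_curr = f(x)
        x = x + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 weights
        f_prev = f_curr
    print("AB2:", x, " exact:", math.exp(-0.05 * 40))
    ```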

  12. The Department of Defense: Reducing Its Reliance on Fossil-Based Aviation Fuel - Issues for Congress

    DTIC Science & Technology

    2007-06-15

    [Extraction residue from the report's front matter; recoverable fragments: a figure-list entry, "Figure 2. KC-135 Winglet Flight Tests at Dryden Flight Research Center"; a definition of a war game as an exercise "involving two or more opposing forces using rules, data, and procedures designed to depict an actual or assumed real life situation"; and notes that winglets have been considered for DOD aircraft and that fuel pricing which does not reflect the DOD's true fuel costs masks energy efficiency.]

  13. The load shedding advisor: An example of a crisis-response expert system

    NASA Technical Reports Server (NTRS)

    Bollinger, Terry B.; Lightner, Eric; Laverty, John; Ambrose, Edward

    1987-01-01

    A Prolog-based prototype expert system is described that was implemented by the Network Operations Branch of the NASA Goddard Space Flight Center. The purpose of the prototype was to test whether a small, inexpensive computer system could be used to host a load shedding advisor, a system which would monitor major physical environment parameters in a computer facility and then recommend appropriate operator responses whenever a serious condition was detected. The resulting prototype performed well, owing in significant part to efficiency gains achieved by replacing a purely rule-based design methodology with a hybrid approach that combined procedural, entity-relationship, and rule-based methods.

  14. Palladium-Catalyzed Dehydrogenative Coupling: An Efficient Synthetic Strategy for the Construction of the Quinoline Core

    PubMed Central

    Carral-Menoyo, Asier; Ortiz-de-Elguea, Verónica; Martinez-Nunes, Mikel; Sotomayor, Nuria; Lete, Esther

    2017-01-01

    Palladium-catalyzed dehydrogenative coupling is an efficient synthetic strategy for the construction of quinoline scaffolds, a privileged structure and prevalent motif in many natural and biologically active products, in particular marine alkaloids. Thus, quinolines and 1,2-dihydroquinolines can be selectively obtained in moderate-to-good yields via intramolecular C–H alkenylation reactions by choosing the reaction conditions. This methodology provides a direct route to this type of quinoline core through an efficient and atom-economical procedure, and constitutes a significant advance over existing procedures that require preactivated reaction partners. PMID:28867803

  15. [The purpose of clinical laboratory accreditation in transplantation medicine].

    PubMed

    Flegar-Mestrić, Zlata; Nazor, Aida; Perkov, Sonja; Surina, Branka; Siftar, Zoran; Ozvald, Ivan; Vidas, Zeljko

    2011-09-01

    Although transplantation of solid organs has become a more standardized method of treatment, liver transplantation represents an exceptional multidisciplinary clinical procedure requiring understanding of the specific pathophysiological changes that occur in the end stage of liver disease. Liver transplantation has been performed at Merkur University Hospital since 1998, with 360 transplantations performed to date. The most common indications are alcoholic liver disease, cirrhosis caused by hepatitis B and C virus, hepatocellular carcinoma and cryptogenic liver cirrhosis. Laboratory tests required for liver transplantation are performed at the Department of Clinical Chemistry, Merkur University Hospital, accredited according to ISO 15189 in 2007 for the areas of clinical chemistry, laboratory hematology and coagulation, laboratory immunology-cell immunophenotyping, and molecular diagnosis. The complexity of liver transplant patients requires constant interaction between the anesthesiology team and the clinical laboratory, which has to ensure fast and efficient intraoperative monitoring of the biochemical and liver profile: electrolytes and acid-base status, complete blood count, coagulation profile and monitoring of graft function according to the individual patient's health status. The dynamics of intraoperative changes are measured in whole arterial blood samples on a Nova Biomedical Stat Profile Critical Care Xpress mobile acid-base analyzer. Frequent monitoring of ionized calcium and magnesium levels is very important because of citrated blood transfusion and for selecting the appropriate therapeutic procedure. During the anhepatic stage, there is a progressive increase in lactate concentration. After reperfusion, a rapid increase in lactate clearance is an excellent indicator of stable initial graft function and adequate graft size. During the transplantation procedure, there is usually a biphasic acid-base disturbance characterized first by metabolic acidosis and then by metabolic alkalosis. The loss of base equivalents starts during the dissection stage and accelerates during the anhepatic stage. Fast and efficient intraoperative monitoring of hematological tests and coagulation status is of great help in detecting the cause of possible hemorrhage and consequent complications during the transplantation procedure. The possibility of organ and tissue transplantation mostly depends on well-regulated international cooperation in the areas of donation, transplantation and exchange of required organs and tissues, while laboratory test results must be comparable regardless of geographical area, methodology employed or analytical equipment used, which is mainly ensured through accreditation according to the international ISO 15189 standard.

  16. Quantum wavepacket ab initio molecular dynamics: an approach for computing dynamically averaged vibrational spectra including critical nuclear quantum effects.

    PubMed

    Sumner, Isaiah; Iyengar, Srinivasan S

    2007-10-18

    We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method, which combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, in which the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
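
    The classical analogue of this construction is the familiar route of Fourier-transforming a velocity autocorrelation function into a vibrational density of states; below is a Python sketch on a synthetic two-mode "trajectory" (not the paper's wavepacket flux/velocity correlation):

    ```python
    # Vibrational density of states from a velocity autocorrelation function.
    import numpy as np

    dt = 0.5e-15                                   # 0.5 fs time step (assumed)
    t = np.arange(4096) * dt
    v = np.cos(2*np.pi*9e13*t) + 0.4*np.cos(2*np.pi*2e13*t)   # toy "velocities"

    acf = np.correlate(v, v, mode="full")[v.size - 1:]        # autocorrelation
    acf /= acf[0]
    spectrum = np.abs(np.fft.rfft(acf * np.hanning(acf.size)))
    freqs = np.fft.rfftfreq(acf.size, d=dt)

    peak = freqs[spectrum.argmax()]
    print("dominant mode: %.2e Hz" % peak)         # ~9e13 Hz by construction
    ```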

  17. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach avoids direct decomposition of the correlation matrix: we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform these decompositions to the high-resolution grid. Finally, an efficient decomposition of the full correlation matrix is obtained by computing the Kronecker product of the 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
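
    The alternating-direction idea can be sketched numerically: build 1-D correlation matrices, truncate each by an EOF (eigen) decomposition, and recover an approximate 3-D matrix as a Kronecker product. Grid sizes, length scales, and the Gaussian correlation model below are illustrative assumptions (and the spline-interpolation stage is omitted).

    ```python
    # Kronecker/EOF sketch: C3d ~ kron(Cx, Cy, Cz) via truncated 1-D factors.
    import numpy as np

    def corr_1d(n, L):
        """Gaussian correlation matrix on a 1-D grid (illustrative model)."""
        i = np.arange(n)
        return np.exp(-0.5 * ((i[:, None] - i[None, :]) / L) ** 2)

    def truncated_eofs(C, k):
        w, V = np.linalg.eigh(C)            # eigen-decomposition (EOFs)
        idx = np.argsort(w)[::-1][:k]       # keep the k leading modes
        return V[:, idx] * np.sqrt(w[idx])  # factor F with F @ F.T ~ C

    Fx = truncated_eofs(corr_1d(12, 3.0), k=4)
    Fy = truncated_eofs(corr_1d(10, 2.0), k=4)
    Fz = truncated_eofs(corr_1d(6, 1.5), k=3)

    F = np.kron(Fx, np.kron(Fy, Fz))        # 3-D square-root factor
    C3d = F @ F.T
    exact = np.kron(corr_1d(12, 3.0), np.kron(corr_1d(10, 2.0), corr_1d(6, 1.5)))
    print("relative error: %.3f" %
          (np.linalg.norm(C3d - exact) / np.linalg.norm(exact)))
    ```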

  18. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  19. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  20. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...
