Sample records for "requires multiple steps"

  1. Validity of the Instrumented Push and Release Test to Quantify Postural Responses in Persons With Multiple Sclerosis.

    PubMed

    El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M

    2017-07-01

    To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
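
    The saving the abstract describes can be sketched in a few lines (a minimal sketch with hypothetical helper names; a crude midpoint slicing stands in for a real propagator calculation, and the paper's pattern-matching and time-shifting routines are not reproduced): when the observation step is a rational fraction p/q of the period T, the step propagator depends only on the start time modulo T, so at most q distinct propagators occur and can be cached.

```python
import numpy as np
from scipy.linalg import expm

def propagator(H_of_t, t0, t1, n_slices=64):
    # Brute-force time-ordered propagator over [t0, t1]: slice the
    # interval and exponentiate a piecewise-constant (midpoint) H.
    dt = (t1 - t0) / n_slices
    U = np.eye(H_of_t(t0).shape[0], dtype=complex)
    for i in range(n_slices):
        t = t0 + (i + 0.5) * dt
        U = expm(-1j * H_of_t(t) * dt) @ U
    return U

def evolve_rational(H_of_t, T, p, q, n_obs, psi0):
    # Observation step dt = (p/q)*T.  Since H(t) is T-periodic, the
    # step propagator depends only on the start time modulo T, so at
    # most q distinct propagators occur: build them once, then reuse.
    dt = p * T / q
    U_step = [propagator(H_of_t, (k * dt) % T, (k * dt) % T + dt)
              for k in range(q)]
    psi, traj = np.asarray(psi0, dtype=complex), []
    for n in range(n_obs):
        psi = U_step[n % q] @ psi   # reuse the cached propagator
        traj.append(psi.copy())
    return traj
```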

  3. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.; Alam, Mohammed S.

    1998-07-01

    Highly efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.

  4. Recoded and nonrecoded trinary signed-digit adders and multipliers with redundant-bit representations.

    PubMed

    Cherri, A K; Alam, M S

    1998-07-10

    Highly efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.

  5. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
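
    The splitting idea, integrating fast terms with a small inner step and slow terms with one large outer step, can be illustrated with a generic split-explicit subcycling sketch (illustration only; this is not the EMTSS vertical-mode splitting and recombination itself):

```python
import numpy as np

def split_explicit_step(u, f_slow, f_fast, dT, n_sub):
    # One large step dT: the slow tendency is evaluated once and held
    # fixed while the fast tendency is subcycled with n_sub small
    # steps of size dT/n_sub (forward Euler kept only for brevity).
    s = f_slow(u)
    dt = dT / n_sub
    for _ in range(n_sub):
        u = u + dt * (s + f_fast(u))
    return u

# Toy usage: a stiff fast decay paired with a slow relaxation.  A
# single forward-Euler step of size dT would be unstable for the fast
# term (50 * 0.05 = 2.5 > 2), yet the slow term is still evaluated
# only once per large step.
u = np.array([1.0, 0.5])
for _ in range(100):
    u = split_explicit_step(u,
                            f_slow=lambda v: -0.1 * v,
                            f_fast=lambda v: -50.0 * v,
                            dT=0.05, n_sub=20)
```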

  6. Experimental studies of systematic multiple-energy operation at HIMAC synchrotron

    NASA Astrophysics Data System (ADS)

    Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.

    2014-07-01

    Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.

  7. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
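
    As a point of reference, the standard multiple time step baseline that these resonance-free methods improve upon can be sketched as an r-RESPA velocity-Verlet step (a minimal sketch for arrays of positions and velocities; the isokinetic Nosé-Hoover machinery of the paper is not shown):

```python
import numpy as np

def respa_step(x, v, m, f_fast, f_slow, dt_large, n_inner):
    # Slow (expensive) forces kick at the outer step; fast forces are
    # integrated by velocity Verlet at the inner step dt_large/n_inner.
    v = v + 0.5 * dt_large * f_slow(x) / m
    dt = dt_large / n_inner
    for _ in range(n_inner):
        v = v + 0.5 * dt * f_fast(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_fast(x) / m
    v = v + 0.5 * dt_large * f_slow(x) / m
    return x, v
```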

  8. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  9. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
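
    At its core, the evaluation step keeps only non-dominated designs. A minimal Pareto-filter sketch follows (generic, with all objectives cast as minimizations, e.g. the eight impacts alongside negated profit; this is not the paper's parallel multiobjective genetic algorithm):

```python
import numpy as np

def pareto_front(points):
    # Keep every design not dominated by another: a dominator is at
    # least as good in all objectives and strictly better in one.
    pts = np.asarray(points, dtype=float)
    keep = [i for i, p in enumerate(pts)
            if not np.any(np.all(pts <= p, axis=1) &
                          np.any(pts < p, axis=1))]
    return pts[keep]

# usage: stack each design's impact scores with negated profit,
# e.g. rows of [impact_1, ..., impact_8, -total_profit]
designs = np.array([[3.0, -12.0], [2.0, -10.0], [4.0, -11.0]])
print(pareto_front(designs))   # -> the two non-dominated rows
```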

  10. LMSS communication network design

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The architecture of the telecommunication network as the first step in the design of the LMSS system is described. A set of functional requirements including the total number of users to be served by the LMSS are hypothesized. The design parameters are then defined at length and are systematically selected such that the resultant system is capable of serving the hypothesized number of users. The design of the backhaul link is presented. The number of multiple backhaul beams required for communication to the base stations is determined. A conceptual procedure for call-routing and locating a mobile subscriber within the LMSS network is presented. The various steps in placing a call are explained, and the relationship between the two sets of UHF and S-band multiple beams is developed. A summary of the design parameters is presented.

  11. Privacy Protection on Multiple Sensitive Attributes

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Ye, Xiaojun

    In recent years, a privacy model called k-anonymity has gained popularity for microdata release. As the microdata may contain multiple sensitive attributes about an individual, the protection of multiple sensitive attributes has become an important problem. Different from the existing models of a single sensitive attribute, the extra associations among multiple sensitive attributes should be investigated. Two kinds of disclosure scenarios may happen because of logical associations. The Q&S Diversity is checked to prevent the foregoing disclosure risks, with an α Requirement definition used to ensure the diversity requirement. Finally, a two-step greedy generalization algorithm is used to carry out the multiple-sensitive-attribute processing, dealing with quasi-identifiers and sensitive attributes respectively. We reduce the overall distortion by the measure of Masking SA.
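
    A simplified stand-in for the group-level check (hypothetical record layout; the paper's Q&S Diversity with its α Requirement is stricter than this plain count of distinct values):

```python
from collections import defaultdict

def diverse_enough(records, qi_keys, sa_keys, l=2):
    # Group records by their quasi-identifier tuple, then require at
    # least l distinct values of every sensitive attribute per group.
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in qi_keys)].append(r)
    return all(len({r[sa] for r in grp}) >= l
               for grp in groups.values() for sa in sa_keys)

# hypothetical microdata: two QI columns, two sensitive attributes
rows = [{"zip": "100", "age": 30, "disease": "flu", "income": "low"},
        {"zip": "100", "age": 30, "disease": "hiv", "income": "high"}]
print(diverse_enough(rows, ["zip", "age"], ["disease", "income"]))
```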

  12. Efficient hybrid metrology for focus, CD, and overlay

    NASA Astrophysics Data System (ADS)

    Tel, W. T.; Segers, B.; Anunciado, R.; Zhang, Y.; Wong, P.; Hasan, T.; Prentice, C.

    2017-03-01

    With the advent of multiple patterning techniques in the semiconductor industry, metrology has progressively become a burden. With multiple patterning techniques such as Litho-Etch-Litho-Etch and Sidewall Assisted Double Patterning, the number of processing steps has increased significantly and, therefore, so has the number of metrology steps needed for both control and yield monitoring. The amount of metrology needed increases with each node as more layers need multiple patterning steps, and more patterning steps are needed per layer. In addition, there is the need for guided defect inspection, which in itself requires substantially denser focus, overlay, and CD metrology than before. Metrology efficiency will therefore be crucial to the next semiconductor nodes. ASML's emulated wafer concept offers a highly efficient method for hybrid metrology for focus, CD, and overlay. In this concept, metrology is combined with the scanner's sensor data in order to predict the on-product performance. The principle underlying the method is to isolate and estimate individual root causes, which are then combined to compute the on-product performance. The goal is to use all the information available to avoid ever increasing amounts of metrology.

  13. A Cross-sectional Analysis of Minimum USMLE Step 1 and 2 Criteria Used by Orthopaedic Surgery Residency Programs in Screening Residency Applications.

    PubMed

    Schrock, John B; Kraeutler, Matthew J; Dayton, Michael R; McCarty, Eric C

    2017-06-01

    The purpose of this study was to analyze how program directors (PDs) of orthopaedic surgery residency programs use United States Medical Licensing Examination (USMLE) Step 1 and 2 scores in screening residency applicants. A survey was sent to each allopathic orthopaedic surgery residency PD. PDs were asked if they currently use minimum Step 1 and/or 2 scores in screening residency applicants and if these criteria have changed in recent years. Responses were received from 113 of 151 PDs (75%). One program did not have the requested information and five declined participation, leaving 107 responses analyzed. Eighty-nine programs used a minimum USMLE Step 1 score (83%). Eighty-three programs (78%) required a Step 1 score ≥210, 80 (75%) required a score ≥220, 57 (53%) required a score ≥230, and 22 (21%) required a score ≥240. Multiple PDs mentioned the high volume of applications as a reason for using a minimum score and for increasing the minimum score in recent years. A large proportion of orthopaedic surgery residency PDs use a USMLE Step 1 minimum score when screening applications in an effort to reduce the number of applications to be reviewed.

  14. A small step in VLC systems - a big step in Li-Fi implementation

    NASA Astrophysics Data System (ADS)

    Rîurean, S. M.; Nagy, A. A.; Leba, M.; Ionica, A. C.

    2018-01-01

    Light is part of our everyday environment, so using it would be one of the handiest and cheapest ways to communicate wirelessly. Light has always been used to send messages in different ways, and now, thanks to technological improvements, bits carried by light at high speed over multiple paths allow humans to communicate. Using the lighting system both for illumination and communication has lately become one of the main research topics worldwide, with several implementations offering real benefits. This paper presents a viable VLC system that proves its suitability for sending information by light not just a few millimetres but metres away. This system has multiple potential applications in areas where other communication systems are bottlenecked, too expensive, unavailable or even forbidden. Since a fully developed Li-Fi system requires bidirectional, multiple-access communication, there are still some challenges on the way to a functional Li-Fi wireless network. Although important steps have been made, Li-Fi is still at an experimental stage.

  15. Elucidating nitric oxide synthase domain interactions by molecular dynamics.

    PubMed

    Hollingsworth, Scott A; Holden, Jeffrey K; Li, Huiying; Poulos, Thomas L

    2016-02-01

    Nitric oxide synthase (NOS) is a multidomain enzyme that catalyzes the production of nitric oxide (NO) by oxidizing L-Arg to NO and L-citrulline. NO production requires multiple interdomain electron transfer steps between the flavin mononucleotide (FMN) and heme domain. Specifically, NADPH-derived electrons are transferred to the heme-containing oxygenase domain via the flavin adenine dinucleotide (FAD) and FMN containing reductase domains. While crystal structures are available for both the reductase and oxygenase domains of NOS, to date there is no atomic level structural information on domain interactions required for the final FMN-to-heme electron transfer step. Here, we evaluate a model of this final electron transfer step for the heme-FMN-calmodulin NOS complex based on the recent biophysical studies using a 105-ns molecular dynamics trajectory. The resulting equilibrated complex structure is very stable and provides a detailed prediction of interdomain contacts required for stabilizing the NOS output state. The resulting equilibrated complex model agrees well with previous experimental work and provides a detailed working model of the final NOS electron transfer step required for NO biosynthesis. © 2015 The Protein Society.

  16. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
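
    The FDE stage that the proposed estimator feeds can be sketched with one-tap MMSE weights per frequency bin (a generic single-carrier FDE sketch assuming the channel impulse response and linear SNR are given; the 2-step MLCE itself is not reproduced here):

```python
import numpy as np

def mmse_fde(r_block, h_imp, snr_linear):
    # One-tap MMSE equalization per frequency bin of one received
    # block (cyclic prefix assumed already removed).
    N = len(r_block)
    R = np.fft.fft(r_block)
    H = np.fft.fft(h_imp, N)                      # channel response
    W = H.conj() / (np.abs(H) ** 2 + 1.0 / snr_linear)
    return np.fft.ifft(W * R)                     # equalized block
```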

  17. AQBE — QBE Style Queries for Archetyped Data

    NASA Astrophysics Data System (ADS)

    Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash

    Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture for repository services for the health records of patients. These also require query interfaces at multiple levels, including at the level of semi-skilled users. In this regard, a high-level user interface for querying the new form of standardized Electronic Health Records system has been examined in this study. It proposes a step-by-step graphical query interface to allow semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and to increase user friendliness.

  18. Observational study of treatment space in individual neonatal cot spaces.

    PubMed

    Hignett, Sue; Lu, Jun; Fray, Mike

    2010-01-01

    Technology developments in neonatal intensive care units have increased the spatial requirements for clinical activities. Because the effectiveness of healthcare delivery is determined in part by the design of the physical environment and the spatial organization of work, it is appropriate to apply an evidence-based approach to architectural design. This study aimed to provide empirical evidence of the spatial requirements for an individual cot or incubator space. Observational data from 2 simulation exercises were combined with an expert review to produce a final recommendation. A validated 5-step protocol was used to collect data. Step 1 defined the clinical specialty and space. In step 2, data were collected with 28 staff members and 15 neonates to produce a simulation scenario representing the frequent and safety-critical activities. In step 3, 21 staff members participated in functional space experiments to determine the average spatial requirements. Step 4 incorporated additional data (eg, storage and circulation) to produce a spatial recommendation. Finally, the recommendation was reviewed in step 5 by a national expert clinical panel to consider alternative layouts and technology. The average space requirement for an individual neonatal intensive care unit cot (incubator) space was 13.5 m2 (or 145.3 ft2). The circulation and storage space requirements added in step 4 increased this to 18.46 m2 (or 198.7 ft2). The expert panel reviewed the recommendation and agreed that the average individual cot space (13.5 m2 [or 145.3 ft2]) would accommodate variance in working practices. Care needs to be taken when extrapolating this recommendation to multiple cot areas to maintain the minimum spatial requirement.

  19. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of a higher memory requirement and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change of the dependent variables in two consecutive time steps had fallen below 10^-5.
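
    The stated stopping rule can be written as a small driver shared by both methods (a sketch; `step` stands for whichever integrator, artificial compressibility or fractional step, advances the discrete solution one time step):

```python
import numpy as np

def march_to_steady_state(step, q0, tol=1e-5, max_steps=200000):
    # Advance the solution until the L2 norm of the change between
    # two consecutive time steps drops below tol, as in the abstract.
    q = np.asarray(q0, dtype=float)
    for n in range(max_steps):
        q_new = step(q)
        if np.linalg.norm(q_new - q) < tol:
            return q_new, n + 1                 # converged solution
        q = q_new
    raise RuntimeError("no steady state reached within max_steps")
```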

  20. Rearrangement of competing U2 RNA helices within the spliceosome promotes multiple steps in splicing

    PubMed Central

    Perriman, Rhonda J.; Ares, Manuel

    2007-01-01

    Nuclear pre-messenger RNA (pre-mRNA) splicing requires multiple spliceosomal small nuclear RNA (snRNA) and pre-mRNA rearrangements. Here we reveal a new snRNA conformational switch in which successive roles for two competing U2 helices, stem IIa and stem IIc, promote distinct splicing steps. When stem IIa is stabilized by loss of stem IIc, rapid ATP-independent and Cus2p-insensitive prespliceosome formation occurs. In contrast, hyperstabilized stem IIc improves the first splicing step on aberrant branchpoint pre-mRNAs and rescues temperature-sensitive U6–U57C, a U6 mutation that also suppresses first-step splicing defects of branchpoint mutations. A second, later role for stem IIa is revealed by its suppression of a cold-sensitive allele of the second-step splicing factor PRP16. Our data expose a spliceosomal progression cycle of U2 stem IIa formation, disruption by stem IIc, and then reformation of stem IIa before the second catalytic step. We propose that the competing stem IIa and stem IIc helices are key spliceosomal RNA elements that optimize juxtaposition of the proper reactive sites during splicing. PMID:17403781

  1. Systems Maintenance Automated Repair Tasks (SMART)

    NASA Technical Reports Server (NTRS)

    Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek

    2010-01-01

    SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be useable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware, and specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/ subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.

  2. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  3. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
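
    Adaptive step sizing in an LMS-style equalizer can be illustrated with the normalized-LMS rule, where the update is rescaled by the instantaneous input power each symbol (a generic single-channel sketch, not the paper's lookup-table-based MIMO scheme):

```python
import numpy as np

def nlms_equalizer(x, d, n_taps=8, mu=0.5, eps=1e-6):
    # Normalized LMS: the effective step size mu / ||u||^2 shrinks
    # automatically when the input power grows, aiding stability.
    w = np.zeros(n_taps, dtype=complex)
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # tap-delay-line input
        y[n] = w.conj() @ u                 # equalizer output
        e = d[n] - y[n]                     # error vs training symbol
        w = w + mu * u * np.conj(e) / (eps + np.real(u.conj() @ u))
    return w, y
```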

  4. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators.

    PubMed

    Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba

    2017-01-01

    The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examination subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
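
    The underlying screening model is ordinary multiple linear regression, sketched below with entirely hypothetical numbers standing in for the cohort data (model 1, with preclinical mean course exam score and Step 1 score as predictors):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Entirely hypothetical toy values: columns are preclinical mean
# course exam score and Step 1 score (the abstract's model 1).
X_train = np.array([[82.0, 225], [75.5, 210], [88.0, 243],
                    [79.0, 231], [70.0, 205], [85.0, 238]])
y_train = np.array([238, 221, 252, 240, 214, 247])   # Step 2 CK

model = LinearRegression().fit(X_train, y_train)
predicted_ck = model.predict([[80.0, 228]])
residual_sd = np.std(y_train - model.predict(X_train))
# flag students whose prediction falls below a chosen support cutoff
at_risk = predicted_ck[0] < 230
```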

  5. Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gary D. Egbert

    2007-03-22

    The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.

  6. Portable, one-step, and rapid GMR biosensor platform with smartphone interface.

    PubMed

    Choi, Joohong; Gani, Adi Wijaya; Bechstein, Daniel J B; Lee, Jung-Rok; Utz, Paul J; Wang, Shan X

    2016-11-15

    Quantitative immunoassay tests in clinical laboratories require trained technicians, take hours to complete with multiple steps, and the instruments used are generally immobile; patient samples have to be sent in to the labs for analysis. This prevents quantitative immunoassay tests from being performed outside laboratory settings. A portable, quantitative immunoassay device will be valuable in rural and resource-limited areas, where access to healthcare is scarce or far away. We have invented the Eigen Diagnosis Platform (EDP), a portable quantitative immunoassay platform based on Giant Magnetoresistance (GMR) biosensor technology. The platform does not require a trained technician to operate, and only requires one-step user involvement. It displays quantitative results in less than 15 min after sample insertion, and each test costs less than US$4. The GMR biosensor employed in EDP is capable of detecting multiple biomarkers in one test, enabling a wide array of immune diagnostics to be performed simultaneously. In this paper, we describe the design of EDP, and demonstrate its capability. Multiplexed assay of human immunoglobulin G and M (IgG and IgM) antibodies with EDP achieves sensitivities down to 0.07 and 0.33 nanomolar, respectively. The platform will allow lab testing to be performed in remote areas, and open up applications of immunoassay testing in other non-clinical settings, such as home, school, and office. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  8. Design and Implementation of Multi-Input Adaptive Signal Extractions.

    DTIC Science & Technology

    1982-09-01

    (deflected gradient) algorithm requiring only N+1 multiplications per adaptation step. Additional quantization is introduced to eliminate all multiplications. [Remainder of the abstract fragment is OCR-damaged; recoverable citations: IEEE Trans. Information Theory, Vol. IT-26, Nov. 1980, pp. 746-750; Proc. IEEE, Vol. 69, July 1981, pp. 846-847; P. L. Kelly and W. A. Gardner, "Pilot-Directed Adaptive Signal Extraction."]

  9. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
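
    The two coupled stages can be caricatured in a few lines (toy parameter values throughout; the sigmoid weight functions and the constrained branching of the first and last box described above are omitted):

```python
import numpy as np
rng = np.random.default_rng(0)

def poisson_events(n, mean_dry=20.0, mean_dur=4.0, mean_int=1.5):
    # Rectangular pulses: exponential dry-spell length, event
    # duration (h) and mean intensity (mm/h); toy parameter values.
    return [(rng.exponential(mean_dry),
             rng.exponential(mean_dur),
             rng.exponential(mean_int)) for _ in range(n)]

def cascade(total, levels):
    # Microcanonical random cascade: split each box in two with
    # weights (w, 1 - w), exactly conserving the event total.
    boxes = [total]
    for _ in range(levels):
        new = []
        for b in boxes:
            w = rng.uniform(0.2, 0.8)   # toy weight distribution
            new.extend([w * b, (1 - w) * b])
        boxes = new
    return boxes

# disaggregate one generated event into 2**4 = 16 sub-intervals
dry, dur, inten = poisson_events(1)[0]
fine_series = cascade(total=dur * inten, levels=4)
```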

  10. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    USGS Publications Warehouse

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.

  11. Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.

    PubMed

    Schoene, Daniel; Delbaere, Kim; Lord, Stephen R

    2017-08-01

    Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  12. Glass wool filters for concentrating waterborne viruses and agricultural zoonotic pathogens

    USDA-ARS?s Scientific Manuscript database

    The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group or genus, for example viruses or Cryptosporidium, requiring multiple methods if the sampling program is targeting more than on...

  13. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD AND ADULT DUPLICATE-DIET SAMPLES

    EPA Science Inventory

    Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...

  14. Contribution of lower limb eccentric work and different step responses to balance recovery among older adults.

    PubMed

    Nagano, Hanatsu; Levinger, Pazit; Downie, Calum; Hayes, Alan; Begg, Rezaul

    2015-09-01

    Falls during walking reflect susceptibility to balance loss and the individual's capacity to recover stability. Balance can be recovered using either one step or multiple steps, but both responses are impaired with ageing. To investigate older adults' (n=15, 72.5±4.8 yrs) recovery step control, a tether-release procedure was devised to induce unanticipated forward balance loss. Three-dimensional position-time data combined with foot-ground reaction forces were used to measure balance recovery. Dependent variables were: margin of stability (MoS) and available response time (ART) for spatial and temporal balance measures in the transverse and sagittal planes; lower limb joint angles and joint negative/positive work; and spatio-temporal gait parameters. Relative to multi-step responses, single-step recovery was more effective in maintaining balance, indicated by greater MoS and longer ART. MoS in the sagittal plane measure and ART in the transverse plane distinguished single step responses from multiple steps. When MoS and ART were negative (<0), balance was not secured and additional steps would be required to establish the new base of support for balance recovery. Single-step responses demonstrated greater step length and velocity and, when the recovery foot landed, greater centre of mass downward velocity. Single-step strategies also showed greater ankle dorsiflexion, increased knee maximum flexion and more negative work at the ankle and knee. Collectively these findings suggest that single-step responses are more effective in forward balance recovery by directing falling momentum downward to be absorbed as lower limb eccentric work. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Minimum number of days required for a reliable estimate of daily step count and energy expenditure, in people with MS who walk unaided.

    PubMed

    Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan

    2017-03-01

    The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE), in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age=44.5±11.9 years; time since diagnosis=6.5±6.2 years; Patient Determined Disease Steps=≤3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations), and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory and Bland-Altman. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses all combinations of days, bar single-day combinations, resulted in a mean bias within ±10%, when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE, in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
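
    The day-combination analysis can be mirrored with a short script (a sketch of the Bland-Altman-style percentage-bias check only, using a hypothetical week of step counts; the ICC and G-theory computations are not reproduced):

```python
import numpy as np
from itertools import combinations

def combo_bias(daily, k):
    # Mean absolute difference (% of the 7-day mean) between every
    # k-day-combination mean and the full-week mean.
    week_mean = np.mean(daily)
    diffs = [abs(np.mean([daily[i] for i in c]) - week_mean)
             for c in combinations(range(len(daily)), k)]
    return 100.0 * np.mean(diffs) / week_mean

steps = np.array([6200, 5400, 7100, 4800, 6900, 5100, 6600])  # toy week
for k in range(1, 8):
    print(k, "days:", round(combo_bias(steps, k), 1), "% mean abs bias")
```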

  16. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions.

    PubMed

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2017-06-01

    Stepped-wedge design (SWD) cluster-randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in an SWD trial. We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster-randomized trial: concurrent, replacement, supplementation, and factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. In the concurrent SWD, each cluster receives only one intervention, unlike the other variants. The replacement SWD supports two interventions that will not or cannot be used at the same time. The supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
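
    The exposure matrices behind such designs are straightforward to generate. Below is a sketch of the traditional SWD matrix plus one reading of the concurrent variant, in which each cluster receives only one of the two interventions (illustrative, not the authors' exact specification):

```python
import numpy as np

def swd_matrix(n_clusters, n_steps):
    # Traditional stepped wedge: every cluster starts in control (0)
    # and crosses to intervention (1) at its own staggered step.
    t = np.arange(n_steps)
    cross = np.linspace(1, n_steps - 1, n_clusters).round().astype(int)
    return (t >= cross[:, None]).astype(int)

def concurrent_swd(n_clusters, n_steps):
    # One reading of the concurrent variant: half the clusters roll
    # out intervention A, the other half intervention B, each within
    # its own traditional wedge, so no cluster receives both.
    half = n_clusters // 2
    return swd_matrix(half, n_steps), swd_matrix(n_clusters - half, n_steps)

A, B = concurrent_swd(6, 5)   # rows = clusters, columns = steps
```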

  17. Development of a Rubric to Improve Critical Thinking

    ERIC Educational Resources Information Center

    Hildenbrand, Kasee J.; Schultz, Judy A.

    2012-01-01

    Context: Health care professionals, including athletic trainers are confronted daily with multiple complex problems that require critical thinking. Objective: This research attempts to develop a reliable process to assess students' critical thinking in a variety of athletic training and kinesiology courses. Design: Our first step was to create a…

  18. A rapid, one step molecular identification of Trichoderma citrinoviride and Trichoderma reesei.

    PubMed

    Saroj, Dina B; Dengeti, Shrinivas N; Aher, Supriya; Gupta, Anil K

    2015-06-01

    Trichoderma species are widely used as production hosts for industrial enzymes. Identification of Trichoderma species requires a complex molecular-biology-based workflow involving amplification and sequencing of multiple genes. Industrial laboratories are required to run identification tests repeatedly in cell-banking procedures and also to prove the absence of the production host in the product. Such demands can be fulfilled by a brief method which enables confirmation of strain identity. This communication describes a one-step identification method for two common Trichoderma species, T. citrinoviride and T. reesei, based on identification of a polymorphic region in the nucleotide sequence of translation elongation factor 1 alpha. A unique forward primer and a common reverse primer resulted in 153 and 139 bp amplicons for T. citrinoviride and T. reesei, respectively. Simplification was further introduced by using mycelium as the template for PCR amplification. The method described in this communication allows rapid, one-step identification of the two Trichoderma species.

  19. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    PubMed

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
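
    The architecture can be caricatured as parallel convolutional branches over progressively downsampled copies of the input (a minimal PyTorch sketch with illustrative layer sizes and class count, not the paper's exact M-CNN):

```python
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    # Parallel branches see the image at full, 1/2 and 1/4 resolution;
    # pooled features are concatenated and classified in one pass.
    def __init__(self, in_ch=1, n_classes=8):
        super().__init__()
        def branch(scale):
            return nn.Sequential(
                nn.AvgPool2d(scale) if scale > 1 else nn.Identity(),
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
        self.branches = nn.ModuleList([branch(s) for s in (1, 2, 4)])
        self.head = nn.Linear(3 * 32, n_classes)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]
        return self.head(torch.cat(feats, dim=1))

logits = MultiScaleCNN()(torch.randn(2, 1, 128, 128))
probs = torch.softmax(logits, dim=1)   # graded phenotype probabilities
```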

  20. Loss of laminin alpha 1 results in multiple structural defects and divergent effects on adhesion during vertebrate optic cup morphogenesis

    PubMed Central

    Bryan, Chase D.; Chien, Chi-Bin; Kwan, Kristen M.

    2016-01-01

    The vertebrate eye forms via a complex set of morphogenetic events. The optic vesicle evaginates and undergoes transformative shape changes to form the optic cup, in which neural retina and retinal pigmented epithelium enwrap the lens. It has long been known that a complex, glycoprotein-rich extracellular matrix layer surrounds the developing optic cup throughout the process, yet the functions of the matrix and its specific molecular components have remained unclear. Previous work established a role for laminin extracellular matrix in particular steps of eye development, including optic vesicle evagination, lens differentiation, and retinal ganglion cell polarization, yet it is unknown what role laminin might play in the early process of optic cup formation subsequent to the initial step of optic vesicle evagination. Here, we use the zebrafish lama1 mutant (lama1UW1) to determine the function of laminin during optic cup morphogenesis. Using live imaging, we find, surprisingly, that loss of laminin leads to divergent effects on focal adhesion assembly in a spatiotemporally-specific manner, and that laminin is required for multiple steps of optic cup morphogenesis, including optic stalk constriction, invagination, and formation of a spherical lens. Laminin is not required for single cell behaviors and changes in cell shape. Rather, in lama1UW1 mutants, loss of epithelial polarity and altered adhesion lead to defective tissue architecture and formation of a disorganized retina. These results demonstrate that the laminin extracellular matrix plays multiple critical roles regulating adhesion and polarity to establish and maintain tissue structure during optic cup morphogenesis. PMID:27339294

  1. Effectiveness of a five-step method for teaching clinical skills to students in a dental college in India.

    PubMed

    Virdi, Mandeep S; Sood, Meenakshi

    2011-11-01

    This study, conducted at the PDM Dental College and Research Institute, Haryana, India, had the purpose of developing a teaching method based upon the five-step method for teaching clinical skills proposed by the American College of Surgeons. The five-step method was used by dental students in clinics to place fissure sealants as an initial procedure. Sealant retention was used as an objective evaluation of the skill learnt by the students, and was 92 percent at the six- and twelve-month evaluations and 90 percent at the eighteen-month evaluation. These results indicate that simple methods can be devised for teaching clinical skills and can achieve high success rates in clinical procedures requiring multiple steps.

  2. The value of decision models: Using ecologically based invasive plant management as an example

    USDA-ARS's Scientific Manuscript database

    Humans have both fast and slow thought processes which influence our judgment and decision-making. The fast system is intuitive and valuable for decisions which do not require multiple steps or the application of logic or statistics. However, many decisions in natural resources are complex and req...

  3. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine the parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method: the lengths between pairs of targets measured from multiple TLS positions are compared to determine the TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face/back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost of the calibration process. PMID:28890607
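
    A toy Python sketch of the Length-consistency idea, assuming we already have the same targets' XYZ coordinates as measured from two TLS stations: inter-target distances must be station-independent, so pairwise-length residuals can drive estimation of the error-model parameters. All names and noise levels here are invented for illustration.

        import numpy as np
        from itertools import combinations

        def pairwise_lengths(xyz):
            """All inter-target distances for one station's measurements."""
            return np.array([np.linalg.norm(xyz[i] - xyz[j])
                             for i, j in combinations(range(len(xyz)), 2)])

        rng = np.random.default_rng(0)
        targets = rng.uniform(-5, 5, size=(10, 3))     # targets in work volume
        station_a = targets + rng.normal(0, 1e-4, targets.shape)
        station_b = targets + rng.normal(0, 1e-4, targets.shape)

        residuals = pairwise_lengths(station_a) - pairwise_lengths(station_b)
        print("RMS length-consistency residual [m]:",
              float(np.sqrt(np.mean(residuals**2))))
        # In self-calibration these residuals would be minimized over the TLS
        # error-model parameters rather than merely reported.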

  4. Semi-autonomous remote sensing time series generation tool

    NASA Astrophysics Data System (ADS)

    Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher

    2017-10-01

    High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Owing to limitations of satellite architectures and frequent cloud cover, the availability of daily data at high spatial resolution is still far from reality. Remote sensing time series generation of high spatial and temporal resolution data by data fusion appears to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a Geographic Information System (GIS) based tool framework for semi-autonomous time series generation is presented. The tool eliminates these difficulties by automating all the steps, enabling users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionality. Then two main frameworks are created: one performs all the pre-processing steps on various satellite data, and the other performs the data fusion that generates the time series. The two frameworks can be used individually to perform specific tasks, or combined to perform both processes in one go. The tool can handle most of the known geodata formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. The tool is developed as a common platform with a good interface and provides many functions to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
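
    The block structure described above can be sketched in Python as two composable chains of processing steps, run separately or end to end; the function names below are illustrative placeholders, not the tool's actual API.

        def run_chain(steps, data):
            """Apply a list of processing steps in order."""
            for step in steps:
                data = step(data)
            return data

        def atmospheric_correction(scenes): return scenes   # placeholder step
        def cloud_mask(scenes):             return scenes   # placeholder step
        def coregister(scenes):             return scenes   # placeholder step
        def fuse_to_time_series(scenes):    return scenes   # placeholder fusion

        preprocessing = [atmospheric_correction, cloud_mask, coregister]
        fusion = [fuse_to_time_series]

        scenes = ["landsat_t1", "modis_t1"]                  # stand-in geodata
        series = run_chain(preprocessing + fusion, scenes)   # combined run
        print(series)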

  5. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions

    PubMed Central

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2018-01-01

    Objective: Stepped wedge design (SWD) cluster randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in an SWD trial. Study Design and Setting: We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster randomized trial: the Concurrent, Replacement, Supplementation, and Factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that preclude a one-size-fits-all approach for multiple interventions. Results: In the Concurrent SWD, each cluster receives only one intervention, unlike the other variants. The Replacement SWD supports two interventions that will not or cannot be employed at the same time. The Supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the Factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Conclusion: Selection of the appropriate design variant should be driven by the research question while considering the trade-offs between the number of steps, the number of clusters, restrictions on concurrent implementation of the interventions, lingering effects of each intervention, and the precision of the intervention effect estimates. PMID:28412466
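
    A small Python sketch contrasting a traditional stepped-wedge allocation matrix with a Concurrent-style variant in which each cluster receives only one of two interventions. Rows are clusters and columns are time steps; 0 denotes control, 1 and 2 the two interventions. The layout is our illustration of the idea, not a prescription from the paper.

        import numpy as np

        def traditional_swd(n_clusters, n_steps):
            m = np.zeros((n_clusters, n_steps), dtype=int)
            for c in range(n_clusters):
                m[c, c + 1:] = 1        # cluster c crosses over at step c+1
            return m

        def concurrent_swd(n_clusters, n_steps):
            m = traditional_swd(n_clusters, n_steps)
            m[1::2] *= 2                # odd clusters get intervention B (=2)
            return m

        print(traditional_swd(4, 5))
        print(concurrent_swd(4, 5))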

  6. Design-for-manufacture of gradient-index optical systems using time-varying boundary condition diffusion

    NASA Astrophysics Data System (ADS)

    Harkrider, Curtis Jason

    2000-08-01

    The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index-profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data, which is then used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion: the glass sample is alternated between ion exchange with a molten salt bath and annealing, and the duration of each diffusion step can be used to exert control over the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility: an axicon, an imaging lens, and a curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
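
    A toy one-dimensional finite-difference sketch in Python of the multiple-step TVBC idea: during "exchange" steps the surface is held at the bath concentration, during "anneal" steps the surface is sealed, and the alternation shapes the final concentration (hence index) profile. All constants are invented for illustration, not taken from the thesis.

        import numpy as np

        def diffuse(c, n_steps, dt, dx, D, surface):
            for _ in range(n_steps):
                c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                c[0] = surface if surface is not None else c[1]  # bath / sealed
                c[-1] = c[-2]                                    # far side sealed
            return c

        nx, dx, D = 200, 1e-6, 1e-12    # grid size, spacing [m], D [m^2/s]
        dt = 0.2 * dx**2 / D            # stable explicit time step
        c = np.zeros(nx)
        for exchange in (True, False, True):    # exchange / anneal / exchange
            c = diffuse(c, 5000, dt, dx, D, surface=1.0 if exchange else None)
        print("near-surface profile:", np.round(c[:5], 3))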

  7. Feasibility of Focused Stepping Practice During Inpatient Rehabilitation Poststroke and Potential Contributions to Mobility Outcomes.

    PubMed

    Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot

    2015-01-01

    Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.

  8. Velocity and Drag Forces on motor-protein-driven Vesicles in Cells

    NASA Astrophysics Data System (ADS)

    Hill, David; Holzwarth, George; Bonin, Keith

    2002-10-01

    In cells, vesicle transport is driven by motor proteins such as kinesin and dynein, which use the chemical energy of ATP to overcome drag. Using video-enhanced DIC microscopy at 8 frames/s, we find that vesicles in PC12 neurites move with an average velocity of 1.52 ± 0.66 μm/s. The drag force and work required for such steady movement, calculated from Stokes' Law and the zero-frequency viscosity of the cytoplasm, suggest that multiple motors are required to move one vesicle. In buffer, single kinesin molecules move beads in 8-nm steps, each step taking only 50 μs [1]. The effects of such quick steps in cytoplasm, using viscoelastic moduli of COS7 cells, are small [2]. To measure drag forces more directly, we are using B-field-driven magnetic beads in PC12 cells to mimic kinesin-driven vesicles. [1] Nishiyama, M. et al., Nat. Cell Bio. 3, 425-428 (2001). [2] Holzwarth, Bonin, and Hill, Biophys J 82, 1784-1790 (2002).
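
    A back-of-envelope Python sketch of the Stokes'-law estimate mentioned above, F = 6πηrv. Only the mean velocity comes from the abstract; the vesicle radius and zero-frequency cytoplasm viscosity are assumed round numbers, not the paper's measured values.

        import math

        eta = 2.0        # assumed zero-frequency cytoplasm viscosity [Pa*s]
        r = 0.5e-6       # assumed vesicle radius [m]
        v = 1.52e-6      # mean vesicle velocity from the abstract [m/s]

        F = 6 * math.pi * eta * r * v              # Stokes drag [N]
        print(f"drag force ~ {F * 1e12:.1f} pN")   # ~29 pN
        # At roughly 5-6 pN per kinesin, several motors per vesicle would be
        # needed to sustain this speed, as the abstract concludes.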

  9. A simplified and efficient method for the analysis of fatty acid methyl esters suitable for large clinical studies.

    PubMed

    Masood, Athar; Stark, Ken D; Salem, Norman

    2005-10-01

    Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.

  10. Advantages offered by high average power picosecond lasers

    NASA Astrophysics Data System (ADS)

    Moorhouse, C.

    2011-03-01

    As electronic devices shrink in size to reduce material costs, device size, and weight, thinner materials are also utilized. Feature sizes are also decreasing, which is pushing manufacturers towards single-step laser direct-write processes as an attractive alternative to conventional, multiple-step photolithography, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, diode-pumped solid state (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films, owing to the reduced thermal effects of the shorter pulsewidth. The high repetition rate also allows high-speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells, and flat panel displays is discussed, as well as laser cutting of transparent materials with low melting points, such as polyethylene terephthalate (PET). For many of these thin-film applications, where low pulse energy and high repetition rate are required, throughput can be increased by a novel technique that uses multiple beams from a single laser source, which is outlined.

  11. PLE in the analysis of plant compounds. Part II: One-cycle PLE in determining total amount of analyte in plant material.

    PubMed

    Dawidowicz, Andrzej L; Wianowska, Dorota

    2005-04-29

    Pressurised liquid extraction (PLE) is recognised as one of the most effective sample preparation methods. Despite the enhanced extraction power of PLE, full recovery of an analyte from plant material may require multiple extractions of the same sample. The presented investigations show the possibility of estimating the true concentration of an analyte in plant material by employing one-cycle PLE with plant samples of different weight. The experiments show a linear dependence between the reciprocal of the analyte amount (E*) extracted in single-step PLE from a plant matrix and the ratio of plant material mass to extractant volume (m_p/V_s). Hence, time-consuming multi-step PLE can be replaced by a few single-step PLEs performed at different m_p/V_s ratios. The concentrations of rutin in Sambucus nigra L. and caffeine in tea and coffee estimated by means of the tested procedure are almost the same as their concentrations estimated by multiple PLE.
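
    A numerical Python sketch of the extrapolation implied above, assuming the linear relation 1/E* = a + b(m_p/V_s) with E* taken per gram of material: extrapolating to m_p/V_s → 0 (effectively unlimited extractant) estimates the total content as 1/a. The data points are synthetic.

        import numpy as np

        ratio = np.array([0.02, 0.05, 0.10, 0.20])   # m_p / V_s  [g/mL]
        e_star = np.array([4.55, 4.00, 3.33, 2.50])  # extracted amount [mg/g]

        b, a = np.polyfit(ratio, 1.0 / e_star, 1)    # fit 1/E* = a + b*(m_p/V_s)
        print(f"estimated total analyte content ~ {1.0 / a:.2f} mg/g")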

  12. Multiple Criteria Decision Analysis for Health Care Decision Making--Emerging Good Practices: Report 2 of the ISPOR MCDA Emerging Good Practices Task Force.

    PubMed

    Marsh, Kevin; IJzerman, Maarten; Thokala, Praveen; Baltussen, Rob; Boysen, Meindert; Kaló, Zoltán; Lönngren, Thomas; Mussen, Filip; Peacock, Stuart; Watkins, John; Devlin, Nancy

    2016-01-01

    Health care decisions are complex and involve confronting trade-offs between multiple, often conflicting objectives. Using structured, explicit approaches to decisions involving multiple criteria can improve the quality of decision making. A set of techniques, known under the collective heading multiple criteria decision analysis (MCDA), are useful for this purpose. In 2014, ISPOR established an Emerging Good Practices Task Force. The task force's first report defined MCDA, provided examples of its use in health care, described the key steps, and provided an overview of the principal methods of MCDA. This second task force report provides emerging good-practice guidance on the implementation of MCDA to support health care decisions. The report includes: a checklist to support the design, implementation, and review of an MCDA; guidance to support the implementation of the checklist; the order in which the steps should be implemented; an illustration of how to incorporate budget constraints into an MCDA; an overview of the skills and resources, including available software, required to implement MCDA; and future research directions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
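
    A minimal Python sketch of one MCDA building block in the spirit of the checklist above: weighted-sum scoring of options followed by a greedy selection under a budget constraint. Criteria, weights, scores, and budgets are invented; a real MCDA also involves the structuring, weighting, and uncertainty steps the report describes.

        interventions = {
            #              (effectiveness, equity, feasibility, cost in $k)
            "screening":   (0.8, 0.6, 0.9, 400),
            "vaccination": (0.9, 0.8, 0.7, 700),
            "outreach":    (0.5, 0.9, 0.8, 250),
        }
        weights = (0.5, 0.3, 0.2)      # elicited criterion weights, sum to 1
        budget = 1000                  # available budget in $k

        def score(criteria):
            return sum(w * s for w, s in zip(weights, criteria))

        ranked = sorted(interventions.items(), key=lambda kv: -score(kv[1][:3]))
        chosen, spent = [], 0
        for name, (*criteria, cost) in ranked:   # greedy fill under budget
            if spent + cost <= budget:
                chosen.append(name)
                spent += cost
        print(chosen, spent)    # -> ['vaccination', 'outreach'] 950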

  13. A Practical Comparison of Motion Planning Techniques for Robotic Legs in Environments with Obstacles

    NASA Technical Reports Server (NTRS)

    Smith, Tristan B.; Chavez-Clemente, Daniel

    2009-01-01

    ATHLETE is a large six-legged tele-operated robot. Each foot is a wheel; travel can be achieved by walking, rolling, or some combination of the two. Operators control ATHLETE by selecting parameterized commands from a command dictionary. While rolling can be done efficiently, any motion involving steps is cumbersome - each step can require multiple commands and take many minutes to complete. In this paper, we consider four different algorithms that generate a sequence of commands to take a step. We consider a baseline heuristic, a randomized motion planning algorithm, and two variants of A* search. Results for a variety of terrains are presented, and we discuss the quantitative and qualitative tradeoffs between the approaches.
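
    A compact A* sketch in Python on a toy occupancy grid, illustrating the flavor of search compared above; the grid, unit step costs, and Manhattan heuristic are our stand-ins for ATHLETE's command-level planning problem, not the paper's implementation.

        import heapq

        def astar(grid, start, goal):
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
            frontier = [(h(start), 0, start, [start])]
            seen = set()
            while frontier:
                f, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in seen:
                    continue
                seen.add(node)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    r, c = node[0] + dr, node[1] + dc
                    if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                        heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1,
                                                  (r, c), path + [(r, c)]))
            return None

        terrain = [[0, 0, 0, 0],
                   [0, 1, 1, 0],   # 1 = obstacle the foot must avoid
                   [0, 0, 0, 0]]
        print(astar(terrain, (0, 0), (2, 3)))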

  14. Production of genome-edited pluripotent stem cells and mice by CRISPR/Cas.

    PubMed

    Horii, Takuro; Hatada, Izuho

    2016-01-01

    Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated (Cas) nucleases, so-called CRISPR/Cas, were recently developed as an epoch-making genome engineering technology. This system requires only the Cas9 nuclease and a single-guide RNA complementary to a target locus. CRISPR/Cas enables the generation of knockout cells and animals in a single step. This system can also be used to generate multiple mutations and knockins in a single step, which is not possible using other methods. In this review, we provide an overview of genome editing by CRISPR/Cas in pluripotent stem cells and mice.

  15. Does the MCAT predict medical school and PGY-1 performance?

    PubMed

    Saguil, Aaron; Dong, Ting; Gingerich, Robert J; Swygert, Kimberly; LaRochelle, Jeffrey S; Artino, Anthony R; Cruess, David F; Durning, Steven J

    2015-04-01

    The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice-based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study examined associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 clinical knowledge and clinical skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 clinical knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 clinical skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association is somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT was not able to predict assessments relying on direct clinical observation, nor was it able to predict PD assessment of PGY-1 performance. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  16. Neurofibromatoses: part 1 - diagnosis and differential diagnosis.

    PubMed

    Rodrigues, Luiz Oswaldo Carneiro; Batista, Pollyanna Barros; Goloni-Bertollo, Eny Maria; de Souza-Costa, Danielle; Eliam, Lucas; Eliam, Miguel; Cunha, Karin Soares Gonçalves; Darrigo-Junior, Luiz Guilherme; Ferraz-Filho, José Roberto Lopes; Geller, Mauro; Gianordoli-Nascimento, Ingrid F; Madeira, Luciana Gonçalves; Malloy-Diniz, Leandro Fernandes; Mendes, Hérika Martins; de Miranda, Débora Marques; Pavarino, Erika Cristina; Baptista-Pereira, Luciana; Rezende, Nilton A; Rodrigues, Luíza de Oliveira; da Silva, Carla Menezes; de Souza, Juliana Ferreira; de Souza, Márcio Leandro Ribeiro; Stangherlin, Aline; Valadares, Eugênia Ribeiro; Vidigal, Paula Vieira Teixeira

    2014-03-01

    Neurofibromatoses (NF) are a group of genetic diseases that predispose to multiple tumor growth: neurofibromatosis type 1 (NF1), neurofibromatosis type 2 (NF2), and schwannomatosis (SCH), which have in common the neural origin of the tumors and cutaneous signs. They affect nearly 80,000 Brazilians. In recent years, increased scientific knowledge on NF has allowed better clinical management and reduced complication morbidity, resulting in higher quality of life for NF patients. In most cases, specialists in neurology, psychiatry, dermatology, clinical genetics, oncology, and internal medicine are able to make the differential diagnosis between NF and other diseases and to identify major NF complications. Nevertheless, owing to its great variability in phenotype expression, progressive course, multiple organ involvement, and unpredictable natural evolution, NF often requires the support of neurofibromatosis specialists for proper treatment and genetic counseling. This Part 1 offers step-by-step guidelines for NF differential diagnosis. Part 2 will present the NF clinical management.

  17. The Overgrid Interface for Computational Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific for the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.

  18. Semantic Structures of One-Step Word Problems Involving Multiplication or Division.

    ERIC Educational Resources Information Center

    Schmidt, Siegbert; Weiser, Werner

    1995-01-01

    Proposes a four-category classification of semantic structures of one-step word problems involving multiplication and division: forming the n-th multiple of measures, combinatorial multiplication, composition of operators, and multiplication by formula. This classification is compatible with semantic structures of addition and subtraction word…

  19. Equalizer tap length requirement for mode group delay-compensated fiber link with weakly random mode coupling.

    PubMed

    Bai, Neng; Li, Guifang

    2014-02-24

    The equalizer tap length requirement is investigated analytically and numerically for a differential modal group delay (DMGD) compensated fiber link with weakly random mode coupling. Each span of the DMGD-compensated link comprises multiple pairs of fibers which have opposite signs of DMGD. The result reveals that, under weak random mode coupling, the required tap length of the equalizer is proportional to the modal group delay of a single DMGD-compensated pair, instead of the total modal group delay (MGD) of the entire link. By using small DMGD compensation step sizes, the required tap length (RTL) can potentially be reduced by 2 orders of magnitude.
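
    A rough numerical Python sketch of the scaling claimed above, assuming the required tap count is the group-delay spread the equalizer must span multiplied by the sampling rate. All link numbers are invented for illustration.

        symbol_rate = 28e9        # [baud]; assume equalization at 1 sample/symbol
        dmgd = 50e-12             # assumed DMGD of one fiber [s/km]
        span_km = 80              # total span length [km]
        pair_km = 2               # length of one +/- compensated pair [km]

        # Uncompensated link: the delay spread grows with the whole span.
        taps_uncompensated = dmgd * span_km * symbol_rate
        # Compensated link under weak coupling: spread set by a single pair.
        taps_compensated = dmgd * pair_km * symbol_rate

        print(f"uncompensated: ~{taps_uncompensated:.0f} taps")   # ~112
        print(f"compensated:   ~{taps_compensated:.0f} taps")     # ~3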

  20. So you think you've designed an effective recruitment protocol?

    PubMed

    Green, Cara; Vandall-Walker, Virginia

    2017-03-22

    Background: Recruiting acutely ill patients to participate in research can be challenging. This paper outlines the difficulties the first author encountered in a study and the steps she took to overcome problems with research ethics, gain access to participants and implement a recruitment protocol in multiple hospitals. It also compares these steps with literature related to recruitment. Aim: To inform and inspire neophyte researchers about the need for planning and resilience when dealing with recruitment challenges in multiple hospitals. Discussion: The multiple enablers and barriers to the successful implementation of a hospital-based study recruitment protocol are explored based on a neophyte researcher's optimistic assumptions about this stage of the study. Conclusions: Perseverance, adequately planning for contingencies, and accepting the barriers and challenges to recruitment are essential for completing one's research study and ensuring fulfilment as a researcher. Implications for practice: Healthcare students carrying out research require adequate knowledge about conducting hospital-based, patient research to inform their recruitment plan. Maximising control over recruitment, allowing for adequate time to conduct data collection, and maintaining a good work ethic will help to ensure success.

  1. A theoretically based determination of bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.

  2. One-step formation of w/o/w multiple emulsions stabilized by single amphiphilic block copolymers.

    PubMed

    Hong, Liangzhi; Sun, Guanqing; Cai, Jinge; Ngai, To

    2012-02-07

    Multiple emulsions are complex polydispersed systems in which both oil-in-water (O/W) and water-in-oil (W/O) emulsions exist simultaneously. They are often prepared according to a two-step process and are commonly stabilized using a combination of hydrophilic and hydrophobic surfactants. Recently, some reports have shown that multiple emulsions can also be produced through a one-step method with simultaneous occurrence of catastrophic and transitional phase inversions. However, these reported multiple emulsions need surfactant blends and are usually described as transitory or temporary systems. Herein, we report a one-step phase inversion process to produce water-in-oil-in-water (W/O/W) multiple emulsions stabilized solely by a synthetic diblock copolymer. Unlike those stabilized by small-molecule surfactant combinations, the block copolymer stabilized multiple emulsions are remarkably stable and are able to separately encapsulate both polar and nonpolar cargos. The importance of the conformation of the copolymer surfactant at the interfaces with regard to the stability of the multiple emulsions prepared by the one-step method is discussed.

  3. Automating quantum dot barcode assays using microfluidics and magnetism for the development of a point-of-care device.

    PubMed

    Gao, Yali; Lam, Albert W Y; Chan, Warren C W

    2013-04-24

    The impact of detecting multiple infectious diseases simultaneously at point-of-care with good sensitivity, specificity, and reproducibility would be enormous for containing the spread of diseases in both resource-limited and rich countries. Many barcoding technologies have been introduced for addressing this need as barcodes can be applied to detecting thousands of genetic and protein biomarkers simultaneously. However, the assay process is not automated and is tedious and requires skilled technicians. Barcoding technology is currently limited to use in resource-rich settings. Here we used magnetism and microfluidics technology to automate the multiple steps in a quantum dot barcode assay. The quantum dot-barcoded microbeads are sequentially (a) introduced into the chip, (b) magnetically moved to a stream containing target molecules, (c) moved back to the original stream containing secondary probes, (d) washed, and (e) finally aligned for detection. The assay requires 20 min, has a limit of detection of 1.2 nM, and can detect genetic targets for HIV, hepatitis B, and syphilis. This study provides a simple strategy to automate the entire barcode assay process and moves barcoding technologies one step closer to point-of-care applications.

  4. Influence of the Supramolecular Micro-Assembly of Multiple Emulsions on their Biopharmaceutical Features and In vivo Therapeutic Response.

    PubMed

    Cilurzo, Felisa; Cristiano, Maria Chiara; Di Marzio, Luisa; Cosco, Donato; Carafa, Maria; Ventura, Cinzia Anna; Fresta, Massimo; Paolino, Donatella

    2015-01-01

    The ability of some surfactants to self-assemble in a water/oil bi-phase environment, thus forming a supramolecular structure leading to the formation of w/o/w multiple emulsions, was investigated. The w/o/w multiple emulsions obtained by self-assembling (one-step preparation method) were compared with those prepared following the traditional two-step procedure. Methyl nicotinate was used as a hydrophilic model drug. The formation of the multiple emulsion structure was evidenced by optical microscopy, which showed a mean size of the inner oil droplets of 6 μm and 10 μm for one-step and two-step multiple emulsions, respectively. The in vitro biopharmaceutical features of the various w/o/w multiple emulsion formulations were evaluated by means of viscosimetry studies, drug release and in vitro percutaneous permeation experiments through human stratum corneum and viable epidermis membranes. The self-assembled multiple emulsions allowed a more gradual percutaneous permeation (a zero-order permeation rate) than the two-step ones. The in vivo topical carrier properties of the two different multiple emulsions were evaluated on healthy human volunteers by using reflectance spectrophotometry, an in vivo non-invasive method. These multiple emulsion systems were also compared with conventional emulsion formulations. Our findings demonstrated that the multiple emulsions obtained by self-assembling were able to provide a more sustained drug delivery into the skin, and hence a longer therapeutic action, than two-step multiple emulsions and conventional emulsion formulations. Finally, our findings showed that the supramolecular micro-assembly of multiple emulsions was able to influence not only the biopharmaceutical characteristics but also the potential in vivo therapeutic response.

  5. Modeling fatigue.

    PubMed

    Sumner, Walton; Xu, Jin Zhong

    2002-01-01

    The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.
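
    A toy Python sketch of the three-step mechanism: choose sampling times, generate an instantaneous severity value at each (a stand-in for the Bayesian-network output), then summarize how the symptom changed. The decaying-treatment severity model is purely illustrative, not the simulator's actual model.

        import numpy as np

        days = np.linspace(0, 60, 13)                 # step 1: sampling times
        onset, recovery_rate = 10, 0.1
        severity = np.where(days < onset, 0.0,        # step 2: severity values
                            np.exp(-recovery_rate * (days - onset)))

        def report(t, s):                             # step 3: inspect the curve
            if s.max() < 0.1:
                return "no significant symptom"
            trend = "resolving" if s[-1] < 0.5 * s.max() else "persisting"
            return f"symptom peaked near day {t[s.argmax()]:.0f} and is {trend}"

        print(report(days, severity))   # -> peaked near day 10 and is resolving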

  6. A microfluidic device for preparing next generation DNA sequencing libraries and for automating other laboratory protocols that require one or more column chromatography steps.

    PubMed

    Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C; Quake, Stephen R; Burkholder, William F

    2013-01-01

    Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation.

  8. Preliminary Work for Examining the Scalability of Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Clouse, Jeff

    1998-01-01

    Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsson, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table-lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state spaces. There has been a little theoretical work proving that convergence to optimal solutions can be obtained when using generalization structures, but the structures are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally, we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
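
    A minimal tabular Q-learning sketch in Python of the table-lookup reinforcement learner discussed above, on a toy chain task where several steps are needed to reach the goal. With so small a state space the learned policy can be checked against the obvious optimum; swapping the table for a neural network is exactly the scalability question raised here. Hyperparameters are arbitrary.

        import random

        n_states = 6                      # states 0..5, terminal goal at 5
        actions = (-1, +1)
        Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
        alpha, gamma, eps = 0.5, 0.9, 0.1

        for _ in range(500):                          # trial-and-error episodes
            s = 0
            while s != n_states - 1:
                if random.random() < eps or Q[(s, -1)] == Q[(s, +1)]:
                    a = random.choice(actions)        # explore / break ties
                else:
                    a = max(actions, key=lambda act: Q[(s, act)])
                s2 = min(max(s + a, 0), n_states - 1)
                r = 1.0 if s2 == n_states - 1 else 0.0
                target = r + gamma * max(Q[(s2, b)] for b in actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2

        policy = [max(actions, key=lambda act: Q[(s, act)])
                  for s in range(n_states - 1)]
        print(policy)   # expect [1, 1, 1, 1, 1]: always step toward the goal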

  9. Atmospheric Pressure Plasma Jet as a Dry Alternative to Inkjet Printing in Flexible Electronics

    NASA Technical Reports Server (NTRS)

    Gandhiraman, Ram Prasad; Lopez, Arlene; Koehne, Jessica; Meyyappan, M.

    2016-01-01

    We have developed an atmospheric pressure plasma jet printing system that works at room temperature to 50 °C, unlike conventional aerosol-assisted techniques, which require a high-temperature sintering step to obtain the desired thin films. Multiple jets can be configured to increase throughput or to deposit multiple materials, and the jet(s) can be moved across large areas using an x-y stage. The plasma jet has been used to deposit carbon nanotubes, graphene, silver nanowires, copper nanoparticles, and other materials on substrates such as paper, cotton, plastic, and thin metal foils.

  10. One-step production of multiple emulsions: microfluidic, polymer-stabilized and particle-stabilized approaches.

    PubMed

    Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J

    2016-01-28

    Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches; these rely on phase separation, the role of electrolyte and the trapping of solvent with particles respectively.

  11. A high-throughput semi-automated preparation for filtered synaptoneurosomes.

    PubMed

    Murphy, Kathryn M; Balsor, Justin; Beshara, Simon; Siu, Caitlin; Pinto, Joshua G A

    2014-09-30

    Synaptoneurosomes have become an important tool for studying synaptic proteins. The filtered synaptoneurosomes preparation originally developed by Hollingsworth et al. (1985) is widely used and is an easy method to prepare synaptoneurosomes. The hand processing steps in that preparation, however, are labor intensive and have become a bottleneck for current proteomic studies using synaptoneurosomes. For this reason, we developed new steps for tissue homogenization and filtration that transform the preparation of synaptoneurosomes to a high-throughput, semi-automated process. We implemented a standardized protocol with easy to follow steps for homogenizing multiple samples simultaneously using a FastPrep tissue homogenizer (MP Biomedicals, LLC) and then filtering all of the samples in centrifugal filter units (EMD Millipore, Corp). The new steps dramatically reduce the time to prepare synaptoneurosomes from hours to minutes, increase sample recovery, and nearly double enrichment for synaptic proteins. These steps are also compatible with biosafety requirements for working with pathogen infected brain tissue. The new high-throughput semi-automated steps to prepare synaptoneurosomes are timely technical advances for studies of low abundance synaptic proteins in valuable tissue samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Fast-tracking determination of homozygous transgenic lines and transgene stacking using a reliable quantitative real-time PCR assay.

    PubMed

    Wang, Xianghong; Jiang, Daiming; Yang, Daichang

    2015-01-01

    The selection of homozygous lines is a crucial step in the characterization of newly generated transgenic plants. It is particularly time- and labor-consuming when transgene stacking is required. Here, we report a fast and accurate method based on quantitative real-time PCR, with the rice gene RBE4 as a reference gene, for the selection of homozygous lines when stacking multiple transgenes in rice. This method allowed the stacking of up to three transgenes to be determined within four generations. Selection accuracy reached 100% for a single locus and 92.3% for two loci. This method confers distinct advantages over current transgenic research methodologies, as it is more accurate, rapid, and reliable. Therefore, this protocol can be used to efficiently select homozygous plants and to expedite the time- and labor-consuming processes normally required for multiple transgene stacking. The protocol was standardized for determination of multiple gene stacking in molecular breeding via marker-assisted selection.
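
    A Python sketch of the zygosity call implied above, assuming the standard relative-quantification arithmetic 2^-ΔCt against a single-copy reference gene (RBE4 in the paper): homozygous lines carry twice the transgene dose of hemizygous ones. The Ct values are fabricated, and the paper does not state that this exact calculation is used.

        def copy_ratio(ct_transgene, ct_reference):
            """Transgene dose relative to the reference gene: 2**(-dCt)."""
            return 2 ** -(ct_transgene - ct_reference)

        calibrator = copy_ratio(24.1, 23.1)        # known hemizygous control
        samples = [("line A", 23.0, 23.0), ("line B", 24.2, 23.2)]
        for name, ct_t, ct_r in samples:
            rel = copy_ratio(ct_t, ct_r) / calibrator
            call = "homozygous" if rel > 1.5 else "hemizygous"
            print(name, round(rel, 2), call)       # A: 2.0 homo, B: 1.0 hemi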

  13. The Effects of Multiple-Step and Single-Step Directions on Fourth and Fifth Grade Students' Grammar Assessment Performance

    ERIC Educational Resources Information Center

    Mazerik, Matthew B.

    2006-01-01

    The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110), across the teacher-administered Grammar Skills Test, were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…

  14. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    PubMed

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applicability of the SMART approach across different instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
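
    A small Python sketch of the CE-optimization step described above: given extracted-ion-chromatogram responses for one ion pair across the serial stepped-CE MS(All) scans, the optimal collision energy is simply the step with the highest response. The numbers are invented.

        import numpy as np

        ce_steps = np.arange(10, 55, 5)     # stepped collision energies [eV]
        xic_area = np.array([2, 8, 30, 95, 160, 140, 90, 40, 12])  # one pair

        best_ce = ce_steps[xic_area.argmax()]
        print(f"SMART-predicted optimal CE: {best_ce} eV")
        # Repeating this per ion pair yields the full MRM parameter table
        # without per-compound infusion of standards.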

  15. Comparing the development of the multiplication of fractions in Turkish and American textbooks

    NASA Astrophysics Data System (ADS)

    Kar, Tuğrul; Güler, Gürsel; Şen, Ceylan; Özdemir, Ercan

    2018-02-01

    This study analyzed the methods used to teach the multiplication of fractions in Turkish and American textbooks. Two Turkish textbooks and two American textbooks, Everyday Mathematics (EM) and Connected Mathematics 3 (CM), were analyzed. The analyses focused on the content and the nature of the mathematical problems presented in the textbooks. The findings of the study showed that the American textbooks aimed at developing conceptual understanding first and then procedural fluency, whereas the Turkish textbooks aimed at developing both concurrently. The American textbooks provided more opportunities for different computational strategies. The solutions to most problems in all textbooks required a single computational step, a numerical answer, and procedural knowledge. Furthermore, compared with the Turkish textbooks, the American textbooks contained a greater number of problems that required high-level cognitive skills such as mathematical reasoning.

  16. Technical note: Avoiding the direct inversion of the numerator relationship matrix for genotyped animals in single-step genomic best linear unbiased prediction solved with the preconditioned conjugate gradient.

    PubMed

    Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I

    2017-01-01

    This paper evaluates an efficient implementation to multiply the inverse of the numerator relationship matrix for genotyped animals by a vector. The computation is required for solving the mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG) algorithm. The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of the numerator relationship matrix of the genotyped animals and their ancestors. The elements of this inverse were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The multiplication was implemented as a series of sparse matrix-vector multiplications. Diagonal elements of the inverse, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion using 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Only <1 s was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, the inverse of the relationship matrix is no longer a limiting factor in the computations.
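
    A Python sketch in the spirit of the implementation above: never form a dense inverse, but let each PCG iteration apply sparse matrix-vector products only. A random sparse symmetric positive-definite matrix stands in for the relationship-matrix blocks; this is not the paper's code.

        import numpy as np
        from scipy.sparse import identity, random as sprandom
        from scipy.sparse.linalg import LinearOperator, cg

        n = 2000
        S = sprandom(n, n, density=0.002, random_state=0)
        A = S @ S.T + identity(n)            # sparse SPD stand-in matrix
        b = np.ones(n)

        # Matrix-free operator: each PCG iteration only needs sparse mat-vecs.
        op = LinearOperator((n, n), matvec=lambda v: A @ v)
        x, info = cg(op, b, atol=1e-10)
        print("converged:", info == 0,
              "residual:", np.linalg.norm(A @ x - b))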

  17. Stretching single atom contacts at multiple subatomic step-length.

    PubMed

    Wei, Yi-Min; Liang, Jing-Hong; Chen, Zhao-Bin; Zhou, Xiao-Shun; Mao, Bing-Wei; Oviedo, Oscar A; Leiva, Ezequiel P M

    2013-08-14

    This work describes jump-to-contact STM break-junction experiments that yield a novel statistical distribution of the last-step length associated with the conductance of a single-atom contact. Last-step length histograms show up to five peaks for Fe and three for Cu, at integral multiples of approximately 0.075 nm, a subatomic distance. A model is proposed in terms of gliding from an fcc hollow site to an hcp hollow site of adjacent atomic planes, at 1/3 of the regular layer spacing, along with tip stretching, to account for the multiple subatomic step-length behavior.

  18. The HIV care continuum: no partial credit given.

    PubMed

    McNairy, Margaret L; El-Sadr, Wafaa M

    2012-09-10

    Despite significant scale-up of HIV care and treatment across the world, overall effectiveness of HIV programs is severely undermined by attrition of patients across the HIV care continuum, both in resource-rich and resource-limited settings. The care continuum has four essential steps: linkage from testing to enrollment in care, determination of antiretroviral therapy (ART) eligibility, ART initiation, and adherence to medications to achieve viral suppression. In order to substantially improve health outcomes for the individual and potentially for prevention of transmission to others, each of the steps of the entire care continuum must be achieved. This will require the adoption of interventions that address the multiplicity of barriers and social contexts faced by individuals and populations across each step, a reconceptualization of services to maximize engagement in care, and ambitious evaluation of program performance using all-or-none measurement.

  19. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.
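
    A short Python simulation sketch of the property the method exploits: the bilinear system dx/dt = Ax + Nxu + Bu behaves as the linear system dx/dt = (A + u0*N)x + B*u0 whenever the input is held constant at u0, which is why pulse responses at constant inputs expose the system matrices to linear identification machinery. The matrices are arbitrary small examples, not from the paper.

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        N = np.array([[0.0, 0.0], [0.3, 0.0]])   # state-input coupling
        B = np.array([[0.0], [1.0]])

        def constant_input_response(u0, t=1.0):
            """State at time t, starting from rest, with input held at u0."""
            Aeff = A + u0 * N                    # effective linear dynamics
            # x(t) = Aeff^{-1} (e^{Aeff t} - I) B u0 for x(0) = 0
            return np.linalg.solve(Aeff, (expm(Aeff * t) - np.eye(2)) @ (B * u0))

        print(constant_input_response(1.0).ravel())
        print(constant_input_response(2.0).ravel())  # not 2x: bilinear effect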

  20. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    NASA Astrophysics Data System (ADS)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is fundamentally limited by the field of view, a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties exist in both the translational and rotational motions. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during translational motion in the sample mount is also discussed.

  1. A modular platform for one-step assembly of multi-component membrane systems by fusion of charged proteoliposomes

    NASA Astrophysics Data System (ADS)

    Ishmukhametov, Robert R.; Russell, Aidan N.; Berry, Richard M.

    2016-10-01

    An important goal in synthetic biology is the assembly of biomimetic cell-like structures, which combine multiple biological components in synthetic lipid vesicles. A key limiting assembly step is the incorporation of membrane proteins into the lipid bilayer of the vesicles. Here we present a simple method for delivery of membrane proteins into a lipid bilayer within 5 min. Fusogenic proteoliposomes, containing charged lipids and membrane proteins, fuse with oppositely charged bilayers, with no requirement for detergent or fusion-promoting proteins, and deliver large, fragile membrane protein complexes into the target bilayers. We demonstrate the feasibility of our method by assembling a minimal electron transport chain capable of adenosine triphosphate (ATP) synthesis, combining Escherichia coli F1Fo ATP-synthase and the primary proton pump bo3-oxidase, into synthetic lipid vesicles with sizes ranging from 100 nm to ~10 μm. This provides a platform for the combination of multiple sets of membrane protein complexes into cell-like artificial structures.

  2. Multiscale connectivity and graph theory highlight critical areas for conservation under climate change.

    PubMed

    Dilt, Thomas E; Weisberg, Peter J; Leitner, Philip; Matocq, Marjorie D; Inman, Richard D; Nussear, Kenneth E; Esque, Todd C

    2016-06-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multiscale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this threatened Californian species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat and should be applicable across a broad range of taxa.

  3. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the drive signal vector that gives an acceptable replication of the target. For MIMO Sine Control tests, this target is a (complex) vector with magnitude and phase information at the control points, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing step-by-step updates of the Jacobian matrix. The theoretical bases of the work are followed by an application of the developed method to a two-exciter, two-axis system and by performance comparisons with standard methods.

  4. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    PubMed

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.

  5. Gemi: PCR Primers Prediction from Multiple Alignments

    PubMed Central

    Sobhy, Haitham; Colson, Philippe

    2012-01-01

    Designing primers and probes for polymerase chain reaction (PCR) is a preliminary and critical step that requires the identification of highly conserved regions in a given set of sequences. This task can be challenging if the targeted sequences display a high level of diversity, as frequently encountered in microbiologic studies. We developed Gemi, an automated, fast, and easy-to-use bioinformatics tool with a user-friendly interface to design primers and probes based on multiple aligned sequences. This tool can be used for real-time and conventional PCR and can deal efficiently with large sets of long sequences. PMID:23316117

  6. Two Simple and Efficient Algorithms to Compute the SP-Score Objective Function of a Multiple Sequence Alignment.

    PubMed

    Ranwez, Vincent

    2016-01-01

    Multiple sequence alignment (MSA) is a crucial step in many molecular analyses, and many MSA tools have been developed. Most of them use a greedy approach to construct a first alignment that is then refined by optimizing the sum-of-pairs score (SP-score). The SP-score estimation is thus a bottleneck for most MSA tools, since it is repeatedly required and is time consuming. Given an alignment of n sequences and L sites, I introduce here optimized solutions reaching O(nL) time complexity for affine gap costs, instead of O(n²L), which are easy to implement.
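
    A minimal sketch of the standard counting idea behind fast SP-score computation, not the paper's full affine-gap algorithm: with per-column character counts, the substitution part of the SP-score costs O(sigma²) per column instead of O(n²). The score function and gap handling below are illustrative assumptions.

      from collections import Counter

      def sp_substitution_score(alignment, score):
          # alignment: list of equal-length strings; score(a, b): pair score.
          # Gaps ('-') are simply skipped here; affine gap costs are the
          # harder part addressed by the paper.
          total = 0
          for col in zip(*alignment):
              counts = Counter(c for c in col if c != '-')
              chars = list(counts)
              for i, a in enumerate(chars):
                  # pairs of identical characters: count choose 2
                  total += score(a, a) * (counts[a] * (counts[a] - 1) // 2)
                  for b in chars[i + 1:]:
                      total += score(a, b) * counts[a] * counts[b]
          return total

      aln = ["ACG-T", "ACGGT", "A-GGT"]
      print(sp_substitution_score(aln, lambda a, b: 1 if a == b else 0))  # 11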

  7. Find-me and eat-me signals in apoptotic cell clearance: progress and conundrums

    PubMed Central

    2010-01-01

    Every day we turn over billions of cells. The quick, efficient, and immunologically silent disposal of the dying cells requires a coordinated orchestration of multiple steps, through which phagocytes selectively recognize and engulf apoptotic cells. Recent studies have suggested an important role for soluble mediators released by apoptotic cells that attract phagocytes (“find-me” signals). New information has also emerged on multiple receptors that can recognize phosphatidylserine, the key “eat-me” signal exposed on the surface of apoptotic cells. This perspective discusses recent exciting progress, gaps in our understanding, and the conflicting issues that arise from the newly acquired knowledge. PMID:20805564

  8. Quantification of triglyceride content in oleaginous materials using thermo-gravimetry

    DOE PAGES

    Maddi, Balakrishna; Vadlamani, Agasteswar; Viamajala, Sridhar; ...

    2017-10-16

    Laboratory analytical methods for quantification of triglyceride content in oleaginous biomass samples, especially microalgae, require toxic chemicals and/or organic solvents and involve multiple steps. We describe a simple triglyceride quantification method that uses thermo-gravimetry. This method is based on the observation that triglycerides undergo near-complete volatilization/degradation over a narrow temperature interval with a derivative weight loss peak at 420 °C when heated in an inert atmosphere. Degradation of the other constituents of oleaginous biomass (protein and carbohydrates) is largely complete after prolonged exposure of samples at 320 °C. Based on these observations, the triglyceride content of oleaginous biomass was estimated by using the following two-step process. In Step 1, samples were heated to 320 °C and kept isothermal at this temperature for 15 min. In Step 2, samples were heated from 320 °C to 420 °C and then kept isothermal at 420 °C for 15 min. The results show that mass loss in Step 2 correlated well with triglyceride content estimates obtained from conventional techniques for diverse microalgae and oilseed samples.
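
    A minimal sketch of the mass-balance arithmetic implied by this record, assuming the Step 2 mass loss (320 °C to 420 °C plus the 420 °C hold) is attributed entirely to triglycerides; the readings below are hypothetical.

      def triglyceride_content(m_initial, m_after_320, m_after_420):
          # Masses from the thermogram (e.g., in mg); returns triglyceride
          # content as wt% of the initial sample.
          step2_loss = m_after_320 - m_after_420
          return 100.0 * step2_loss / m_initial

      # Hypothetical readings: 10.0 mg initial, 7.2 mg after Step 1, 4.9 mg after Step 2
      print(f"{triglyceride_content(10.0, 7.2, 4.9):.1f} wt%")  # 23.0 wt%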

  9. Quantification of triglyceride content in oleaginous materials using thermo-gravimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddi, Balakrishna; Vadlamani, Agasteswar; Viamajala, Sridhar

    Laboratory analytical methods for quantification of triglyceride content in oleaginous biomass samples, especially microalgae, require toxic chemicals and/or organic solvents and involve multiple steps. We describe a simple triglyceride quantification method that uses thermo-gravimetry. This method is based on the observation that triglycerides undergo near-complete volatilization/degradation over a narrow temperature interval with a derivative weight loss peak at 420 °C when heated in an inert atmosphere. Degradation of the other constituents of oleaginous biomass (protein and carbohydrates) is largely complete after prolonged exposure of samples at 320 °C. Based on these observations, the triglyceride content of oleaginous biomass was estimated by using the following two-step process. In Step 1, samples were heated to 320 °C and kept isothermal at this temperature for 15 min. In Step 2, samples were heated from 320 °C to 420 °C and then kept isothermal at 420 °C for 15 min. The results show that mass loss in Step 2 correlated well with triglyceride content estimates obtained from conventional techniques for diverse microalgae and oilseed samples.

  10. Selectivity Mechanism of the Nuclear Pore Complex Characterized by Single Cargo Tracking

    PubMed Central

    Lowe, Alan R.; Siegel, Jake J.; Kalab, Petr; Siu, Merek; Weis, Karsten; Liphardt, Jan T.

    2010-01-01

    The Nuclear Pore Complex (NPC) mediates all exchange between the cytoplasm and the nucleus. Small molecules can passively diffuse through the NPC, while larger cargos require transport receptors to translocate. How the NPC facilitates the translocation of transport receptor/cargo complexes remains unclear. Here, we track single protein-functionalized Quantum Dot (QD) cargos as they translocate the NPC. Import proceeds by successive sub-steps comprising cargo capture, filtering and translocation, and release into the nucleus. The majority of QDs are rejected at one of these steps and return to the cytoplasm, including very large cargos that abort at a size-selective barrier. Cargo movement in the central channel is subdiffusive, and cargos that can bind more transport receptors diffuse more freely. Without Ran, cargos still explore the entire NPC but have a markedly reduced probability of exit into the nucleus, suggesting that NPC entry and exit steps are not equivalent and that the pore is functionally asymmetric to importing cargos. The overall selectivity of the NPC appears to arise from the cumulative action of multiple reversible sub-steps and a final irreversible exit step. PMID:20811366

  11. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reaction set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge base for generating the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous-flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generation of environmental and other performance indicators, such as cost indicators. A further challenge identified, however, is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.

  12. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real-world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used the data and results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but its performance for multiple-step prediction needs to be improved. The method works well for a wide range of parameters.

  13. Human Cytomegalovirus Strategies to Maintain and Promote mRNA Translation

    PubMed Central

    Vincent, Heather A.; Ziehr, Benjamin; Moorman, Nathaniel J.

    2016-01-01

    mRNA translation requires the ordered assembly of translation initiation factors and ribosomal subunits on a transcript. Host signaling pathways regulate each step in this process to match levels of protein synthesis to environmental cues. In response to infection, cells activate multiple defenses that limit viral protein synthesis, which viruses must counteract to successfully replicate. Human cytomegalovirus (HCMV) inhibits host defenses that limit viral protein expression and manipulates host signaling pathways to promote the expression of both host and viral proteins necessary for virus replication. Here we review key regulatory steps in mRNA translation, and the strategies used by HCMV to maintain protein synthesis in infected cells. PMID:27089357

  14. Managing bay and estuarine ecosystems for multiple services

    USGS Publications Warehouse

    Needles, Lisa A.; Lester, Sarah E.; Ambrose, Richard; Andren, Anders; Beyeler, Marc; Connor, Michael S.; Eckman, James E.; Costa-Pierce, Barry A.; Gaines, Steven D.; Lafferty, Kevin D.; Lenihan, Junter S.; Parrish, Julia; Peterson, Mark S.; Scaroni, Amy E.; Weis, Judith S.; Wendt, Dean E.

    2013-01-01

    Managers are moving from a model of managing individual sectors, human activities, or ecosystem services to an ecosystem-based management (EBM) approach, which attempts to balance the range of services provided by ecosystems. Applying EBM is often difficult due to inherent tradeoffs in managing for different services. This challenge holds particularly for estuarine systems, which have been heavily altered in most regions and are often subject to intense management interventions. Estuarine managers can often choose among a range of management tactics to enhance a particular service; although some management actions will result in strong tradeoffs, others may enhance multiple services simultaneously. Management of estuarine ecosystems could be improved by distinguishing between optimal management actions for enhancing multiple services and those that have severe tradeoffs. This requires a framework that evaluates tradeoff scenarios and identifies management actions likely to benefit multiple services. We created a management action-services matrix as a first step towards assessing tradeoffs and providing managers with a decision support tool. We found that management actions that restored or enhanced natural vegetation (e.g., salt marsh and mangroves) and some shellfish (particularly oysters and oyster reef habitat) benefited multiple services. In contrast, management actions such as desalination, salt pond creation, sand mining, and large container shipping had large net negative effects on several of the other services considered in the matrix. Our framework provides resource managers a simple way to inform EBM decisions and can also be used as a first step in more sophisticated approaches that model service delivery.

  15. The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.

    PubMed

    Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou

    2017-11-01

    The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals for a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, whereas the novel approach consisted of four steps in sequence: percutaneous drainage, negative-pressure irrigation, endoscopic necrosectomy, and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, and duration of organ failure and sepsis. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). Although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of acute pancreatitis. Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the requirement for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.

  16. Graded Multiple Choice Questions: Rewarding Understanding and Preventing Plagiarism

    NASA Astrophysics Data System (ADS)

    Denyer, G. S.; Hancock, D.

    2002-08-01

    This paper describes an easily implemented method that allows the generation and analysis of graded multiple-choice examinations. The technique, which uses standard functions in user-end software (Microsoft Excel 5+), can also produce several different versions of an examination, which can be employed to prevent rewarding plagiarism. The manuscript also discusses the advantages of a graded marking system for the elimination of ambiguities, for multi-step calculation questions, and for questions that require extrapolation or reasoning. The advantages of the scrambling strategy, which maintains the same question order, are discussed with reference to student equity. The system provides a non-confrontational mechanism for dealing with cheating in large-class multiple-choice examinations, as well as providing a reward for problem solving over surface learning.
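
    A minimal sketch, in Python rather than the paper's Excel functions, of the scrambling strategy described: the question order stays fixed, only the answer options are shuffled per exam version, and each version's key is recorded. All names and data here are illustrative.

      import random

      def make_version(questions, seed):
          # questions: list of (stem, options) with the correct option first.
          # Returns the scrambled paper and its answer key.
          rng = random.Random(seed)            # one seed per exam version
          paper, key = [], []
          for stem, options in questions:
              order = list(range(len(options)))
              rng.shuffle(order)               # scramble options, not questions
              paper.append((stem, [options[i] for i in order]))
              key.append("ABCD"[order.index(0)])  # where the right answer landed
          return paper, key

      qs = [("2 + 2 = ?", ["4", "3", "5", "22"])]
      paper, key = make_version(qs, seed=1)
      print(paper, key)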

  17. Design and Implementation of nine level multilevel Inverter

    NASA Astrophysics Data System (ADS)

    Dhineshkumar, K.; Subramani, C.

    2018-04-01

    In this paper, a solar-based nine-level multilevel inverter with an integrated boost converter is presented. It uses 7 switches to produce a nine-level stepped output waveform. The aim of the work is to produce the 9-level waveform from a solar source via a boost converter. A conventional inverter of this class requires multiple voltage sources and 16 switches. The proposed inverter requires a single solar panel and a reduced number of switches, with the integrated boost converter increasing the input voltage of the inverter. The proposed inverter was simulated with an R load in MATLAB, and a prototype model was experimentally verified. The proposed inverter can be used in numerous solar applications.
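
    A minimal sketch of what a nine-level stepped waveform is: a sine reference quantized to 9 discrete levels, which a multilevel inverter approximates through its switching states. This is an illustration only, not the authors' 7-switch topology.

      import numpy as np

      t = np.linspace(0.0, 0.02, 1000)              # one 50 Hz cycle
      reference = 4.0 * np.sin(2 * np.pi * 50 * t)  # scaled sine reference
      nine_level = np.round(reference)              # levels -4 ... +4
      print(sorted(set(nine_level.tolist())))       # the 9 distinct levels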

  18. Implementing Immediate Postpartum Long-Acting Reversible Contraception Programs.

    PubMed

    Hofler, Lisa G; Cordes, Sarah; Cwiak, Carrie A; Goedken, Peggy; Jamieson, Denise J; Kottke, Melissa

    2017-01-01

    To understand the most important steps required to implement immediate postpartum long-acting reversible contraception (LARC) programs in different Georgia hospitals and the barriers to implementing such a program. This was a qualitative study. We interviewed 32 key personnel from 10 Georgia hospitals working to establish immediate postpartum LARC programs. Data were analyzed using directed qualitative content analysis principles. We used the Stages of Implementation to organize participant-identified key steps for immediate postpartum LARC into an implementation guide. We compared this guide to hospitals' implementation experiences. At the completion of the study, LARC was available for immediate postpartum placement at 7 of 10 study hospitals. Participants identified common themes for the implementation experience: team member identification and ongoing communication, payer preparedness challenges, interdependent department-specific tasks, and piloting with continuing improvements. Participants expressed a need for anticipatory guidance throughout the process. Key first steps to immediate postpartum LARC program implementation were identifying project champions, creating an implementation team that included all relevant departments, obtaining financial reassurance, and ensuring hospital administration awareness of the project. Potential barriers included lack of knowledge about immediate postpartum LARC, financial concerns, and competing clinical and administrative priorities. Hospitals that were successful at implementing immediate postpartum LARC programs did so by prioritizing clear communication and multidisciplinary teamwork. Although the implementation guide reflects a comprehensive assessment of the steps to implementing immediate postpartum LARC programs, not all hospitals required every step to succeed. Hospital teams report that implementing immediate postpartum LARC programs involves multiple departments and a number of important steps to consider. A stage-based approach to implementation, and a standardized guide detailing these steps, may provide the necessary structure for the complex process of implementing immediate postpartum LARC programs in the hospital setting.

  19. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required when using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions, each image coming from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  20. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
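
    A minimal sketch of the voxel-wise difference idea on synthetic data: subtract successive time steps of a 3-D volume and accumulate the changes over time, as one might feed into a ray-casting renderer. The array shapes are assumptions.

      import numpy as np

      volumes = np.random.rand(4, 64, 64, 32)       # 4 time steps of a 3-D scan
      diffs = np.diff(volumes, axis=0)              # change between time steps
      cumulative = np.cumsum(np.abs(diffs), axis=0) # cumulative change over time
      print(diffs.shape, cumulative[-1].max())      # (3, 64, 64, 32), peak change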

  1. Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.

    PubMed

    Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro

    2018-06-13

    The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.

  2. Method for exponentiating in cryptographic systems

    DOEpatents

    Brickell, Ernest F.; Gordon, Daniel M.; McCurley, Kevin S.

    1994-01-01

    An improved cryptographic method utilizing exponentiation is provided which has the advantage of reducing the number of multiplications required to determine the legitimacy of a message or user. The basic method comprises the steps of selecting a key from a preapproved group of integer keys g; exponentiating the key by an integer value e, where e represents a digital signature, to generate a value g^e; transmitting the value g^e to a remote facility by a communications network; receiving the value g^e at the remote facility; and verifying the digital signature as originating from the legitimate user. The exponentiating step comprises the steps of initializing a plurality of memory locations with a plurality of values g^xi; computing … The United States Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the Department of Energy and AT&T Company.
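
    A minimal sketch of the general precomputation idea named in this record: store powers g^(x_i) for a fixed base ahead of time so that a later exponentiation g^e needs only a few multiplications. Here x_i = 2^i (plain binary decomposition); the patent's specific schedule of stored values may differ.

      def precompute(g, modulus, bits):
          # Table of g^(2^i) mod modulus for i = 0 .. bits-1.
          table, cur = [], g % modulus
          for _ in range(bits):
              table.append(cur)
              cur = cur * cur % modulus
          return table

      def fixed_base_pow(table, e, modulus):
          # g^e mod modulus using only multiplications against the table.
          result, i = 1, 0
          while e:
              if e & 1:
                  result = result * table[i] % modulus
              e >>= 1
              i += 1
          return result

      p = 1000003                                  # illustrative prime modulus
      tab = precompute(5, p, 20)
      assert fixed_base_pow(tab, 12345, p) == pow(5, 12345, p)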

  3. A nursing-specific model of EPR documentation: organizational and professional requirements.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn

    2008-01-01

    To present the Norwegian KPO documentation model (quality assurance, problem solving, and caring) and the requirements and multiple electronic patient record (EPR) functions the model is designed to address. The model's professional substance, a conceptual framework for nursing practice, was developed by examining, reorganizing, and completing existing frameworks. The model's methodology, an information management system, was developed using an expert group. Both model elements were clinically tested over a period of 1 year. The model is designed for nursing documentation in step with statutory, organizational, and professional requirements. Complete documentation is arranged for by incorporating the Nursing Minimum Data Set. Systematic and comprehensive documentation is arranged for by establishing categories as provided in the model's framework domains. Consistent documentation is arranged for by incorporating NANDA-I Nursing Diagnoses, the Nursing Intervention Classification, and the Nursing Outcome Classification. The model can be used as a tool in cooperation with vendors to ensure that the interests of the nursing profession are met when developing EPR solutions in healthcare. The model can provide clinicians with a framework for documentation in step with legal and organizational requirements while retaining the ability to record all aspects of clinical nursing.

  4. Contact formation and gettering of precipitated impurities by multiple firing during semiconductor device fabrication

    DOEpatents

    Sopori, Bhushan

    2014-05-27

    Methods for contact formation and gettering of precipitated impurities by multiple firing during semiconductor device fabrication are provided. In one embodiment, a method for fabricating an electrical semiconductor device comprises: a first step that includes gettering of impurities from a semiconductor wafer and forming a backsurface field; and a second step that includes forming a front contact for the semiconductor wafer, wherein the second step is performed after completion of the first step.

  5. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and also result from the textile structure and the material used. A huge number of iteration steps are necessary to adjust the process parameters to finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected; the aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  6. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800x2,400 pixels. Because of the size of the database, high reliability as well as high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  7. RNA helicase MOV10 functions as a co-factor of HIV-1 Rev to facilitate Rev/RRE-dependent nuclear export of viral mRNAs.

    PubMed

    Huang, Feng; Zhang, Junsong; Zhang, Yijun; Geng, Guannan; Liang, Juanran; Li, Yingniang; Chen, Jingliang; Liu, Chao; Zhang, Hui

    2015-12-01

    Human immunodeficiency virus type 1 (HIV-1) exploits multiple host factors during its replication. The Rev/RRE-dependent nuclear export of unspliced/partially spliced viral transcripts needs the assistance of host proteins. Recent studies have shown that MOV10 overexpression inhibited HIV-1 replication at various steps; however, endogenous MOV10 was required in certain step(s) of HIV-1 replication. In this report, we found that MOV10 potently enhances the nuclear export of viral mRNAs and subsequently increases the expression of Gag protein and other late products by affecting the Rev/RRE axis. Co-immunoprecipitation analysis indicated that MOV10 interacts with Rev in an RNA-independent manner. The DEAG-box of MOV10 was required for the enhancement of Rev/RRE-dependent nuclear export, and the DEAG-box mutant showed dominant-negative activity. Our data suggest that HIV-1 utilizes the anti-viral factor MOV10 as a co-factor of Rev and demonstrate the complicated effects of MOV10 on the HIV-1 life cycle. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
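
    A toy illustration of the E-step/M-step alternation the record describes, using a one-dimensional two-source mixture instead of the paper's phase-based DOA model: the E-step softly assigns each observation to a source, and the M-step re-estimates each source's parameter by maximum likelihood. All data here are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
      mu = np.array([-1.0, 1.0])                   # initial source parameters

      for _ in range(50):
          # E-step: responsibilities (equal priors and unit variances assumed)
          logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
          r = np.exp(logp - logp.max(axis=1, keepdims=True))
          r /= r.sum(axis=1, keepdims=True)
          # M-step: ML update of each source mean under the soft assignments
          mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

      print(mu)                                    # approximately [-2, 3]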

  9. Autoantigen La promotes efficient RNAi, antiviral response, and transposon silencing by facilitating multiple-turnover RISC catalysis

    PubMed Central

    Liu, Ying; Tan, Huiling; Tian, Hui; Liang, Chunyang; Chen, She; Liu, Qinghua

    2011-01-01

    SUMMARY The effector of RNA interference (RNAi) is the RNA-induced silencing complex (RISC). C3PO promotes the activation of RISC by degrading Argonaute2 (Ago2)-nicked passenger strand of duplex siRNA. Active RISC is a multiple-turnover enzyme that uses the guide strand of siRNA to direct Ago2-mediated sequence-specific cleavage of complementary mRNA. How this effector step of RNAi is regulated is currently unknown. Here, we used human Ago2 minimal RISC system to purify Sjögren’s syndrome antigen B (SSB)/autoantigen La as an activator of the RISC-mediated mRNA cleavage activity. Our reconstitution studies showed that La could promote multiple-turnover RISC catalysis by facilitating the release of cleaved mRNA from RISC. Moreover, we demonstrated that La was required for efficient RNAi, antiviral defense, and transposon silencing in vivo. Taken together, the findings of C3PO and La reveal a general concept that regulatory factors are required to remove Ago2-cleaved products to assemble or restore active RISC. PMID:22055194

  10. The performance analysis of three-dimensional track-before-detect algorithm based on Fisher-Tippett-Gnedenko theorem

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan

    2016-09-01

    Tracking dim moving targets in infrared image sequences in the presence of heavy clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before decisions on the target track and existence are made, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also rigorously analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. We also establish the relationship between the pixel coordinates of the image frames and the reference coordinates.
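
    A minimal sketch, on synthetic 2-D frames, of the dynamic programming recursion at the heart of TBD-DP (not the paper's 3-D multi-sensor version): the merit of a pixel state at frame k is its intensity plus the best merit among the states reachable from frame k-1. The wrap-around at image borders from np.roll is a simplification.

      import numpy as np

      def tbd_dp(frames, reach=1):
          # frames: (K, H, W) intensity stack; reach: max per-frame motion.
          # Returns the final merit map; a target leaves a high-merit trail.
          merit = frames[0].astype(float)
          for frame in frames[1:]:
              best_prev = np.full_like(merit, -np.inf)
              for dy in range(-reach, reach + 1):
                  for dx in range(-reach, reach + 1):
                      shifted = np.roll(np.roll(merit, dy, axis=0), dx, axis=1)
                      best_prev = np.maximum(best_prev, shifted)
              merit = frame + best_prev            # accumulate along tracks
          return merit

      frames = np.random.rand(10, 32, 32)          # noise-only example
      print(tbd_dp(frames).max())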

  11. Reactive stepping behaviour in response to forward loss of balance predicts future falls in community-dwelling older adults.

    PubMed

    Carty, Christopher P; Cronin, Neil J; Nicholson, Deanne; Lichtwark, Glen A; Mills, Peter M; Kerr, Graham; Cresswell, Andrew G; Barrett, Rod S

    2015-01-01

    A fall occurs when an individual experiences a loss of balance from which they are unable to recover. Assessment of balance recovery ability in older adults may therefore help to identify individuals at risk of falls. The purpose of this 12-month prospective study was to assess whether the ability to recover from a forward loss of balance with a single step, across a range of lean magnitudes, was predictive of falls. Two hundred and one community-dwelling older adults, aged 65-90 years, underwent baseline testing of sensori-motor function and balance recovery ability followed by 12-month prospective falls evaluation. Balance recovery ability was defined by whether participants required single or multiple steps to recover from forward loss of balance at three lean magnitudes, as well as the maximum lean magnitude participants could recover from with a single step. Forty-four (22%) participants experienced one or more falls during the follow-up period. Maximal recoverable lean magnitude and use of multiple steps to recover at the 15% body weight (BW) and 25%BW lean magnitudes significantly predicted a future fall (odds ratios 1.08-1.26). The Physiological Profile Assessment, an established tool that assesses a variety of sensori-motor aspects of falls risk, was also predictive of falls (odds ratios 1.22 and 1.27, respectively), whereas age, sex, postural sway, and timed up and go were not predictive. Reactive stepping behaviour in response to forward loss of balance and the Physiological Profile Assessment are independent predictors of a future fall in community-dwelling older adults. Exercise interventions designed to improve reactive stepping behaviour may protect against future falls. © The Author 2014. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. DDX41 Recognizes RNA/DNA Retroviral Reverse Transcripts and Is Critical for In Vivo Control of Murine Leukemia Virus Infection.

    PubMed

    Stavrou, Spyridon; Aguilera, Alexya N; Blouch, Kristin; Ross, Susan R

    2018-06-05

    Host recognition of viral nucleic acids generated during infection leads to the activation of innate immune responses essential for early control of virus. Retrovirus reverse transcription creates numerous potential ligands for cytosolic host sensors that recognize foreign nucleic acids, including single-stranded RNA (ssRNA), RNA/DNA hybrids, and double-stranded DNA (dsDNA). We and others recently showed that the sensors cyclic GMP-AMP synthase (cGAS), DEAD-box helicase 41 (DDX41), and members of the Aim2-like receptor (ALR) family participate in the recognition of retroviral reverse transcripts. However, why multiple sensors might be required and their relative importance in in vivo control of retroviral infection are not known. Here, we show that DDX41 primarily senses the DNA/RNA hybrid generated at the first step of reverse transcription, while cGAS recognizes dsDNA generated at the next step. We also show that both DDX41 and cGAS are needed for the antiretroviral innate immune response to murine leukemia virus (MLV) and HIV in primary mouse macrophages and dendritic cells (DCs). Using mice with cell type-specific knockout of the Ddx41 gene, we show that DDX41 sensing in DCs but not macrophages was critical for controlling in vivo MLV infection. This suggests that DCs are essential in vivo targets for infection, as well as for initiating the antiviral response. Our work demonstrates that the innate immune response to retrovirus infection depends on multiple host nucleic acid sensors that recognize different reverse transcription intermediates. IMPORTANCE Viruses are detected by many different host sensors of nucleic acid, which in turn trigger innate immune responses, such as type I interferon (IFN) production, required to control infection. We show here that at least two sensors are needed to initiate a highly effective innate immune response to retroviruses-DDX41, which preferentially senses the RNA/DNA hybrid generated at the first step of retrovirus replication, and cGAS, which recognizes double-stranded DNA generated at the second step. Importantly, we demonstrate using mice lacking DDX41 or cGAS that both sensors are needed for the full antiviral response needed to control in vivo MLV infection. These findings underscore the need for multiple host factors to counteract retroviral infection. Copyright © 2018 Stavrou et al.

  13. Can Reduced-Step Polishers Be as Effective as Multiple-Step Polishers in Enhancing Surface Smoothness?

    PubMed

    Kemaloglu, Hande; Karacolak, Gamze; Turkun, L Sebnem

    2017-02-01

    The aim of this study was to evaluate the effects of various finishing and polishing systems on the final surface roughness of a resin composite. Hypotheses tested were: (1) reduced-step polishing systems are as effective as multiple-step systems on reducing the surface roughness of a resin composite and (2) the number of application steps in an F/P system has no effect on reducing surface roughness. Ninety discs of a nano-hybrid resin composite were fabricated and divided into nine groups (n = 10). Except the control, all of the specimens were roughened prior to be polished by: Enamel Plus Shiny, Venus Supra, One-gloss, Sof-Lex Wheels, Super-Snap, Enhance/PoGo, Clearfil Twist Dia, and rubber cups. The surface roughness was measured and the surfaces were examined under scanning electron microscope. Results were analyzed with analysis of variance and Holm-Sidak's multiple comparisons test (p < 0.05). Significant differences were found among the surface roughness of all groups (p < 0.05). The smoothest surfaces were obtained under Mylar strips and the results were not different than Super-Snap, Enhance/PoGo, and Sof-Lex Spiral Wheels. The group that showed the roughest surface was the rubber cup group and these results were similar to those of the One-gloss, Enamel Plus Shiny, and Venus Supra groups. (1) The number of application steps has no effect on the performance of F/P systems. (2) Reduced-step polishers used after a finisher can be preferable to multiple-step systems when used on nanohybrid resin composites. (3) The effect of F/P systems on surface roughness seems to be material-dependent rather than instrument- or system-dependent. Reduced-step systems used after a prepolisher can be an acceptable alternative to multiple-step systems on enhancing the surface smoothness of a nanohybrid composite; however, their effectiveness depends on the materials' properties. (J Esthet Restor Dent 29:31-40, 2017). © 2016 Wiley Periodicals, Inc.

  14. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment

    ERIC Educational Resources Information Center

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…

  15. Effect of handpiece maintenance method on bond strength.

    PubMed

    Roberts, Howard W; Vandewalle, Kraig S; Charlton, David G; Leonard, Daniel L

    2005-01-01

    This study evaluated the effect of dental handpiece lubricant on the shear bond strength of three bonding agents to dentin. A lubrication-free handpiece (one that does not require the user to lubricate it) and a handpiece requiring routine lubrication were used in the study. In addition, two different handpiece lubrication methods (automated versus manual application) were investigated. One hundred and eighty extracted human teeth were ground to expose flat dentin surfaces that were then finished with wet silicon carbide paper. The teeth were randomly divided into 18 groups (n=10). The dentin surface of each specimen was exposed for 30 seconds to water spray from either a lubrication-free handpiece or a lubricated handpiece. Prior to exposure, various lubrication regimens were used on the handpieces that required lubrication. The dentin surfaces were then treated with a total-etch, two-step; a self-etch, two-step; or a self-etch, one-step bonding agent. Resin composite cylinders were bonded to dentin, and the specimens were then thermocycled and tested to failure in shear at seven days. Mean bond strength data were analyzed using Dunnett's multiple comparison test at a 0.05 level of significance. Results indicated that, within each of the bonding agents, there were no significant differences in bond strength between the control group and the treatment groups regardless of the type of handpiece or the use of routine lubrication.

  16. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets

    PubMed Central

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-01-01

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891

  17. A simple and inexpensive on-column frit fabrication method for fused-silica capillaries for increased capacity and versatility in LC-MS/MS applications.

    PubMed

    Wang, Ling-Chi; Okitsu, Cindy Yen; Kochounian, Harold; Rodriguez, Anthony; Hsieh, Chih-Lin; Zandi, Ebrahim

    2008-05-01

    A modified sol-gel method for one-step on-column frit preparation in fused-silica capillaries and its utility for peptide separation in LC-MS/MS is described. This method is inexpensive, reproducible, and does not require specialized equipment. Because the frit fabrication process does not damage the polyimide coating, the frit-fabricated column can be tightly connected on-line for high-pressure LC. These columns can replace any capillary liquid transfer tubing without any specialized connections upstream of a spray-tip column. Therefore, multiple columns with different phases can be connected in series for one- or multi-dimensional chromatography.

  18. Neural Correlates of Temporal Credit Assignment in the Parietal Lobe

    PubMed Central

    Eisenberg, Ian; Gottlieb, Jacqueline

    2014-01-01

    Empirical studies of decision making have typically assumed that value learning is governed by time, such that a reward prediction error arising at a specific time triggers temporally-discounted learning for all preceding actions. However, in natural behavior, goals must be acquired through multiple actions, and each action can have different significance for the final outcome. As is recognized in computational research, carrying out multi-step actions requires the use of credit assignment mechanisms that focus learning on specific steps, but little is known about the neural correlates of these mechanisms. To investigate this question we recorded neurons in the monkey lateral intraparietal area (LIP) during a serial decision task where two consecutive eye movement decisions led to a final reward. The underlying decision trees were structured such that the two decisions had different relationships with the final reward, and the optimal strategy was to learn based on the final reward at one of the steps (the “F” step) but ignore changes in this reward at the remaining step (the “I” step). In two distinct contexts, the F step was either the first or the second in the sequence, controlling for effects of temporal discounting. We show that LIP neurons had the strongest value learning and strongest post-decision responses during the transition after the F step regardless of the serial position of this step. Thus, the neurons encode correlates of temporal credit assignment mechanisms that allocate learning to specific steps independently of temporal discounting. PMID:24523935

  19. An Improved Single-Step Cloning Strategy Simplifies the Agrobacterium tumefaciens-Mediated Transformation (ATMT)-Based Gene-Disruption Method for Verticillium dahliae.

    PubMed

    Wang, Sheng; Xing, Haiying; Hua, Chenlei; Guo, Hui-Shan; Zhang, Jie

    2016-06-01

    The soilborne fungal pathogen Verticillium dahliae infects a broad range of plant species to cause severe diseases. The availability of Verticillium genome sequences has provided opportunities for large-scale investigations of individual gene function in Verticillium strains using Agrobacterium tumefaciens-mediated transformation (ATMT)-based gene-disruption strategies. Traditional ATMT vectors require multiple cloning steps and elaborate characterization procedures to achieve successful gene replacement; thus, these vectors are not suitable for high-throughput ATMT-based gene deletion. Several advancements have been made that either involve simplification of the steps required for gene-deletion vector construction or increase the efficiency of the technique for rapid recombinant characterization. However, an ATMT binary vector that is both simple and efficient is still lacking. Here, we generated a USER-ATMT dual-selection (DS) binary vector, which combines both the advantages of the USER single-step cloning technique and the efficiency of the herpes simplex virus thymidine kinase negative-selection marker. Highly efficient deletion of three different genes in V. dahliae using the USER-ATMT-DS vector enabled verification that this newly-generated vector not only facilitates the cloning process but also simplifies the subsequent identification of fungal homologous recombinants. The results suggest that the USER-ATMT-DS vector is applicable for efficient gene deletion and suitable for large-scale gene deletion in V. dahliae.

  20. Keeping a Step Ahead: formative phase of a workplace intervention trial to prevent obesity.

    PubMed

    Zapka, Jane; Lemon, Stephenie C; Estabrook, Barbara B; Jolicoeur, Denise G

    2007-11-01

    Ecological interventions hold promise for promoting overweight and obesity prevention in worksites. Given the paucity of evaluative research in the hospital worksite setting, considerable formative work is required for successful implementation and evaluation. This paper describes the formative phases of Step Ahead, a site-randomized controlled trial of a multilevel intervention that promotes physical activity and healthy eating in six hospitals in central Massachusetts. The purpose of the formative research phase was to increase the feasibility, effectiveness, and likelihood of sustainability of the intervention. The Step Ahead ecological intervention approach targets change at the organization, interpersonal work environment, and individual levels. The intervention was developed using fundamental steps of intervention mapping and important tenets of participatory research. Formative research methods were used to engage leadership support and assistance and to develop an intervention plan that is both theoretically and practically grounded. This report uses observational data, program minutes and reports, and process tracking data. Leadership involvement (key informant interviews and advisory boards), employee focus groups and advisory boards, and quantitative environmental assessments cultivated participation and support. Determining multiple foci of change and designing measurable objectives and generic assessment tools to document progress are complex challenges encountered in planning phases. Multilevel trials in diverse organizations require flexibility and balance of theory application and practice-based perspectives to affect impact and outcome objectives. Formative research is an essential component.

  1. Workflow Management for Complex HEP Analyses

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, R.; Rieger, M.; von Cube, R. F.

    2017-10-01

    We present the novel Analysis Workflow Management (AWM) that provides users with the tools and competences of professional large-scale workflow systems, e.g. Apache's Airavata[1]. The approach presents a paradigm shift from executing parts of the analysis to defining the analysis. Within AWM an analysis consists of steps: for example, a step may define the run of a certain executable over the files of an input data collection. Each call to the executable for one of those input files can be submitted to the desired run location, which could be the local computer or a remote batch system. An integrated software manager enables automated user installation of dependencies in the working directory at the run location. Each execution of a step item creates one report for bookkeeping purposes containing error codes and output data or file references. Required files, e.g. those created by previous steps, are retrieved automatically. Since data storage and run locations are exchangeable from a step's perspective, computing resources can be used opportunistically. A visualization of the workflow as a graph of the steps in the web browser provides a high-level view of the analysis. The workflow system is developed and tested alongside a ttbb cross-section measurement where, for instance, the event selection is represented by one step and a Bayesian statistical inference is performed by another. The clear interface and dependencies between steps enable a make-like execution of the whole analysis.
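
    The make-like, dependency-driven execution described above can be illustrated with a toy scheduler. The following is a minimal sketch assuming a simplified Step structure and hypothetical step and file names; it is not AWM's actual API:

        from collections import deque

        # Toy make-like workflow runner illustrating dependency-driven execution.
        class Step:
            def __init__(self, name, requires, produces, action):
                self.name = name                  # human-readable identifier
                self.requires = set(requires)     # artifacts this step consumes
                self.produces = set(produces)     # artifacts this step creates
                self.action = action              # callable performing the work

        def run_workflow(steps, available):
            """Run steps whose inputs are available until all are done."""
            done, pending = set(), deque(steps)
            while pending:
                step = pending.popleft()
                if step.requires <= available:    # all inputs exist: run it
                    step.action()
                    available |= step.produces    # register its outputs
                    done.add(step.name)
                else:
                    pending.append(step)          # retry later
                    if all(s.requires - available for s in pending):
                        raise RuntimeError("unsatisfiable dependencies")
            return done

        sel = Step("selection", {"events.root"}, {"selected.root"},
                   lambda: print("running event selection"))
        fit = Step("inference", {"selected.root"}, {"posterior.json"},
                   lambda: print("running Bayesian inference"))
        run_workflow([fit, sel], available={"events.root"})  # order-independent

    Because steps declare inputs and outputs rather than an execution order, the runner derives the order itself, which is the essence of the make-like behavior described in the abstract.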

  2. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple-length-scale problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time-periodic solution. To ensure an accurate solution, high-quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous outflow boundary conditions is required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also allow the reflected acoustic waves and entropy waves, if present, to exit the computational domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high-quality axis boundary treatment.
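
    The bookkeeping behind a multiple-size-mesh, multiple-time-step integration can be sketched for first-order upwind advection of a pulse: each mesh block keeps its own CFL-limited time step, the fine block takes two substeps per coarse step, and the blocks exchange boundary values at the synchronization points. This toy illustrates only the substepping, not the DRP discretization itself:

        import numpy as np

        # Toy multiple-size-mesh, multiple-time-step advection: du/dt = -c du/dx.
        c, cfl = 1.0, 0.5
        dx_coarse = 0.02
        dx_fine = dx_coarse / 2.0
        dt_coarse = cfl * dx_coarse / c   # each mesh keeps its own CFL-limited dt
        dt_fine = dt_coarse / 2.0         # fine block: 2 substeps per coarse step

        x_c = np.arange(0.0, 0.5, dx_coarse)    # coarse block on [0, 0.5)
        x_f = np.arange(0.5, 1.0, dx_fine)      # fine block on [0.5, 1)
        u_c = np.exp(-200 * (x_c - 0.25) ** 2)  # Gaussian pulse moving right
        u_f = np.zeros_like(x_f)

        def upwind(u, ghost, dx, dt):
            """First-order upwind update with a single left ghost value."""
            ue = np.concatenate(([ghost], u))
            return u - c * dt / dx * (ue[1:] - ue[:-1])

        for _ in range(40):                      # one iteration = one coarse step
            left_of_fine = u_c[-1]               # interface value, frozen during substeps
            u_c = upwind(u_c, u_f[-1], dx_coarse, dt_coarse)   # periodic wrap
            for _ in range(2):                   # fine block substeps twice
                u_f = upwind(u_f, left_of_fine, dx_fine, dt_fine)

    Freezing the interface value over the two substeps is a first-order simplification; the actual scheme uses a carefully matched interface treatment to preserve its dispersion properties.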

  3. Method for network analyzation and apparatus

    DOEpatents

    Bracht, Roger B.; Pasquale, Regina V.

    2001-01-01

    A portable network analyzer and method with multiple-channel transmit and receive capability for real-time monitoring of processes; the analyzer maintains phase integrity, requires low power, provides full vector analysis, produces output frequencies of up to 62.5 MHz, and offers fine frequency resolution. The invention includes multi-channel means for transmitting and multi-channel means for receiving, both in electrical communication with software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation that steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time-domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
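
    The last step in the claim, recovering the complete time-domain response from the magnitude and phase acquired at each frequency step, is essentially an inverse discrete Fourier transform. A minimal sketch, with a simulated two-echo system standing in for the device under test (the step count and frequency spacing are assumptions for illustration):

        import numpy as np

        # Stepped-frequency measurement -> time-domain response via inverse FFT.
        n_steps = 256
        df = 62.5e6 / n_steps                  # frequency step size (assumed)
        f = np.arange(n_steps) * df            # consecutive stepped frequencies

        # Complex response H(f) of a system with echoes at 100 ns and 240 ns.
        H = 1.0 * np.exp(-2j * np.pi * f * 100e-9) \
          + 0.4 * np.exp(-2j * np.pi * f * 240e-9)

        h = np.fft.ifft(H)                     # impulse response estimate
        t = np.arange(n_steps) / (n_steps * df)
        print(t[np.argmax(np.abs(h))])         # ~1e-7 s: strongest echo at 100 ns

    The time resolution is 1/(n_steps * df), so finer frequency steps over the same span lengthen the unambiguous time window, which is why fine frequency resolution matters for this measurement.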

  4. Single-step methods for predicting orbital motion considering its periodic components

    NASA Astrophysics Data System (ADS)

    Lavrov, K. N.

    1989-01-01

    Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart can be combined with multiple-step computational schemes that use a priori information on periodic components, yielding implicit single-sequence algorithms that retain the advantages of both. The construction and analysis of the properties of such algorithms are studied, using trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms but are twice as fast; they yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms based on other methods.

  5. Engineering metabolic pathways in plants by multigene transformation.

    PubMed

    Zorrilla-López, Uxue; Masip, Gemma; Arjó, Gemma; Bai, Chao; Banakar, Raviraj; Bassie, Ludovic; Berman, Judit; Farré, Gemma; Miralpeix, Bruna; Pérez-Massot, Eduard; Sabalza, Maite; Sanahuja, Georgina; Vamvaka, Evangelia; Twyman, Richard M; Christou, Paul; Zhu, Changfu; Capell, Teresa

    2013-01-01

    Metabolic engineering in plants can be used to increase the abundance of specific valuable metabolites, but single-point interventions generally do not improve the yields of target metabolites unless that product is immediately downstream of the intervention point and there is a plentiful supply of precursors. In many cases, an intervention is necessary at an early bottleneck, sometimes the first committed step in the pathway, but is often only successful in shifting the bottleneck downstream, sometimes also causing the accumulation of an undesirable metabolic intermediate. Occasionally it has been possible to induce multiple genes in a pathway by controlling the expression of a key regulator, such as a transcription factor, but this strategy is only possible if such master regulators exist and can be identified. A more robust approach is the simultaneous expression of multiple genes in the pathway, preferably representing every critical enzymatic step, therefore removing all bottlenecks and ensuring completely unrestricted metabolic flux. This approach requires the transfer of multiple enzyme-encoding genes to the recipient plant, which is achieved most efficiently if all genes are transferred at the same time. Here we review the state of the art in multigene transformation as applied to metabolic engineering in plants, highlighting some of the most significant recent advances in the field.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Robert; McConnell, Elizabeth

    Machining methods across many industries generally require multiple operations to machine and process advanced materials, features with micron precision, and complex shapes. The resulting multiple machining platforms can significantly affect manufacturing cycle time and the precision of the final parts, with a resultant increase in cost and energy consumption. Ultrafast lasers represent a transformative and disruptive technology that removes material with micron precision in a single-step manufacturing process. Such precision results from athermal ablation without modification or damage to the remaining material, which is the key differentiator between ultrafast laser technologies and traditional laser technologies or mechanical processes. Athermal ablation without modification or damage to the material eliminates post-processing or multiple manufacturing steps. Combined with the appropriate technology to control the motion of the workpiece, ultrafast lasers are excellent candidates to provide breakthrough machining capability for difficult-to-machine materials. At the project onset in early 2012, the project team recognized that substantial effort was necessary to improve the application of ultrafast laser and precise motion control technologies (for micromachining difficult-to-machine materials) to further the aggregate throughput and yield improvements over conventional machining methods. The project described in this report advanced these leading-edge technologies through the development and verification of two platforms: a hybrid enhanced laser chassis and a multi-application testbed.

  7. A clinical test of stepping and change of direction to identify multiple falling older adults.

    PubMed

    Dite, Wayne; Temple, Viviene A

    2002-11-01

    To establish the reliability and validity of a new clinical test of dynamic standing balance, the Four Square Step Test (FSST), to evaluate its sensitivity, specificity, and predictive value in identifying subjects who fall, and to compare it with 3 established balance and mobility tests. A 3-group comparison performed by using 3 validated tests and 1 new test. A rehabilitation center and university medical school in Australia. Eighty-one community-dwelling adults over the age of 65 years. Subjects were age- and gender-matched to form 3 groups: multiple fallers, nonmultiple fallers, and healthy comparisons. Not applicable. Time to complete the FSST and Timed Up and Go test and the number of steps to complete the Step Test and Functional Reach Test distance. High reliability was found for interrater (n=30, intraclass correlation coefficient [ICC]=.99) and retest reliability (n=20, ICC=.98). Evidence for validity was found through correlation with other existing balance tests. Validity was supported, with the FSST showing significantly better performance scores (P<.01) for each of the healthier and less impaired groups. The FSST also revealed a sensitivity of 85%, a specificity of 88% to 100%, and a positive predictive value of 86%. As a clinical test, the FSST is reliable, valid, easy to score, quick to administer, requires little space, and needs no special equipment. It is unique in that it involves stepping over low objects (2.5cm) and movement in 4 directions. The FSST had higher combined sensitivity and specificity for identifying differences between groups in the selected sample population of older adults than the 3 tests with which it was compared. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  8. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  9. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
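
    A common way to stabilize fluorescence variances is a per-channel asinh transform whose cofactor is tuned until the within-population variances become comparable. The sketch below uses the variance of log variances as a simplified stand-in for the Bartlett-type statistic that flowVS actually optimizes, and assumes population labels are already known:

        import numpy as np

        # Per-channel variance stabilization by tuning an asinh cofactor.
        def stabilize_channel(x, labels, cofactors=np.logspace(0, 4, 50)):
            best_c, best_score = None, np.inf
            for c in cofactors:
                y = np.arcsinh(x / c)
                # within-population variances after the transform
                v = np.array([y[labels == g].var() for g in np.unique(labels)])
                score = np.log(v).var()    # 0 when all variances are equal
                if score < best_score:
                    best_c, best_score = c, score
            return best_c, np.arcsinh(x / best_c)

    Once each channel has its own cofactor, mean expressions from different biological conditions can be compared with standard statistical methods, as the abstract describes.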

  10. Exponential Megapriming PCR (EMP) Cloning—Seamless DNA Insertion into Any Target Plasmid without Sequence Constraints

    PubMed Central

    Ulrich, Alexander; Andersen, Kasper R.; Schwartz, Thomas U.

    2012-01-01

    We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts. PMID:23300917

  11. Exponential megapriming PCR (EMP) cloning--seamless DNA insertion into any target plasmid without sequence constraints.

    PubMed

    Ulrich, Alexander; Andersen, Kasper R; Schwartz, Thomas U

    2012-01-01

    We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts.

  12. Multi-scale connectivity and graph theory highlight critical areas for conservation under climate change

    USGS Publications Warehouse

    Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.

    2016-01-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.
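
    The least-cost path component of such a framework can be sketched on a raster of movement resistance; the resistance values and graph construction below are purely illustrative:

        import networkx as nx
        import numpy as np

        # Least-cost corridor across a resistance raster (illustrative values).
        rng = np.random.default_rng(0)
        resist = rng.uniform(1.0, 10.0, size=(50, 50))  # cost to traverse each cell

        G = nx.grid_2d_graph(*resist.shape)             # 4-connected grid graph
        for (a, b) in G.edges():
            # edge cost = mean resistance of the two cells it connects
            G[a][b]["weight"] = 0.5 * (resist[a] + resist[b])

        corridor = nx.shortest_path(G, (0, 0), (49, 49), weight="weight")

    Running the same analysis on resistance rasters derived at different spatial scales, and under different development or climate scenarios, is the multi-scale comparison the abstract describes.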

  13. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters and, thanks to the flexibility offered by such engines, typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and requires an extensive investigation of the domain in order to locate the largest number of candidate optimal solutions and possibly the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, using a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be assessed quickly and with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: the evolutionary search locates a large number of optima, and eventually the global one, while the deterministic step is a branching process that exhaustively partitions the domain to characterize this complex space of solutions extensively. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of the mixed-integer nonlinear programming problems (MINLP) typical of multiple-swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the effective capability of the proposed tool.
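
    The inverse-method idea, recovering the required thrust acceleration algebraically from a prescribed trajectory instead of propagating the dynamics, can be sketched for the planar two-body problem. The spiral shape and constants below are illustrative assumptions, not the parameterizations used in the paper:

        import numpy as np

        # Inverse-method sketch: prescribe a planar spiral r(t), then recover
        # the thrust acceleration algebraically from the dynamics:
        #   a_thrust = r'' + mu * r / |r|^3   (no numerical propagation).
        mu = 1.0                              # gravitational parameter (canonical units)
        t = np.linspace(0.0, 20.0, 2001)
        dt = t[1] - t[0]

        theta = 0.9 * t                       # slowly opening spiral (assumed shape)
        rad = 1.0 + 0.02 * t
        r = np.stack([rad * np.cos(theta), rad * np.sin(theta)], axis=1)

        rddot = np.gradient(np.gradient(r, dt, axis=0), dt, axis=0)  # numerical r''
        grav = -mu * r / np.linalg.norm(r, axis=1, keepdims=True) ** 3
        a_thrust = rddot - grav               # acceleration the engine must supply

    Because the acceleration follows directly from the prescribed shape, thousands of candidate arcs can be screened cheaply inside the global optimizer before any trajectory is integrated.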

  14. Overlapping MALDI-Mass Spectrometry Imaging for In-Parallel MS and MS/MS Data Acquisition without Sacrificing Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Hansen, Rebecca L.; Lee, Young Jin

    2017-09-01

    Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.

  15. Effect of film-based versus filmless operation on the productivity of CT technologists.

    PubMed

    Reiner, B I; Siegel, E L; Hooper, F J; Glasser, D

    1998-05-01

    To determine the relative time required for a technologist to perform a computed tomographic (CT) examination in a "filmless" versus a film-based environment. Time-motion studies were performed in 204 consecutive CT examinations. Images from 96 examinations were electronically transferred to a picture archiving and communication system (PACS) without being printed to film, and 108 were printed to film. The time required to obtain and electronically transfer the images or print the images to film and make the current and previous studies available to the radiologists for interpretation was recorded. The time required for a technologist to complete a CT examination was reduced by 45% with direct image transfer to the PACS compared with the time required in the film-based mode. This reduction was due to the elimination of a number of steps in the filming process, such as printing at multiple window or level settings. The use of a PACS can eliminate multiple time-intensive tasks for the CT technologist, resulting in a marked reduction in examination time. This reduction can result in increased productivity and, hence, greater cost-effectiveness with filmless operation.

  16. GATA-3 is required for early T lineage progenitor development

    PubMed Central

    Hosoya, Tomonori; Kuroha, Takashi; Moriguchi, Takashi; Cummings, Dustin; Maillard, Ivan; Lim, Kim-Chew

    2009-01-01

    Most T lymphocytes appear to arise from very rare early T lineage progenitors (ETPs) in the thymus, but the transcriptional programs that specify ETP generation are not completely known. The transcription factor GATA-3 is required for the development of T lymphocytes at multiple late differentiation steps as well as for the development of thymic natural killer cells. However, a role for GATA-3 before the double-negative (DN) 3 stage of T cell development has to date been obscured both by the developmental heterogeneity of DN1 thymocytes and the paucity of ETPs. We provide multiple lines of in vivo evidence through the analysis of T cell development in Gata3 hypomorphic mutant embryos, in irradiated mice reconstituted with Gata3 mutant hematopoietic cells, and in mice conditionally ablated for the Gata3 gene to show that GATA-3 is required for ETP generation. We further show that Gata3 loss does not affect hematopoietic stem cells or multipotent hematopoietic progenitors. Finally, we demonstrate that Gata3 mutant lymphoid progenitors exhibit neither increased apoptosis nor diminished cell-cycle progression. Thus, GATA-3 is required for the cell-autonomous development of the earliest characterized thymic T cell progenitors. PMID:19934022

  17. Meningiomatosis restricted to the left cerebral hemisphere with acute clinical deterioration: Case presentation and discussion of treatment options.

    PubMed

    Ohla, Victoria; Scheiwe, Christian

    2015-01-01

    True multiple meningiomas are defined as meningiomas occurring at several intracranial locations simultaneously without the presence of neurofibromatosis. Though the prognosis does not differ from benign solitary meningiomas, the simultaneous occurrence of different grades of malignancy has been reported in one-third of patients with multiple meningiomas. Due to its rarity, unclear etiology, and questions related to proper management, we present our case of meningiomatosis and discuss possible pathophysiological mechanisms. We illustrate the case of a 55-year-old female with multiple meningothelial meningiomas exclusively located in the left cerebral hemisphere. The patient presented with acute vigilance decrement, aphasia, and vomiting. Further deterioration with sopor and nondirectional movements required oral intubation. Emergent magnetic resonance imaging (MRI) with MR angiography disclosed a massive midline shift to the right due to widespread, plaque-like lesions suspicious for meningiomatosis, purely restricted to the left cerebral hemisphere. Emergency partial tumor resection was performed. Postoperative computed tomography (CT) showed marked reduction of the cerebral edema and midline shift. After tapering of sedation, a right-sided hemiparesis resolved within 2 weeks, leaving the patient neurologically intact. Although multiple meningiomas are reported frequently, meningiomatosis purely restricted to one cerebral hemisphere is very rare. As with other accessible and symptomatic lesions, the treatment of choice is complete resection with clean margins to avoid local recurrence. In case of widespread distribution, a step-by-step resection with the option of postoperative radiation of tumor remnants may be an option.

  18. A graph-based approach for designing extensible pipelines

    PubMed Central

    2012-01-01

    Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems when a pipeline is expanding because the incorporation of new tools is often error prone and time consuming. Current approaches to pipeline development, such as workflow management systems, focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism in pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where the use of multiple software tools is necessary to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
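
    The core idea, tools as graph edges between format nodes and pipelines as paths composed on demand, can be sketched in a few lines. Tool and format names below are illustrative, not the project's actual registry:

        from collections import deque

        # Format-conversion tools as edges of a directed graph whose nodes are
        # file formats; a pipeline is a path, composed on demand by BFS.
        TOOLS = {
            ("vcf", "ped"): "vcf2ped",
            ("ped", "bed"): "ped2bed",
            ("vcf", "csv"): "vcf2csv",
        }

        def compose_pipeline(src, dst):
            """Return the shortest tool chain converting src format to dst."""
            queue, seen = deque([(src, [])]), {src}
            while queue:
                fmt, chain = queue.popleft()
                if fmt == dst:
                    return chain
                for (a, b), tool in TOOLS.items():
                    if a == fmt and b not in seen:
                        seen.add(b)
                        queue.append((b, chain + [tool]))
            return None

        print(compose_pipeline("vcf", "bed"))   # ['vcf2ped', 'ped2bed']

    Adding a new tool is then a one-line registry entry, and every conversion reachable through it becomes available automatically, which is the extensibility property the abstract emphasizes.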

  19. What is a wiki, and how can it be used in resident education?

    PubMed

    Kohli, Marc D; Bradshaw, John K

    2011-02-01

    Training as a radiology resident is a complex task. Residents frequently encounter multiple hospital systems, each with unique workflow patterns and heterogeneous information systems. We identified an opportunity to ease some of the resulting anxiety and frustration by centralizing high-quality resources using a wiki. In this manuscript, we describe our choice of wiki software, give basic information about hardware requirements, detail steps for configuration, outline the information included on the wiki, and present the results of a resident acceptance survey.

  20. Rapid construction of capsid-modified adenoviral vectors through bacteriophage lambda Red recombination.

    PubMed

    Campos, Samuel K; Barry, Michael A

    2004-11-01

    There are extensive efforts to develop cell-targeting adenoviral vectors for gene therapy wherein endogenous cell-binding ligands are ablated and exogenous ligands are introduced by genetic means. Although current approaches can genetically manipulate the capsid genes of adenoviral vectors, these approaches can be time-consuming and require multiple steps to produce a modified viral genome. We present here the use of the bacteriophage lambda Red recombination system as a valuable tool for the easy and rapid construction of capsid-modified adenoviral genomes.

  1. A Multiple Time-Step Finite State Projection Algorithm for the Solution to the Chemical Master Equation

    DTIC Science & Technology

    2006-11-30

    except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC) ... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers ... subintervals [9–14]. Both approximation types, system partitioning and τ leaping, have been very successful in increasing the scope of problems to which KMC
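
    The KMC routine referred to here is, in the chemical kinetics setting, Gillespie's stochastic simulation algorithm. A minimal sketch for a birth-death process shows why halving the Monte Carlo error requires four times the runs: the error of the sample mean shrinks only as 1/sqrt(N).

        import numpy as np

        # Gillespie SSA for a birth-death process: 0 -> X (rate k), X -> 0 (rate g*X).
        def ssa(k=10.0, g=1.0, x0=0, T=10.0, rng=np.random.default_rng()):
            t, x = 0.0, x0
            while True:
                rates = np.array([k, g * x])          # propensities of both reactions
                total = rates.sum()
                t += rng.exponential(1.0 / total)     # time to next reaction
                if t > T:
                    return x
                x += 1 if rng.uniform() * total < rates[0] else -1

        samples = [ssa() for _ in range(2000)]
        print(np.mean(samples))                       # ~ k/g = 10 at stationarity

    Approximations such as τ leaping trade this one-reaction-at-a-time exactness for larger time steps, which is the scope extension the snippet alludes to.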

  2. The visual display of regulatory information and networks.

    PubMed

    Pirson, I; Fortemaison, N; Jacobs, C; Dremier, S; Dumont, J E; Maenhaut, C

    2000-10-01

    Cell regulation and signal transduction are becoming increasingly complex, with reports of new cross-signalling, feedback, and feedforward regulations between pathways and between the multiple isozymes discovered at each step of these pathways. However, this information, which requires pages of text for its description, can be summarized in very simple schemes, although there is no consensus on the drawing of such schemes. This article presents a simple set of rules that allows a lot of information to be inserted in easily understandable displays.

  3. A Study on Segmented Multiple-Step Forming of Doubly Curved Thick Plate by Reconfigurable Multi-Punch Dies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Young Ho; Han, Myoung Soo; Han, Jong Man

    2007-05-17

    Doubly curved thick plate forming in the shipbuilding industry is currently performed by a thermal forming process called Line Heating, using gas flame torches. Because this is empirical manual work, the industry is eager for an alternative way to manufacture curved thick plates for ships. This study envisaged manufacturing doubly curved thick plates by multi-punch die forming. Experiments and finite element analyses were conducted to evaluate the feasibility of reconfigurable discrete die forming for thick plates. Single and segmented multiple-step forming procedures were considered from the standpoints of both forming efficiency and accuracy. The configuration of multi-punch dies suitable for segmented multiple-step forming was also explored. As a result, segmented multiple-step forming with matched dies showed limited formability when the objective shapes became complicated, while an unmatched die configuration provided a better possibility of manufacturing large curved plates for ships.

  4. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately for wet days. This process reproduced precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
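
    The two-step technique can be sketched with off-the-shelf regression models: a logistic model for wet/dry occurrence, then a linear model for (log) amounts fitted on wet days only. The feature set and synthetic data below are assumptions for illustration:

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        # Two-step daily precipitation estimation (illustrative feature set):
        # step 1 models wet/dry occurrence, step 2 models amount on wet days only.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 3))          # e.g. elevation, lon, lat at gauges
        wet = rng.uniform(size=1000) < 0.4      # synthetic occurrence labels
        amount = np.where(wet, np.exp(X[:, 0] + rng.normal(0, 0.3, 1000)), 0.0)

        occ = LogisticRegression().fit(X, wet)
        amt = LinearRegression().fit(X[wet], np.log(amount[wet]))  # log amounts

        def estimate(X_new, p_wet=0.5):
            """Zero where P(wet) < p_wet, else back-transformed amount."""
            is_wet = occ.predict_proba(X_new)[:, 1] >= p_wet
            return np.where(is_wet, np.exp(amt.predict(X_new)), 0.0)

    Separating occurrence from amount prevents the many dry gauges from dragging wet-day estimates toward zero, which is the main failure mode of single-step regression on daily precipitation.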

  5. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    NASA Astrophysics Data System (ADS)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly toward the first guess, with the relaxation shape obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms are analyzed at hourly steps over 10,663 km². Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
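
    The successive-correction core of an OAS can be sketched as iterative blending of gauge residuals into a background field with a Gaussian kernel of decreasing radius. Grid, gauge locations, and radii below are illustrative, and the CMA bias decomposition itself is omitted:

        import numpy as np

        # Successive-correction objective analysis: start from a background
        # field (e.g. radar) and iteratively blend in gauge residuals.
        ny, nx = 50, 50
        yy, xx = np.mgrid[0:ny, 0:nx]
        background = np.full((ny, nx), 2.0)            # first-guess rain field (mm)

        gx = np.array([10, 25, 40]); gy = np.array([12, 30, 8])
        gobs = np.array([5.0, 1.0, 3.5])               # gauge observations (mm)

        analysis = background.copy()
        for radius in (20.0, 10.0, 5.0):               # coarse-to-fine passes
            resid = gobs - analysis[gy, gx]            # residuals at gauge points
            w = np.exp(-((xx[..., None] - gx) ** 2 +
                         (yy[..., None] - gy) ** 2) / (2 * radius ** 2))
            analysis += (w * resid).sum(-1) / (w.sum(-1) + 1e-12)

    Far from any gauge the weights vanish and the analysis relaxes back to the radar first guess, which is the nudging behavior the abstract describes.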

  6. Lower limb muscle moments and power during recovery from forward loss of balance in male and female single and multiple steppers.

    PubMed

    Carty, Christopher P; Cronin, Neil J; Lichtwark, Glen A; Mills, Peter M; Barrett, Rod S

    2012-12-01

    Studying recovery responses to loss of balance may help to explain why older adults are susceptible to falls. The purpose of the present study was to assess whether male and female older adults who use a single or a multiple step recovery strategy differ in the proportion of lower limb strength used and the power produced during the stepping phase of balance recovery. Eighty-four community-dwelling older adults (47 men, 37 women) participated in the study. Isometric strength of the ankle, knee and hip joint flexors and extensors was assessed using a dynamometer. Loss of balance was induced by releasing participants from a static forward lean (4 trials at each of 3 forward lean angles). Participants were instructed to recover with a single step and were subsequently classified as using a single or multiple step recovery strategy for each trial. (1) Females were weaker than males, and the proportion of females able to recover with a single step was lower than that of males at each lean magnitude. (2) Multiple steppers, compared with single steppers, used a significantly higher proportion of their hip extension strength and produced less knee and ankle joint peak power during stepping at the intermediate lean angle. Strength deficits in female compared with male participants may explain why a lower proportion of female participants were able to recover with a single step. The inability to generate sufficient power in the stepping limb appears to be a limiting factor in single step recovery from forward loss of balance. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  7. One-step formation of multiple Pickering emulsions stabilized by self-assembled poly(dodecyl acrylate-co-acrylic acid) nanoparticles.

    PubMed

    Zhu, Ye; Sun, Jianhua; Yi, Chenglin; Wei, Wei; Liu, Xiaoya

    2016-09-13

    In this study, a one-step generation of stable multiple Pickering emulsions using pH-responsive polymeric nanoparticles as the only emulsifier was reported. The polymeric nanoparticles were self-assembled from an amphiphilic random copolymer poly(dodecyl acrylate-co-acrylic acid) (PDAA), and the effect of the copolymer content on the size and morphology of PDAA nanoparticles was determined by dynamic light scattering (DLS) and transmission electron microscopy (TEM). The emulsification study of PDAA nanoparticles revealed that multiple Pickering emulsions could be generated through a one-step phase inversion process by using PDAA nanoparticles as the stabilizer. Moreover, the emulsification performance of PDAA nanoparticles at different pH values demonstrated that multiple emulsions with long-time stability could only be stabilized by PDAA nanoparticles at pH 5.5, indicating that the surface wettability of PDAA nanoparticles plays a crucial role in determining the type and stability of the prepared Pickering emulsions. Additionally, the polarity of oil does not affect the emulsification performance of PDAA nanoparticles, and a wide range of oils could be used as the oil phase to prepare multiple emulsions. These results demonstrated that multiple Pickering emulsions could be generated via the one-step emulsification process using self-assembled polymeric nanoparticles as the stabilizer, and the prepared multiple emulsions have promising potential to be applied in the cosmetic, medical, and food industries.

  8. Better dual-task processing in simultaneous interpreters

    PubMed Central

    Strobach, Tilo; Becker, Maxi; Schubert, Torsten; Kühn, Simone

    2015-01-01

    Simultaneous interpreting (SI) is a highly complex activity and requires the performance and coordination of multiple, simultaneous tasks: analysis and understanding of the discourse in a first language, reformulating linguistic material, storing of intermediate processing steps, and language production in a second language, among others. It is, however, an open issue whether persons with experience in SI possess superior skills in the coordination of multiple tasks and whether they are able to transfer these skills to lab-based dual-task situations. Within the present study, we set out to explore whether interpreting experience is associated with related higher-order executive functioning in the context of dual-task situations of the Psychological Refractory Period (PRP) type. In this PRP situation, we found faster reaction times in participants with experience in simultaneous interpretation in contrast to control participants without such experience. Thus, simultaneous interpreters possess superior skills in the coordination of multiple tasks in lab-based dual-task situations. PMID:26528232

  9. Trehalose glycopolymer resists allow direct writing of protein patterns by electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Bat, Erhan; Lee, Juneyoung; Lau, Uland Y.; Maynard, Heather D.

    2015-03-01

    Direct-write patterning of multiple proteins on surfaces is of tremendous interest for a myriad of applications. Precise arrangement of different proteins at increasingly smaller dimensions is a fundamental challenge to apply the materials in tissue engineering, diagnostics, proteomics and biosensors. Herein, we present a new resist that protects proteins during electron-beam exposure and its application in direct-write patterning of multiple proteins. Polymers with pendant trehalose units are shown to effectively crosslink to surfaces as negative resists, while at the same time providing stabilization to proteins during the vacuum and electron-beam irradiation steps. In this manner, arbitrary patterns of several different classes of proteins such as enzymes, growth factors and immunoglobulins are realized. Utilizing the high-precision alignment capability of electron-beam lithography, surfaces with complex patterns of multiple proteins are successfully generated at the micrometre and nanometre scale without requiring cleanroom conditions.

  10. Sparse matrix-vector multiplication on network-on-chip

    NASA Astrophysics Data System (ADS)

    Sun, C.-C.; Götze, J.; Jheng, H.-Y.; Ruan, S.-J.

    2010-12-01

    In this paper, we present an idea for performing matrix-vector multiplication using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections; therefore, regular local data transfer is the major concept of many parallel implementations. However, in the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of the data transfers, i.e. with the irregular structure of the sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in the sizes 4×4 and 5×5 with IEEE 754 single-precision floating point using an FPGA.
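
    The kernel being mapped onto the NoC is ordinary compressed-sparse-row (CSR) matrix-vector multiplication; a plain reference implementation makes the irregular, matrix-dependent access pattern explicit:

        import numpy as np

        # Reference CSR sparse matrix-vector multiplication.
        def csr_matvec(indptr, indices, data, x):
            y = np.zeros(len(indptr) - 1)
            for i in range(len(y)):                    # one row per iteration
                for k in range(indptr[i], indptr[i + 1]):
                    y[i] += data[k] * x[indices[k]]    # irregular access into x
            return y

        # 3x3 example:  [[2,0,1],[0,3,0],[4,0,5]] @ [1,1,1] = [3,3,9]
        indptr  = np.array([0, 2, 3, 5])
        indices = np.array([0, 2, 1, 0, 2])
        data    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
        print(csr_matvec(indptr, indices, data, np.ones(3)))   # [3. 3. 9.]

    The gather x[indices[k]] is the irregular communication that point-to-point interconnects handle poorly and that a packet-routed NoC can absorb.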

  11. Realization of quantum gates with multiple control qubits or multiple target qubits in a cavity

    NASA Astrophysics Data System (ADS)

    Waseem, Muhammad; Irfan, Muhammad; Qamar, Shahid

    2015-06-01

    We propose a scheme to realize a three-qubit controlled phase gate and a multi-qubit controlled NOT gate of one qubit simultaneously controlling n target qubits with a four-level quantum system in a cavity. The implementation time for the multi-qubit controlled NOT gate is independent of the number of qubits. The three-qubit phase gate is generalized to an n-qubit phase gate with multiple control qubits, and the number of steps reduces linearly compared with the conventional gate decomposition method. Our scheme can be applied to various types of physical systems, such as superconducting qubits coupled to a resonator and trapped atoms in a cavity, and does not require adjustment of level spacing during the gate implementation. We also show the implementation of the Deutsch-Jozsa algorithm. Finally, we discuss the imperfections due to cavity decay and the possibility of physical implementation of our scheme.

  12. Multiple-Beam Detection of Fast Transient Radio Sources

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.

    2011-01-01

    A method has been designed for using multiple independent stations to discriminate fast transient radio sources from local anomalies, such as antenna noise or radio frequency interference (RFI). This can improve the sensitivity of incoherent detection for geographically separated stations such as the very long baseline array (VLBA), the future square kilometer array (SKA), or any other coincident observations by multiple separated receivers. The transients are short, broadband pulses of radio energy, often just a few milliseconds long, emitted by a variety of exotic astronomical phenomena. They generally represent rare, high-energy events, making them of great scientific value. For RFI-robust adaptive detection of transients using multiple stations, a family of algorithms has been developed. The technique exploits the fact that the separated stations constitute statistically independent samples of the target. This can be used to adaptively ignore RFI events for superior sensitivity. If the antenna signals are independent and identically distributed (IID), then RFI events are simply outlier data points that can be removed through robust estimation, such as a trimmed or Winsorized estimator. The alternative "trimmed" estimator excises the strongest n signals from the list of short-beamed intensities. Because local RFI is independent at each antenna, this interference is unlikely to occur at many antennas on the same time step. Trimming the strongest signals provides robustness to RFI that can theoretically exceed the detection performance of the same number of antennas at a single site. This algorithm requires sorting the signals at each time step and dispersion measure, an operation that is computationally tractable for existing array sizes. An alternative uses the various stations to form an ensemble estimate of the conditional density function (CDF) evaluated at each time step. Both methods outperform standard detection strategies on a test sequence of VLBA data, and both are efficient enough for deployment in real-time, online transient detection applications.
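
    The trimmed detection statistic can be sketched in a few lines: at each time step, sort the per-station powers, excise the strongest n (likely local RFI), and sum the remainder. The station count, trim depth, and synthetic data below are assumptions for illustration:

        import numpy as np

        # Trimmed incoherent detection across stations.
        rng = np.random.default_rng(2)
        n_sta, n_t, n_trim = 8, 1000, 2
        power = rng.chisquare(2, size=(n_sta, n_t))     # noise at 8 stations

        power[3, 500] += 50.0                           # RFI: one station only
        power[:, 700] += 3.0                            # real pulse: all stations

        srt = np.sort(power, axis=0)                    # per-time-step sort
        stat = srt[:-n_trim].sum(axis=0)                # trimmed sum statistic
        thresh = stat.mean() + 5 * stat.std()
        print(np.nonzero(stat > thresh)[0])             # flags ~700, not 500

    Because the single-station RFI spike is always among the trimmed maxima, it never enters the statistic, while a genuine pulse raises every station and survives the trim.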

  13. Impact of favorite stimuli automatically delivered on step responses of persons with multiple disabilities during their use of walker devices.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Campodonico, Francesca; Piazzolla, Giorgia; Scalini, Lorenza; Oliva, Doretta

    2005-01-01

    Favorite stimuli were automatically delivered contingent on the performance of steps by two persons (a boy and a woman) with multiple disabilities during their use of support walker devices. The study lasted about 4 months and was carried out according to a multiple baseline design across participants. Recording concerned the participants' frequencies of steps and their indices of happiness during baseline and intervention sessions. Data showed that both participants had a significant increase in each of these two measures during the intervention phase. Implications of the findings and new research issues are discussed.

  14. Fluidics cube for biosensor miniaturization

    NASA Technical Reports Server (NTRS)

    Dodson, J. M.; Feldstein, M. J.; Leatzow, D. M.; Flack, L. K.; Golden, J. P.; Ligler, F. S.

    2001-01-01

    To create a small, portable, fully automated biosensor, a compact means of fluid handling is required. We designed, manufactured, and tested a "fluidics cube" for such a purpose. This cube, made of thermoplastic, contains reservoirs and channels for liquid samples and reagents and operates without the use of any internal valves or meters; it is a passive fluid circuit that relies on pressure relief vents to control fluid movement. We demonstrate the ability of pressure relief vents to control fluid movement and show how to simply manufacture or modify the cube. Combined with the planar array biosensor developed at the Naval Research Laboratory, it brings us one step closer to realizing our goal of a handheld biosensor capable of analyzing multiple samples for multiple analytes.

  15. Making three-dimensional echocardiography more tangible: a workflow for three-dimensional printing with echocardiographic data.

    PubMed

    Mashari, Azad; Montealegre-Gallegos, Mario; Knio, Ziyad; Yeh, Lu; Jeganathan, Jelliffe; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze

    2016-12-01

    Three-dimensional (3D) printing is a rapidly evolving technology with several potential applications in the diagnosis and management of cardiac disease. Recently, 3D printing (i.e. rapid prototyping) derived from 3D transesophageal echocardiography (TEE) has become possible. Due to the multiple steps involved and the specific equipment required for each step, it might be difficult to start implementing echocardiography-derived 3D printing in a clinical setting. In this review, we provide an overview of this process, including its logistics and organization of tools and materials, 3D TEE image acquisition strategies, data export, format conversion, segmentation, and printing. Generation of patient-specific models of cardiac anatomy from echocardiographic data is a feasible, practical application of 3D printing technology. © 2016 The authors.

  16. A simple and novel method for RNA-seq library preparation of single cell cDNA analysis by hyperactive Tn5 transposase.

    PubMed

    Brouilette, Scott; Kuersten, Scott; Mein, Charles; Bozek, Monika; Terry, Anna; Dias, Kerith-Rae; Bhaw-Rosun, Leena; Shintani, Yasunori; Coppen, Steven; Ikebe, Chiho; Sawhney, Vinit; Campbell, Niall; Kaneko, Masahiro; Tano, Nobuko; Ishida, Hidekazu; Suzuki, Ken; Yashiro, Kenta

    2012-10-01

    Deep sequencing of single cell-derived cDNAs offers novel insights into oncogenesis and embryogenesis. However, traditional library preparation for RNA-seq analysis requires multiple steps, with consequent sample loss and stochastic variation at each step significantly affecting output. Thus, a simpler and better protocol is desirable. The recently developed hyperactive Tn5-mediated library preparation, which yields high-quality libraries, is likely one of the solutions. Here, we tested the applicability of hyperactive Tn5-mediated library preparation to deep sequencing of single-cell cDNA, optimized the protocol, and compared it with the conventional method based on sonication. This new technique does not require any expensive or special equipment, which ensures wider availability. A library was constructed from only 100 ng of cDNA, which enables the saving of precious specimens. Only a few steps of robust enzymatic reaction saved time, enabling more specimens to be prepared at once and with a more reproducible size distribution among the different specimens. The RNA-seq results obtained were comparable to the conventional method. Thus, this Tn5-mediated preparation is applicable for anyone who aims to carry out deep sequencing of single-cell cDNAs. Copyright © 2012 Wiley Periodicals, Inc.

  17. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    PubMed

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, aged 21-74 years) performed the CSRT, sensorimotor, balance, and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI 1.27-2.26, p < 0.001). In regression analysis, CSRT performance was best explained by sway, time to complete the 9-Hole Peg Test, knee extension strength of the weaker leg, proprioception, and time to complete the Trails B test (multiple R² = 0.449, p < 0.001). Conclusions A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and the effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and the effects of interventions.

  18. RNA helicase MOV10 functions as a co-factor of HIV-1 Rev to facilitate Rev/RRE-dependent nuclear export of viral mRNAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Feng; Zhang, Junsong; Zhang, Yijun

    Human immunodeficiency virus type 1 (HIV-1) exploits multiple host factors during its replication. The Rev/RRE-dependent nuclear export of unspliced/partially spliced viral transcripts needs the assistance of host proteins. Recent studies have shown that MOV10 overexpression inhibited HIV-1 replication at various steps, yet endogenous MOV10 was required in certain steps of HIV-1 replication. In this report, we found that MOV10 potently enhances the nuclear export of viral mRNAs and subsequently increases the expression of Gag protein and other late products through affecting the Rev/RRE axis. Co-immunoprecipitation analysis indicated that MOV10 interacts with Rev in an RNA-independent manner. The DEAG-box of MOV10 was required for the enhancement of Rev/RRE-dependent nuclear export, and the DEAG-box mutant showed dominant-negative activity. Our data propose that HIV-1 utilizes the anti-viral factor MOV10 as a co-factor of Rev and demonstrate the complicated effects of MOV10 on the HIV-1 life cycle. Highlights: • MOV10 can function as a co-factor of HIV-1 Rev. • MOV10 facilitates Rev/RRE-dependent transport of viral mRNAs. • MOV10 interacts with Rev in an RNA-independent manner. • The DEAG-box of MOV10 is required for the enhancement of Rev/RRE-dependent export.

  19. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
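
    The underlying operation, dead-zone quantization of wavelet coefficients with a resolution-dependent step size, can be sketched as follows. The step-size values are placeholders, since the visually lossless thresholds are determined experimentally in the paper:

        import numpy as np

        # Dead-zone quantization with a resolution-dependent step size.
        # Step sizes below are illustrative placeholders only.
        STEP = {0: 0.5, 1: 1.1, 2: 2.4}        # finer resolution -> smaller step

        def quantize(coeffs, res_level):
            d = STEP[res_level]
            return np.sign(coeffs) * np.floor(np.abs(coeffs) / d)   # dead-zone

        def dequantize(q, res_level, r=0.5):
            d = STEP[res_level]
            return np.sign(q) * (np.abs(q) + r) * d * (q != 0)

    Embedding the coefficient data so that each display resolution can stop decoding at its own (larger) step size is what lets one compliant codestream serve every zoom level with only a fraction of the full-resolution bytes.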

  20. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, used for production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters for each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d) usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  1. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the multiplication of a vector by a matrix is carried out in three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
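
    The abstract does not spell out the three-step decomposition itself, but the surrounding machinery — preconditioned conjugate gradient iteration in which the coefficient matrix is never stored and the matrix-vector product is a routine run over the data — can be sketched as follows; the Jacobi (diagonal) preconditioner and the toy system are illustrative assumptions:

        # Minimal preconditioned conjugate gradient; matvec is a callable so the
        # product can be formed "on data" without building the matrix explicitly.
        import numpy as np

        def pcg(matvec, b, m_inv_diag, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - matvec(x)
            z = m_inv_diag * r                 # apply diagonal preconditioner
            p, rz = z.copy(), r @ z
            for _ in range(max_iter):
                Ap = matvec(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = m_inv_diag * r
                rz, rz_old = r @ z, rz
                p = z + (rz / rz_old) * p
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy stand-in for the MME
        b = np.array([1.0, 2.0])
        print(pcg(lambda v: A @ v, b, 1.0 / np.diag(A)))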

  2. KEEPING A STEP AHEAD - FORMATIVE PHASE OF A WORKPLACE INTERVENTION TRIAL TO PREVENT OBESITY

    PubMed Central

    Zapka, Jane; Lemon, Stephenie C.; Estabrook, Barbara B.; Jolicoeur, Denise G.

    2008-01-01

    Background Ecological interventions hold promise for promoting overweight and obesity prevention in worksites. Given the paucity of evaluative research in the hospital worksite setting, considerable formative work is required for successful implementation and evaluation. Purpose This paper describes the formative phases of Step Ahead, a site-randomized controlled trial of a multi-level intervention that promotes physical activity and healthy eating in 6 hospitals in central Massachusetts. The purpose of the formative research phase was to increase the feasibility, effectiveness and likelihood of sustainability of the intervention. Design and Procedures The Step Ahead ecological intervention approach targets change at the organization, the interpersonal work environment and the individual levels. The intervention was developed using fundamental steps of intervention mapping and important tenets of participatory research. Formative research methods were used to engage leadership support and assistance and to develop an intervention plan that is both theoretically and practically grounded. This report uses observational data, program minutes and reports, and process tracking data. Developmental Strategies and Observations Leadership involvement (key informant interviews and advisory boards), employee focus groups and advisory boards, and quantitative environmental assessments cultivated participation and support. Determining multiple foci of change and designing measurable objectives and generic assessment tools to document progress are complex challenges encountered in planning phases. Lessons Learned Multi-level trials in diverse organizations require flexibility and balance of theory application and practice-based perspectives to achieve impact and outcome objectives. Formative research is an essential component. PMID:18073339

  3. Autoantigen La promotes efficient RNAi, antiviral response, and transposon silencing by facilitating multiple-turnover RISC catalysis.

    PubMed

    Liu, Ying; Tan, Huiling; Tian, Hui; Liang, Chunyang; Chen, She; Liu, Qinghua

    2011-11-04

    The effector of RNA interference (RNAi) is the RNA-induced silencing complex (RISC). C3PO promotes the activation of RISC by degrading the Argonaute2 (Ago2)-nicked passenger strand of duplex siRNA. Active RISC is a multiple-turnover enzyme that uses the guide strand of siRNA to direct the Ago2-mediated sequence-specific cleavage of complementary mRNA. How this effector step of RNAi is regulated is currently unknown. Here, we used the human Ago2 minimal RISC system to purify Sjögren's syndrome antigen B (SSB)/autoantigen La as an activator of the RISC-mediated mRNA cleavage activity. Our reconstitution studies showed that La could promote multiple-turnover RISC catalysis by facilitating the release of cleaved mRNA from RISC. Moreover, we demonstrated that La was required for efficient RNAi, antiviral defense, and transposon silencing in vivo. Taken together, the findings of C3PO and La reveal a general concept that regulatory factors are required to remove Ago2-cleaved products to assemble or restore active RISC. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
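
    A modern equivalent of running both estimators on the same matched data can be sketched with statsmodels, assuming 1:1 matched pairs and one simulated exposure variable (this stands in for, and is not, the original Pascal program):

        # Unconditional vs. conditional ML logistic regression on matched pairs.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.conditional_models import ConditionalLogit

        rng = np.random.default_rng(1)
        n_pairs = 200
        strata = np.repeat(np.arange(n_pairs), 2)     # matched-set labels
        y = np.tile([1, 0], n_pairs)                  # case, control per pair
        exposure = rng.normal(size=2 * n_pairs) + 0.8 * y

        unc = sm.Logit(y, sm.add_constant(exposure)).fit(disp=0)   # ignores matching
        con = ConditionalLogit(y, exposure.reshape(-1, 1), groups=strata).fit()
        print("unconditional OR:", np.exp(unc.params[1]))
        print("conditional OR:  ", np.exp(con.params[0]))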

  5. Efficient Generation of Somatic Cell Nuclear Transfer-Competent Porcine Cells with Mutated Alleles at Multiple Target Loci by Using CRISPR/Cas9 Combined with Targeted Toxin-Based Selection System.

    PubMed

    Sato, Masahiro; Miyoshi, Kazuchika; Nakamura, Shingo; Ohtsuka, Masato; Sakurai, Takayuki; Watanabe, Satoshi; Kawaguchi, Hiroaki; Tanimoto, Akihide

    2017-12-04

    The recent advancement in genome editing such as the CRISPR/Cas9 system has enabled isolation of cells with multiple knocked-out alleles through a one-step transfection. Somatic cell nuclear transfer (SCNT) has been frequently employed as one of the efficient tools for the production of genetically modified (GM) animals. To use GM cells as SCNT donors, efficient isolation of transfectants with mutations at multiple target loci is often required. The methods for the isolation of such GM cells largely rely on a drug selection-based approach using selectable genes; however, it is often difficult to isolate cells with mutations at multiple target loci. In this study, we used a novel approach for the efficient isolation of porcine cells with mutations at a minimum of two target loci by one-step introduction of CRISPR/Cas9-related components. A single guide (sg) RNA targeted to the GGTA1 gene, involved in the synthesis of the cell-surface α-Gal epitope (a known xenogenic antigen), is always a prerequisite. When the transfected cells were reacted with toxin-labeled BS-I-B₄ isolectin for 2 h at 37 °C to eliminate α-Gal epitope-expressing cells, the surviving clones lacked α-Gal epitope expression and were highly expected to exhibit induced mutations at the other target loci. Analysis of these α-Gal epitope-negative surviving cells demonstrated a 100% occurrence of genome editing at the target loci. SCNT using these cells as donors resulted in the production of cloned blastocysts with a genotype similar to that of the donor cells used. Thus, this novel system will be useful for SCNT-mediated acquisition of GM cloned piglets, in which multiple target loci may be mutated.

  6. A time-driven, activity-based costing methodology for determining the costs of red blood cell transfusion in patients with beta thalassaemia major.

    PubMed

    Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M

    2018-04-10

    To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.

  7. 'Scaling-up is a craft not a science': Catalysing scale-up of health innovations in Ethiopia, India and Nigeria.

    PubMed

    Spicer, Neil; Bhattacharya, Dipankar; Dimka, Ritgak; Fanta, Feleke; Mangham-Jefferies, Lindsay; Schellenberg, Joanna; Tamire-Woldemariam, Addis; Walt, Gill; Wickremasinghe, Deepthi

    2014-11-01

    Donors and other development partners commonly introduce innovative practices and technologies to improve health in low and middle income countries. Yet many innovations that are effective in improving health and survival are slow to be translated into policy and implemented at scale. Understanding the factors influencing scale-up is important. We conducted a qualitative study involving 150 semi-structured interviews with government, development partners, civil society organisations and externally funded implementers, professional associations and academic institutions in 2012/13 to explore scale-up of innovative interventions targeting mothers and newborns in Ethiopia, the Indian state of Uttar Pradesh and the six states of northeast Nigeria, which are settings with high burdens of maternal and neonatal mortality. Interviews were analysed using a common analytic framework developed for cross-country comparison and themes were coded using NVivo. We found that programme implementers across the three settings require multiple steps to catalyse scale-up. Advocating for government to adopt and finance health innovations requires: designing scalable innovations; embedding scale-up in programme design and allocating time and resources; building implementer capacity to catalyse scale-up; adopting effective approaches to advocacy; presenting strong evidence to support government decision making; involving government in programme design; invoking policy champions and networks; strengthening harmonisation among external programmes; aligning innovations with health systems and priorities. Other steps include: supporting government to develop policies and programmes and strengthening health systems and staff; promoting community uptake by involving media, community leaders, mobilisation teams and role models. We conclude that scale-up has no magic bullet solution - implementers must embrace multiple activities, and require substantial support from donors and governments in doing so. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Sub-daily Statistical Downscaling of Meteorological Variables Using Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Brooks, Bjørn-Gustaf J.; Thornton, Peter E

    2012-01-01

    A new open source neural network temporal downscaling model is described and tested using CRU-NCEP reanalysis and CCSM3 climate model output. We downscaled multiple meteorological variables in tandem from monthly to sub-daily time steps while also retaining consistent correlations between variables. We found that our feed-forward, error-backpropagation approach produced synthetic 6-hourly meteorology with biases no greater than 0.6% across all variables and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected (original) monthly means exceeded 0.99 for all variables, which indicates that this approach would work well for generating atmospheric forcing data consistent with mass and energy conserved GCM output. Our neural network approach performed well for variables that had correlations to other variables of about 0.3 and better, and its skill was increased by downscaling multiple correlated variables together. Poor replication of precipitation intensity, however, required further post-processing in order to obtain the expected probability distribution. The concurrence of precipitation events with expected changes in subordinate variables (e.g., less incident shortwave radiation during precipitation events) was nearly as consistent in the downscaled data as in the training data, with probabilities that differed by no more than 6%. Our downscaling approach requires training data at the target time step and relies on a weak assumption that climate variability in the extrapolated data is similar to variability in the training data.
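
    A minimal sketch of the idea — a small feed-forward network mapping monthly means plus a sub-daily phase encoding to 6-hourly values — is given below. The network size, features, and synthetic data are assumptions for illustration, not the authors' open-source model:

        # Toy monthly-to-6-hourly downscaling with a feed-forward network.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        n_months, steps = 24, 4 * 30                 # 6-hourly steps per "month"
        monthly = rng.normal(size=(n_months, 3))     # e.g., T, pressure, humidity
        phase = np.tile(np.linspace(0, 2 * np.pi, steps, endpoint=False), n_months)

        X = np.column_stack([np.repeat(monthly, steps, axis=0),
                             np.sin(phase), np.cos(phase)])       # diurnal encoding
        y = (np.repeat(monthly, steps, axis=0)                    # synthetic target:
             + 0.3 * np.sin(phase)[:, None]                       # diurnal cycle
             + 0.05 * rng.normal(size=(n_months * steps, 3)))     # plus noise

        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        net.fit(X, y)                                # all variables in tandem
        print("training R^2:", net.score(X, y))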

  9. Two-port robotic hysterectomy: a novel approach.

    PubMed

    Moawad, Gaby N; Tyan, Paul; Khalil, Elias D Abi

    2018-03-24

    The objective of the study was to demonstrate a novel technique for two-port robotic hysterectomy with a particular focus on the challenging portions of the procedure. The study is designed as a technical video, showing step-by-step a two-port robotic hysterectomy approach (Canadian Task Force classification level III). IRB approval was not required for this study. The benefits of minimally invasive surgery for gynecological pathology have been clearly documented in multiple studies. Patients had fewer medical and surgical complications postoperatively, better cosmesis and quality of life. Most gynecological surgeons require 3-5 ports for the standard gynecological procedure. Even though the minimally invasive multiport system provides an excellent safety profile, multiple incisions are associated with a greater risk for morbidity including infection, pain, and hernia. In the past decade, various new methods have emerged to minimize the number of ports used in gynecological surgery. The interventions employed were a two-port robotic hysterectomy, using a camera port plus one robotic arm, with a focus on salpingectomy and cuff closure. We describe a transvaginal and a transabdominal approach for salpingectomy and a novel method for cuff closure. The transvaginal and transabdominal techniques for salpingectomy for two-port robotic-assisted hysterectomy provide excellent tension and exposure for a safe procedure without the need for an extra port. We also describe a transvaginal technique to place the vaginal cuff on tension during closure. With the necessary set of skills on a carefully chosen patient, two-port robotic-assisted total laparoscopic hysterectomy is a feasible procedure.

  10. Using regression equations built from summary data in the psychological assessment of the individual case: extension to multiple regression.

    PubMed

    Crawford, John R; Garthwaite, Paul H; Denham, Annie K; Chelune, Gordon J

    2012-12-01

    Regression equations have many useful roles in psychological assessment. Moreover, there is a large reservoir of published data that could be used to build regression equations; these equations could then be employed to test a wide variety of hypotheses concerning the functioning of individual cases. This resource is currently underused because (a) not all psychologists are aware that regression equations can be built not only from raw data but also using only basic summary data for a sample, and (b) the computations involved are tedious and prone to error. In an attempt to overcome these barriers, Crawford and Garthwaite (2007) provided methods to build and apply simple linear regression models using summary statistics as data. In the present study, we extend this work to set out the steps required to build multiple regression models from sample summary statistics and the further steps required to compute the associated statistics for drawing inferences concerning an individual case. We also develop, describe, and make available a computer program that implements these methods. Although there are caveats associated with the use of the methods, these need to be balanced against pragmatic considerations and against the alternative of either entirely ignoring a pertinent data set or using it informally to provide a clinical "guesstimate." Upgraded versions of earlier programs for regression in the single case are also provided; these add the point and interval estimates of effect size developed in the present article.
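
    The core computation follows from standard results: standardized weights solve R_xx beta = r_xy, raw-score weights rescale by the SDs, and the intercept comes from the means. A short numpy sketch with made-up summary values (not the article's program):

        # Multiple regression equation built from summary statistics alone.
        import numpy as np

        means_x = np.array([100.0, 50.0])    # predictor means
        sds_x = np.array([15.0, 10.0])       # predictor SDs
        mean_y, sd_y = 30.0, 5.0             # criterion mean and SD
        Rxx = np.array([[1.0, 0.4],          # predictor intercorrelations
                        [0.4, 1.0]])
        rxy = np.array([0.5, 0.3])           # predictor-criterion correlations

        beta = np.linalg.solve(Rxx, rxy)     # standardized weights
        b = beta * sd_y / sds_x              # raw-score weights
        b0 = mean_y - b @ means_x            # intercept

        case = np.array([120.0, 45.0])       # an individual case's scores
        print("predicted y for the case:", b0 + b @ case)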

  11. Method of upgrading oils containing hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline

    DOEpatents

    Baker, Eddie G.; Elliott, Douglas C.

    1993-01-01

    The present invention is a multi-stepped method of converting an oil which is produced by various biomass and coal conversion processes and contains primarily single and multiple ring hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline. The single and multiple ring hydroxyaromatic hydrocarbon compounds in a raw oil material are first deoxygenated to produce a deoxygenated oil material containing single and multiple ring aromatic compounds. Then, water is removed from the deoxygenated oil material. The next step is distillation to remove the single ring aromatic compounds as gasoline. In the third step, the multiple ring aromatics remaining in the deoxygenated oil material are cracked in the presence of hydrogen to produce a cracked oil material containing single ring aromatic compounds. Finally, the cracked oil material is then distilled to remove the single ring aromatics as gasoline.

  12. Method of upgrading oils containing hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline

    DOEpatents

    Baker, E.G.; Elliott, D.C.

    1993-01-19

    The present invention is a multi-stepped method of converting an oil which is produced by various biomass and coal conversion processes and contains primarily single and multiple ring hydroxyaromatic hydrocarbon compounds to highly aromatic gasoline. The single and multiple ring hydroxyaromatic hydrocarbon compounds in a raw oil material are first deoxygenated to produce a deoxygenated oil material containing single and multiple ring aromatic compounds. Then, water is removed from the deoxygenated oil material. The next step is distillation to remove the single ring aromatic compounds as gasoline. In the third step, the multiple ring aromatics remaining in the deoxygenated oil material are cracked in the presence of hydrogen to produce a cracked oil material containing single ring aromatic compounds. Finally, the cracked oil material is then distilled to remove the single ring aromatics as gasoline.

  13. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

    The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
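
    As a toy stand-in for the relevance step (not the chapter's actual model), a TF-IDF text classifier can score whether an article is biosurveillance-related; the training snippets and labels below are invented:

        # Minimal relevance classifier over article text.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        docs = ["officials report an outbreak of avian influenza at a farm",
                "local team wins the regional football championship",
                "hospital confirms three new measles cases this week",
                "city council debates new parking regulations"]
        labels = [1, 0, 1, 0]                # 1 = relevant to biosurveillance

        relevance = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                  LogisticRegression())
        relevance.fit(docs, labels)
        print(relevance.predict_proba(
            ["new cluster of respiratory illness reported"])[:, 1])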

  14. Evaluation of Multiple Immunoassay Technology Platforms to Select the Anti-Drug Antibody Assay Exhibiting the Most Appropriate Drug and Target Tolerance

    PubMed Central

    Collet-Brose, Justine

    2016-01-01

    The aim of this study was, at the assay development stage and thus with an appropriate degree of rigor, to select the most appropriate technology platform and sample pretreatment procedure for a clinical ADA assay. Thus, ELISA, MSD, Gyrolab, and AlphaLISA immunoassay platforms were evaluated in association with target depletion and acid dissociation sample pretreatment steps. An acid dissociation step successfully improved the drug tolerance for all 4 technology platforms and the required drug tolerance was achieved with the Gyrolab and MSD platforms. The target tolerance was shown to be better for the ELISA format, where an acid dissociation treatment step alone was sufficient to achieve the desired target tolerance. However, inclusion of a target depletion step in conjunction with the acid treatment raised the target tolerance to the desired level for all of the technologies. A higher sensitivity was observed for the MSD and Gyrolab assays and the ELISA, MSD, and Gyrolab all displayed acceptable interdonor variability. This study highlights the usefulness of evaluating the performance of different assay platforms at an early stage in the assay development process to aid in the selection of the best fit-for-purpose technology platform and sample pretreatment steps. PMID:27243038

  15. Parsing the roles of neck-linker docking and tethered head diffusion in the stepping dynamics of kinesin.

    PubMed

    Zhang, Zhechun; Goldtzvik, Yonathan; Thirumalai, D

    2017-11-14

    Kinesin walks processively on microtubules (MTs) in an asymmetric hand-over-hand manner consuming one ATP molecule per 16-nm step. The individual contributions due to docking of the approximately 13-residue neck linker to the leading head (deemed to be the power stroke) and diffusion of the trailing head (TH) that contributes in propelling the motor by 16 nm have not been quantified. We use molecular simulations by creating a coarse-grained model of the MT-kinesin complex, which reproduces the measured stall force as well as the force required to dislodge the motor head from the MT, to show that nearly three-quarters of the step occurs by bidirectional stochastic motion of the TH. However, docking of the neck linker to the leading head constrains the extent of diffusion and minimizes the probability that kinesin takes side steps, implying that both events are necessary in the motility of kinesin and for the maintenance of processivity. Surprisingly, we find that during a single step, the TH stochastically hops multiple times between the geometrically accessible neighboring sites on the MT before forming a stable interaction with the target binding site with correct orientation between the motor head and the αβ-tubulin dimer.

  16. Power- and Low-Resistance-State-Dependent, Bipolar Reset-Switching Transitions in SiN-Based Resistive Random-Access Memory

    NASA Astrophysics Data System (ADS)

    Kim, Sungjun; Park, Byung-Gook

    2016-08-01

    A study on the bipolar resistive switching of an Ni/SiN/Si-based resistive random-access memory (RRAM) device shows that the influences of the reset power and the resistance value of the low-resistance state (LRS) on the reset-switching transitions are strong. For a low LRS with a large conducting path, sharp reset switching, which requires a high reset power (>7 mW), was observed, whereas for a high LRS with multiple small conducting paths, step-by-step reset switching with a low reset power (<7 mW) was observed. The attainment of higher nonlinear current-voltage (I-V) characteristics in terms of the step-by-step reset switching is due to the steep current-increase region of the trap-controlled space-charge-limited current (SCLC) model. A multilevel cell (MLC) operation, for which the reset stop voltage (V_STOP) is used in the DC sweep mode and an incremental amplitude is used in the pulse mode for the step-by-step reset switching, is demonstrated here. The results of the present study suggest that well-controlled conducting paths in a SiN-based RRAM device, which are not too strong and not too weak, offer considerable potential for the realization of low-power and high-density crossbar-array applications.

  17. Synthesis of phenanthridinones from N-methoxybenzamides and arenes by multiple palladium-catalyzed C-H activation steps at room temperature.

    PubMed

    Karthikeyan, Jaganathan; Cheng, Chien-Hong

    2011-10-10

    Many steps make light work: substituted phenanthridinones can be obtained with high regioselectivity and in very good yields by palladium-catalyzed cyclization reactions of N-methoxybenzamides with arenes. The reaction proceeds through multiple oxidative C-H activation and C-C/C-N formation steps in one pot at room temperature, and thus provides a simple method for generating bioactive phenanthridinones. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A Microwell-Printing Fabrication Strategy for the On-Chip Templated Biosynthesis of Protein Microarrays for Surface Plasmon Resonance Imaging

    PubMed Central

    Manuel, Gerald; Lupták, Andrej; Corn, Robert M.

    2017-01-01

    A two-step templated, ribosomal biosynthesis/printing method for the fabrication of protein microarrays for surface plasmon resonance imaging (SPRI) measurements is demonstrated. In the first step, a sixteen-component microarray of proteins is created in microwells by cell-free, on-chip protein synthesis; each microwell contains both an in vitro transcription and translation (IVTT) solution and 350 femtomoles of a specific DNA template sequence that together are used to create approximately 40 picomoles of a specific hexahistidine-tagged protein. In the second step, the protein microwell array is used to contact print one or more protein microarrays onto nitrilotriacetic acid (NTA)-functionalized gold thin film SPRI chips for real-time SPRI surface bioaffinity adsorption measurements. Even though each microwell array element only contains approximately 40 picomoles of protein, the concentration is sufficiently high for the efficient bioaffinity adsorption and capture of the approximately 100 femtomoles of hexahistidine-tagged protein required to create each SPRI microarray element. As a first example, the protein biosynthesis process is verified with fluorescence imaging measurements of a microwell array containing His-tagged green fluorescent protein (GFP), yellow fluorescent protein (YFP) and mCherry (RFP), and then the fidelity of SPRI chips printed from this protein microwell array is ascertained by measuring the real-time adsorption of various antibodies specific to these three structurally related proteins. This greatly simplified two-step synthesis/printing fabrication methodology eliminates most of the handling, purification and processing steps normally required in the synthesis of multiple protein probes, and enables the rapid fabrication of SPRI protein microarrays from DNA templates for the study of protein-protein bioaffinity interactions. PMID:28706572

  19. Using Multiple-Stimulus without Replacement Preference Assessments to Increase Student Engagement and Performance

    ERIC Educational Resources Information Center

    Weaver, Adam D.; McKevitt, Brian C.; Farris, Allie M.

    2017-01-01

    Multiple-stimulus without replacement preference assessment is a research-based method for identifying appropriate rewards for students with emotional and behavioral disorders. This article presents a brief history of how this technology evolved and describes a step-by-step approach for conducting the procedure. A discussion of necessary materials…

  20. Identification of the potentiating mutations and synergistic epistasis that enabled the evolution of inter-species cooperation

    DOE PAGES

    Douglas, Sarah M.; Chubiz, Lon M.; Harcombe, William R.; ...

    2017-05-11

    Microbes often engage in cooperation through releasing biosynthetic compounds required by other species to grow. Given that production of costly biosynthetic metabolites is generally subjected to multiple layers of negative feedback, single mutations may frequently be insufficient to generate cooperative phenotypes. Synergistic epistatic interactions between multiple coordinated changes may thus often underlie the evolution of cooperation through overproduction of metabolites. To test the importance of synergistic mutations in cooperation we used an engineered bacterial consortium of an Escherichia coli methionine auxotroph and Salmonella enterica. S. enterica relies on carbon by-products from E. coli if lactose is the only carbon source. Directly selecting wild-type S. enterica in an environment that favored cooperation through secretion of methionine only once led to a methionine producer, and this producer both took a long time to emerge and was not very effective at cooperating. On the other hand, when an initial selection for resistance of S. enterica to a toxic methionine analog, ethionine, was used, subsequent selection for cooperation with E. coli was rapid, and the resulting double mutants were much more effective at cooperation. We found that potentiating mutations in metJ increase expression of metA, which encodes the first step of methionine biosynthesis. This increase in expression is required for the previously identified actualizing mutations in metA to generate cooperation. This work highlights that where biosynthesis of metabolites involves multiple layers of regulation, significant secretion of those metabolites may require multiple mutations, thereby constraining the evolution of cooperation.

  1. Identification of the potentiating mutations and synergistic epistasis that enabled the evolution of inter-species cooperation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas, Sarah M.; Chubiz, Lon M.; Harcombe, William R.

    Microbes often engage in cooperation through releasing biosynthetic compounds required by other species to grow. Given that production of costly biosynthetic metabolites is generally subjected to multiple layers of negative feedback, single mutations may frequently be insufficient to generate cooperative phenotypes. Synergistic epistatic interactions between multiple coordinated changes may thus often underlie the evolution of cooperation through overproduction of metabolites. To test the importance of synergistic mutations in cooperation we used an engineered bacterial consortium of an Escherichia coli methionine auxotroph and Salmonella enterica. S. enterica relies on carbon by-products from E. coli if lactose is the only carbon source. Directly selecting wild-type S. enterica in an environment that favored cooperation through secretion of methionine only once led to a methionine producer, and this producer both took a long time to emerge and was not very effective at cooperating. On the other hand, when an initial selection for resistance of S. enterica to a toxic methionine analog, ethionine, was used, subsequent selection for cooperation with E. coli was rapid, and the resulting double mutants were much more effective at cooperation. We found that potentiating mutations in metJ increase expression of metA, which encodes the first step of methionine biosynthesis. This increase in expression is required for the previously identified actualizing mutations in metA to generate cooperation. This work highlights that where biosynthesis of metabolites involves multiple layers of regulation, significant secretion of those metabolites may require multiple mutations, thereby constraining the evolution of cooperation.

  2. Viroporins, Examples of the Two-Stage Membrane Protein Folding Model.

    PubMed

    Martinez-Gil, Luis; Mingarro, Ismael

    2015-06-26

    Viroporins are small, α-helical, hydrophobic virus-encoded proteins, engineered to form homo-oligomeric hydrophilic pores in the host membrane. Viroporins participate in multiple steps of the viral life cycle, from entry to budding. Like any other membrane protein, viroporins have to find a way to bury their hydrophobic regions into the lipid bilayer. Once within the membrane, the hydrophobic helices of viroporins interact with each other to form higher-ordered structures required to correctly perform their porating activities. This two-step process resembles the two-stage model proposed for membrane protein folding by Engelman and Popot. In this review we use the membrane protein folding model as a leading thread to analyze the mechanism and forces behind the membrane insertion and folding of viroporins. We start by describing the transmembrane segment architecture of viroporins, including the number and sequence characteristics of their membrane-spanning domains. Next, we connect the differences found among viroporin families to their viral genome organization, and finalize by focusing on the pathways used by viroporins in their way to the membrane and on the transmembrane helix-helix interactions required to achieve proper folding and assembly.

  3. Long-term Blood Pressure Measurement in Freely Moving Mice Using Telemetry.

    PubMed

    Alam, Mohammad Afaque; Parks, Cory; Mancarella, Salvatore

    2016-05-17

    During the development of new vasoactive agents, arterial blood pressure monitoring is crucial for evaluating the efficacy of the new proposed drugs. Indeed, research focusing on the discovery of new potential therapeutic targets using genetically altered mice requires a reliable, long-term assessment of systemic arterial pressure variation. Currently, the gold standard for obtaining long-term measurements of blood pressure in ambulatory mice uses implantable radio-transmitters, which require artery cannulation. This technique eliminates the need for tethering, restraining, or anesthetizing the animals, all of which introduce stress and artifacts during data sampling. However, arterial blood pressure monitoring in mice via catheterization can be rather challenging due to the small size of the arteries. Here we present a step-by-step guide illustrating the key steps for a successful subcutaneous implantation of radio-transmitters and carotid artery cannulation in mice. We also include examples of long-term blood pressure activity taken from freely moving mice after a period of post-surgery recovery. Following this procedure will allow reliable direct blood pressure recordings from multiple animals simultaneously.

  4. Zn2+-dependent Activation of the Trk Signaling Pathway Induces Phosphorylation of the Brain-enriched Tyrosine Phosphatase STEP

    PubMed Central

    Poddar, Ranjana; Rajagopal, Sathyanarayanan; Shuttleworth, C. William; Paul, Surojit

    2016-01-01

    Excessive release of Zn2+ in the brain is implicated in the progression of acute brain injuries. Although several signaling cascades have been reported to be involved in Zn2+-induced neurotoxicity, a potential contribution of tyrosine phosphatases in this process has not been well explored. Here we show that exposure to high concentrations of Zn2+ led to a progressive increase in phosphorylation of the striatal-enriched phosphatase (STEP), a component of the excitotoxic-signaling pathway that plays a role in neuroprotection. Zn2+-mediated phosphorylation of STEP61 at multiple sites (hyperphosphorylation) was induced by the up-regulation of brain-derived neurotrophic factor (BDNF), tropomyosin receptor kinase (Trk) signaling, and activation of cAMP-dependent protein kinase A (PKA). Mutational studies further show that differential phosphorylation of STEP61 at the PKA sites Ser-160 and Ser-221 regulates the affinity of STEP61 toward its substrates. Consistent with these findings we also show that BDNF/Trk/PKA-mediated signaling is required for Zn2+-induced phosphorylation of extracellular regulated kinase 2 (ERK2), a substrate of STEP that is involved in Zn2+-dependent neurotoxicity. The strong correlation between the temporal profile of STEP61 hyperphosphorylation and ERK2 phosphorylation indicates that loss of function of STEP61 through phosphorylation is necessary for maintaining sustained ERK2 phosphorylation. This interpretation is further supported by the findings that deletion of the STEP gene led to a rapid and sustained increase in ERK2 phosphorylation within minutes of exposure to Zn2+. The study provides further insight into the mechanisms of regulation of STEP61 and also offers a molecular basis for the Zn2+-induced sustained activation of ERK2. PMID:26574547

  5. Self-aligned quadruple patterning using spacer on spacer integration optimization for N5

    NASA Astrophysics Data System (ADS)

    Thibaut, Sophie; Raley, Angélique; Mohanty, Nihar; Kal, Subhadeep; Liu, Eric; Ko, Akiteru; O'Meara, David; Tapily, Kandabara; Biolsi, Peter

    2017-04-01

    To meet scaling requirements, the semiconductor industry has extended 193 nm immersion lithography beyond its minimum pitch limitation using multiple patterning schemes such as self-aligned double patterning, self-aligned quadruple patterning, and litho-etch/litho-etch iterations. These techniques have been developed into numerous variants in the last few years. Spacer-on-spacer pitch-splitting integration has been shown to have multiple advantages compared to the conventional pitch-splitting approach. Reducing the number of pattern transfer steps associated with sacrificial layers resulted in a significant decrease in cost and an overall simplification of the double pitch-split technique. While demonstrating attractive aspects, the SAQP spacer-on-spacer flow brings challenges of its own, namely material set selection and etch chemistry development for adequate selectivities, and mandrel and spacer shape engineering to reduce edge placement error (EPE). In this paper we follow up on and extend our previous work, examining in more detail the robustness of the integration with regard to final pattern transfer and full-wafer critical dimension uniformity. Furthermore, since the number of intermediate steps is reduced, one would expect improved uniformity and pitch-walking control. This assertion is verified through a thorough pitch-walking analysis.

  6. Mass fractionation processes of transition metal isotopes

    NASA Astrophysics Data System (ADS)

    Zhu, X. K.; Guo, Y.; Williams, R. J. P.; O'Nions, R. K.; Matthews, A.; Belshaw, N. S.; Canters, G. W.; de Waal, E. C.; Weser, U.; Burgess, B. K.; Salvato, B.

    2002-06-01

    Recent advances in mass spectrometry make it possible to utilise isotope variations of transition metals to address some important issues in solar system and biological sciences. Realisation of the potential offered by these new isotope systems however requires an adequate understanding of the factors controlling their isotope fractionation. Here we show the results of a broadly based study on copper and iron isotope fractionation during various inorganic and biological processes. These results demonstrate that: (1) naturally occurring inorganic processes can fractionate Fe isotopes to a detectable level even at temperatures of ˜1000°C, which challenges the previous view that Fe isotope variations in natural systems are unique biosignatures; (2) multiple-step equilibrium processes at low temperatures may cause large mass fractionation of transition metal isotopes even when the fractionation per single step is small; (3) oxidation-reduction is an important controlling factor of isotope fractionation of transition metal elements with multiple valences, which opens a wide range of applications of these new isotope systems, ranging from metal-silicate fractionation in the solar system to uptake pathways of these elements in biological systems; (4) organisms incorporate lighter isotopes of transition metals preferentially, and transition metal isotope fractionation occurs stepwise along their pathways within biological systems during their uptake.
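
    Point (2) is essentially compounding: if each equilibrium step multiplies the isotope ratio by a per-step factor alpha, an n-step chain multiplies it by alpha**n. A one-line check with an assumed per-step fractionation of 0.5 per mil (illustrative, not a measured value):

        # Ten small equilibrium steps compound to a large net fractionation.
        alpha, n_steps = 1.0005, 10            # 0.5 per mil per step, assumed
        print((alpha ** n_steps - 1) * 1000)   # ~5.0 per mil cumulative shift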

  7. Layered nano-gratings by electron beam writing to form 3-level diffractive optical elements for 3D phase-offset holographic lithography.

    PubMed

    Yuan, Liang Leon; Herman, Peter R

    2015-12-21

    A multi-level nanophotonic structure is a major goal in providing advanced optical functionalities as found in photonic crystals and metamaterials. A three-level nano-grating phase mask has been fabricated in an electron-beam resist (ma-N) to meet the requirement of holographic generation of a diamond-like 3D nanostructure in photoresist by a single exposure step. A 2D mask with 600 nm periodicity is presented for generating first-order diffracted beams with a preferred π/2 phase shift on the X- and Y-axes and with sufficient first-order diffraction efficiency of 3.5% at 800 nm wavelength for creating a 3D periodic nanostructure in SU-8 photoresist. The resulting 3D structure is anticipated to provide an 8% complete photonic band gap (PBG) upon silicon inversion. A thin SiO2 layer was used to isolate the grating layers and multiple spin-coating steps served to planarize the final resist layer. A reversible soft coating (aquaSAVE) was introduced to enable SEM inspection and verification of each insulating grating layer. This e-beam lithographic method is extensible to assembling multiple layers of a nanophotonic structure.

  8. 5 CFR 531.504 - Level of performance required for quality step increase.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Administrative Personnel; Office of Personnel Management; Civil Service Regulations; Pay Under the General Schedule; Quality Step Increases. § 531.504 Level of performance required for quality step increase: A quality step increase shall not be required but may be granted only...

  9. Quantitative Analysis of Bioactive Compounds from Aromatic Plants by Means of Dynamic Headspace Extraction and Multiple Headspace Extraction-Gas Chromatography-Mass Spectrometry.

    PubMed

    Omar, Jone; Olivares, Maitane; Alonso, Ibone; Vallejo, Asier; Aizpurua-Olaizola, Oier; Etxebarria, Nestor

    2016-04-01

    Seven monoterpenes in 4 aromatic plants (sage, cardamom, lavender, and rosemary) were quantified in liquid extracts and directly in solid samples by means of dynamic headspace-gas chromatography-mass spectrometry (DHS-GC-MS) and multiple headspace extraction-gas chromatography-mass spectrometry (MHSE), respectively. The monoterpenes were first extracted by means of supercritical fluid extraction (SFE) and analyzed by an optimized DHS-GC-MS. The optimization of the dynamic extraction step and the desorption/cryo-focusing step were tackled independently by experimental design assays. The best working conditions were set at 30 °C for the incubation temperature, 5 min of incubation time, and 40 mL of purge volume for the dynamic extraction step of these bioactive molecules. The conditions of the desorption/cryo-trapping step from the Tenax TA trap were set as follows: the temperature was increased from 30 to 300 °C at 150 °C/min, while the cryo-trap was maintained at -70 °C. In order to estimate the efficiency of the SFE process, the analysis of monoterpenes in the 4 aromatic plants was directly carried out by means of MHSE because it did not require any sample preparation. Good linearity (r² > 0.99) and reproducibility (relative standard deviation < 12%) were obtained for the solid and liquid quantification approaches, in the ranges of 0.5 to 200 ng and 10 to 500 ng/mL, respectively. The developed methods were applied to analyze the concentration of 7 monoterpenes in aromatic plants, obtaining concentrations in the ranges of 2 to 6000 ng/g and 0.25 to 110 μg/mg, respectively. © 2016 Institute of Food Technologists®
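
    MHSE quantification conventionally exploits the geometric decay of peak areas over consecutive extractions of the same vial, so the total analyte signal follows from a log-linear fit; the peak areas below are invented, not the paper's data:

        # Total analyte signal from multiple headspace extraction peak areas.
        import numpy as np

        areas = np.array([1000.0, 720.0, 515.0, 370.0])   # consecutive extractions
        i = np.arange(len(areas))
        slope, intercept = np.polyfit(i, np.log(areas), 1)
        q = np.exp(slope)                                  # per-extraction ratio
        total = np.exp(intercept) / (1 - q)                # geometric series sum
        print(f"q = {q:.3f}, total area = {total:.0f}")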

  10. Geometrical correction of the e-beam proximity effect for raster scan systems

    NASA Astrophysics Data System (ADS)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-06-01

    Increasing demands on pattern fidelity and CD accuracy in e-beam lithography require a correction of the e-beam proximity effect. The new needs are mainly coming from OPC at mask level and x-ray lithography. The e-beam proximity effect limits the achievable resolution and affects neighboring structures, causing under- or over-exposure depending on the local pattern densities and process settings. Methods to compensate for this unbalanced dose distribution usually use dose modulation or multiple passes. In general, raster scan systems are not able to apply variable doses in order to compensate for the proximity effect. For systems of this kind, a geometrical modulation of the original pattern offers a solution for compensation of line edge deviations due to the proximity effect. In this paper a new method for the fast correction of the e-beam proximity effect via geometrical pattern optimization is described. The method consists of two steps. In the first step, the pattern-dependent dose distribution caused by backscattering is calculated by convolution of the pattern with the long-range part of the proximity function. The restriction to the long-range part results in a quadratic speed gain in computing time for the transformation. The influence of the short-range part coming from forward scattering is not pattern dependent and can therefore be determined separately in a second step. The second calculation yields the dose curve at the border of a written structure. The finite gradient of this curve leads to an edge displacement depending on the amount of background dose at the observed position, which was previously determined in the pattern-dependent step. This unintended edge displacement is corrected by splitting the line into segments and shifting them by multiples of the writer's address grid in the opposite direction.
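
    The pattern-dependent first step can be sketched as a convolution of the binary exposure map with the long-range (backscatter) Gaussian of a two-Gaussian proximity function; the parameters below (eta, beta) are generic illustrative values, not those of any specific process:

        # Background (backscatter) dose by long-range Gaussian convolution.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        pixel_nm = 25.0
        pattern = np.zeros((400, 400))
        pattern[180:220, 100:300] = 1.0       # a single exposed line, dose 1

        eta = 0.7                             # backscatter-to-forward dose ratio
        beta_nm = 3000.0                      # backscatter range
        background = eta * gaussian_filter(pattern, sigma=beta_nm / pixel_nm,
                                           mode="constant")
        # Edge segments would then be shifted against this local background dose.
        print("peak background dose:", background.max())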

  11. One-step global parameter estimation of kinetic inactivation parameters for Bacillus sporothermodurans spores under static and dynamic thermal processes.

    PubMed

    Cattani, F; Dolan, K D; Oliveira, S D; Mishra, D K; Ferreira, C A S; Periago, P M; Aznar, A; Fernandez, P S; Valdramidis, V P

    2016-11-01

    Bacillus sporothermodurans produces highly heat-resistant endospores that can survive ultra-high-temperature processing. Highly heat-resistant spore-forming bacteria are one of the main causes of spoilage and safety problems in low-acid foods. They can be used as indicators or surrogates to establish the minimum requirements for heat processes, but it is necessary to understand their thermal inactivation kinetics. The aim of the present work was to study the inactivation kinetics under both static and dynamic conditions in a vegetable soup. Ordinary least squares one-step regression and sequential procedures were applied for estimating these parameters. Results showed that multiple dynamic heating profiles, when analyzed simultaneously, can be used to accurately estimate the kinetic parameters while significantly reducing estimation errors and data collection. Copyright © 2016 Elsevier Ltd. All rights reserved.
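
    With a Bigelow-type model, log10 survivors decline at rate 1/D(T(t)), so a one-step fit regresses all survivor counts directly on the integrated temperature history. A minimal sketch with an invented linear heating profile and parameters (not the paper's soup data):

        # One-step least-squares fit of Bigelow parameters under a dynamic profile.
        import numpy as np
        from scipy.integrate import cumulative_trapezoid
        from scipy.optimize import curve_fit

        t = np.linspace(0, 20, 50)            # minutes
        T = 95 + 1.0 * t                      # measured dynamic profile, degC

        def log_survivors(t, logN0, Dref, z, Tref=121.1):
            rate = 1.0 / (Dref * 10 ** ((Tref - T) / z))    # 1/D(T(t)), log10/min
            return logN0 - cumulative_trapezoid(rate, t, initial=0)

        logN = log_survivors(t, 6.0, 0.5, 9.0)              # simulated counts
        logN += 0.05 * np.random.default_rng(3).normal(size=t.size)

        popt, _ = curve_fit(log_survivors, t, logN, p0=(6, 1, 10))
        print("logN0, Dref, z =", popt)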

  12. Reenacting the birth of an intron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellsten, Uffe; Aspden, Julie L.; Rio, Donald C.

    2011-07-01

    An intron is an extended genomic feature whose function requires multiple constrained positions - donor and acceptor splice sites, a branch point, a polypyrimidine tract and suitable splicing enhancers - that may be distributed over hundreds or thousands of nucleotides. New introns are therefore unlikely to emerge by incremental accumulation of functional sub-elements. Here we demonstrate that a functional intron can be created de novo in a single step by a segmental genomic duplication. This experiment recapitulates in vivo the birth of an intron that arose in the ancestral jawed vertebrate lineage nearly half a billion years ago.

  13. CellSort: a support vector machine tool for optimizing fluorescence-activated cell sorting and reducing experimental effort.

    PubMed

    Yu, Jessica S; Pertusi, Dante A; Adeniran, Adebola V; Tyo, Keith E J

    2017-03-15

    High throughput screening by fluorescence-activated cell sorting (FACS) is a common task in protein engineering and directed evolution. It can also be a rate-limiting step if high false positive or negative rates necessitate multiple rounds of enrichment. Current FACS software requires the user to define sorting gates by intuition and is practically limited to two dimensions. In cases when multiple rounds of enrichment are required, the software cannot forecast the enrichment effort required. We have developed CellSort, a support vector machine (SVM) algorithm that identifies optimal sorting gates based on machine learning using positive and negative control populations. CellSort can take advantage of more than two dimensions to enhance the ability to distinguish between populations. We also present a Bayesian approach to predict the number of sorting rounds required to enrich a population from a given library size. This Bayesian approach allowed us to determine strategies for biasing the sorting gates in order to reduce the required number of enrichment rounds. This algorithm should be generally useful for improving sorting outcomes and reducing effort when using FACS. Source code available at http://tyolab.northwestern.edu/tools/ . k-tyo@northwestern.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
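
    The central idea — train an SVM on positive and negative control populations and use its decision function as a multi-dimensional gate — can be sketched as follows. The simulated three-channel events are stand-ins; CellSort itself adds gate biasing and the Bayesian round forecasting on top:

        # SVM "sorting gate" trained on control populations.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        neg = rng.normal(loc=[2.0, 2.0, 1.0], scale=0.5, size=(500, 3))
        pos = rng.normal(loc=[3.5, 3.0, 2.0], scale=0.5, size=(500, 3))
        X = np.vstack([neg, pos])
        y = np.repeat([0, 1], 500)

        gate = SVC(kernel="rbf").fit(X, y)    # 3-D gate, not limited to 2 channels
        events = rng.normal(loc=[3.0, 2.5, 1.5], scale=0.7, size=(5, 3))
        print(gate.predict(events))           # 1 = sort this event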

  14. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
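
    A histogram truncated to its highest-mass bins captures the spirit of the DCH; in this sketch the paper's distance-based dominant-color selection is replaced by simple mass ranking, and the bin count and 90% coverage threshold are assumptions:

        # Dominant color histogram: keep only the few bins covering most pixels.
        import numpy as np

        def dominant_color_histogram(pixels, bins=8, coverage=0.9):
            q = (pixels // (256 // bins)).astype(int)        # quantize channels
            idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
            hist = np.bincount(idx, minlength=bins ** 3) / len(pixels)
            order = np.argsort(hist)[::-1]                   # bins by mass
            k = int(np.searchsorted(np.cumsum(hist[order]), coverage)) + 1
            dch = np.zeros_like(hist)
            dch[order[:k]] = hist[order[:k]]                 # dominant bins only
            return dch / dch.sum()

        pixels = np.random.default_rng(5).integers(0, 256, size=(10000, 3))
        print(np.count_nonzero(dominant_color_histogram(pixels)))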

  15. A Specific Long-Term Plan for Management of U.S. Nuclear Spent Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Salomon

    2006-07-01

    A specific plan consisting of six different steps is proposed to accelerate and improve the long-term management of U.S. Light Water Reactor (LWR) spent nuclear fuel. The first step is to construct additional, centralized, engineered (dry cask) spent fuel facilities to have a backup solution to Yucca Mountain (YM) delays or lack of capacity. The second step is to restart the development of the Integral Fast Reactor (IFR), in a burner mode, because of its inherent safety characteristics and its extensive past development in contrast to Accelerator Driven Systems (ADS). The IFR and an improved non-proliferation version of its pyro-processing technology can burn the plutonium (Pu) and minor actinides (MA) obtained by reprocessing LWR spent fuel. The remaining IFR and LWR fission products will be treated for storage at YM. The radiotoxicity of that high level waste (HLW) will fall below that of natural uranium in less than one thousand years. Due to anticipated increased capital, maintenance, and research costs for IFR, the third step is to reduce the required number of IFRs and their potential delays by implementing multiple recycles of Pu and Neptunium (Np) MA in LWR. That strategy is to use an advanced separation process, UREX+, and the MIX Pu option where the role and degradation of Pu is limited by uranium enrichment. UREX+ will decrease proliferation risks by avoiding Pu separation while the MIX fuel will lead to an equilibrium fuel recycle mode in LWR which will reduce U. S. Pu inventory and deliver much smaller volumes of less radioactive HLW to YM. In both steps two and three, Research and Development (R and D) is to emphasize the demonstration of multiple fuel reprocessing and fabrication, while improving HLW treatment, increasing proliferation resistance, and reducing losses of fissile material. The fourth step is to license and construct YM because it is needed for the disposal of defense wastes and the HLW to be generated under the proposed plan. The fifth step consists of developing a risk informed methodology to assess the various options available for disposition of LWR spent fuel and to select among them. The sixth step is to modify the current U. S. infrastructure and to create a climate to increase the utilization of uranium and the sustainability of nuclear generated electricity. (author)

  16. Influence of phase inversion on the formation and stability of one-step multiple emulsions.

    PubMed

    Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J

    2009-07-21

    A novel method of preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60 day test period. Previously reported multiple emulsions prepared by the one-step method were abnormal systems that formed at the inversion point of simple emulsions (where there is an incompatibility in the Ostwald and Bancroft theories, and typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point on multiple emulsion formation by the one-step method via phase inversion processes. Although the significance of ultralow surface tension has been speculated previously, to the best of our knowledge, this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature, but also upon the component ratios, therefore both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not a result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are a result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.

  17. Streptococcus oralis Neuraminidase Modulates Adherence to Multiple Carbohydrates on Platelets.

    PubMed

    Singh, Anirudh K; Woodiga, Shireen A; Grau, Margaret A; King, Samantha J

    2017-03-01

    Adherence to host surfaces is often mediated by bacterial binding to surface carbohydrates. Although it is widely appreciated that some bacterial species express glycosidases, previous studies have not considered whether bacteria bind to multiple carbohydrates within host glycans as they are modified by bacterial glycosidases. Streptococcus oralis is a leading cause of subacute infective endocarditis. Binding to platelets is a critical step in disease; however, the mechanisms utilized by S. oralis remain largely undefined. Studies revealed that S. oralis, like Streptococcus gordonii and Streptococcus sanguinis, binds platelets via terminal sialic acid. However, unlike those organisms, S. oralis produces a neuraminidase, NanA, which cleaves terminal sialic acid. Further studies revealed that following NanA-dependent removal of terminal sialic acid, S. oralis bound exposed β-1,4-linked galactose. Adherence to both these carbohydrates required Fap1, the S. oralis member of the serine-rich repeat protein (SRRP) family of adhesins. Mutation of a conserved residue required for sialic acid binding by other SRRPs significantly reduced platelet binding, supporting the hypothesis that Fap1 binds this carbohydrate. The mechanism by which Fap1 contributes to β-1,4-linked galactose binding remains to be defined; however, binding may occur via additional domains of unknown function within the nonrepeat region, one of which shares some similarity with a carbohydrate binding module. This study is the first demonstration that an SRRP is required to bind β-1,4-linked galactose and the first time that one of these adhesins has been shown to be required for binding of multiple glycan receptors. Copyright © 2017 American Society for Microbiology.

  18. Multiplexed Affinity-Based Separation of Proteins and Cells Using Inertial Microfluidics.

    PubMed

    Sarkar, Aniruddh; Hou, Han Wei; Mahan, Alison E; Han, Jongyoon; Alter, Galit

    2016-03-30

    Isolation of low abundance proteins or rare cells from complex mixtures, such as blood, is required for many diagnostic, therapeutic and research applications. Current affinity-based protein or cell separation methods use binary 'bind-elute' separations and are inefficient when applied to the isolation of multiple low-abundance proteins or cell types. We present a method for rapid and multiplexed, yet inexpensive, affinity-based isolation of both proteins and cells, using a size-coded mixture of multiple affinity-capture microbeads and an inertial microfluidic particle sorter device. In a single binding step, different targets (cells or proteins) bind to beads of different sizes, which are then sorted by flowing them through a spiral microfluidic channel. This technique performs continuous-flow, high throughput affinity-separation of milligram-scale protein samples or millions of cells in minutes after binding. We demonstrate the simultaneous isolation of multiple antibodies from serum and multiple cell types from peripheral blood mononuclear cells or whole blood. We use the technique to isolate low abundance antibodies specific to different HIV antigens and rare HIV-specific cells from blood obtained from HIV+ patients.

  19. A novel and simple test of gait adaptability predicts gold standard measures of functional mobility in stroke survivors.

    PubMed

    Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A

    2016-01-01

    Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets on a 6 m walkway, placed to elicit step lengthening, shortening and narrowing on paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed was recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple-linear regression was used to model the relationships between: total targets missed, number missed with paretic and non-paretic legs, target stepping speed, and each clinical measure. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.
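
    The stepwise model-building procedure can be sketched as a simple forward selection; the synthetic predictors below are stand-ins for the study's variables, and the entry criterion is an illustrative assumption.

      # A minimal sketch of forward-selection multiple linear regression,
      # analogous to modelling a clinical score from target-stepping measures.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(2)
      n = 42
      cols = ["miss_paretic", "miss_nonparetic", "speed", "total_missed"]
      X = rng.normal(size=(n, len(cols)))
      y = 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n)  # outcome driven by speed

      selected, remaining, best_r2 = [], list(range(len(cols))), -np.inf
      while remaining:
          # Try each remaining predictor; keep the one improving R^2 the most.
          scores = [r2_score(y, LinearRegression()
                             .fit(X[:, selected + [j]], y)
                             .predict(X[:, selected + [j]])) for j in remaining]
          if max(scores) - best_r2 < 0.02:               # crude entry criterion
              break
          best = remaining[int(np.argmax(scores))]
          best_r2 = max(scores)
          selected.append(best)
          remaining.remove(best)
      print("selected predictors:", [cols[j] for j in selected])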

  20. Lunar Atmosphere and Dust Environment Explorer Integration and Test

    NASA Technical Reports Server (NTRS)

    Wright, Michael R.; McCormick, John L.; Hoffman, Richard G.

    2010-01-01

    Integration and test (I&T) of the Lunar Atmosphere and Dust Environment Explorer (LADEE) is presented. A collaborative NASA project between Goddard Space Flight Center and Ames Research Center, LADEE's mission is to explore the low lunar orbit environment and exosphere for constituents. Its instruments include two spectrometers, a dust detector, and a laser communication technology demonstration. Although a relatively low-cost spacecraft, LADEE has I&T requirements typical of most planetary probes, such as prelaunch contamination control, sterilization, and instrument calibration. To lead to a successful mission, I&T at the spacecraft, instrument, and observatory level must include step-by-step and end-to-end functional, environmental, and performance testing. Due to its compressed development schedule, LADEE I&T planning requires adjusting test flows and sequences to account for long-lead critical-path items and limited spares. A protoflight test-level strategy is also baselined. However, the program benefits from having two independent but collaborative teams of engineers, managers, and technicians that have a wealth of flight project experience. This paper summarizes the LADEE I&T planning, flow, facilities, and probe-unique processes. Coordination of requirements and approaches to I&T when multiple organizations are involved is discussed. Also presented are cost-effective approaches to I&T that are transferable to most any spaceflight project I&T program.

  1. Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks

    NASA Astrophysics Data System (ADS)

    Ho, Tzung-Hsien; Trisno, Sugianto; Smolyaninov, Igor I.; Milner, Stuart D.; Davis, Christopher C.

    2004-02-01

    Free space, dynamic, optical wireless communications will require topology control for optimization of network performance. Such networks may need to be configured for bi- or multiple-connectedness, reliability and quality-of-service. Topology control involves the introduction of new links and/or nodes into the network to achieve such performance objectives through autonomous reconfiguration as well as precise pointing, acquisition, tracking, and steering of laser beams. Reconfiguration may be required because of link degradation resulting from obscuration or node loss. As a result, the optical transceivers may need to be re-directed to new or existing nodes within the network and tracked on moving nodes. The redirection of transceivers may require operation over a whole sphere, so that small-angle beam steering techniques cannot be applied. In this context, we are studying the performance of optical wireless links using lightweight, bi-static transceivers mounted on high-performance stepping motor driven stages. These motors provide an angular resolution of 0.00072 degree at up to 80,000 steps per second. This paper focuses on the performance characteristics of these agile transceivers for pointing, acquisition, and tracking (PAT), including the influence of acceleration/deceleration time, motor angular speed, and angular re-adjustment, on latency and packet loss in small free space optical (FSO) wireless test networks.
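
    The quoted motor figures fix the pointing kinematics: 0.00072 degrees per step implies 500,000 steps per revolution, and 80,000 steps/s caps the slew rate at 57.6 degrees/s. A rough latency estimate follows (the fixed acceleration allowance is an assumption, not a measured motor profile):

      # Back-of-envelope re-pointing time for the stepper-driven transceiver.
      STEP_DEG = 0.00072        # angular resolution, degrees per step
      MAX_RATE = 80_000         # maximum step rate, steps per second

      def slew_time(angle_deg, accel_allowance=0.0):
          """Ideal time to re-point by angle_deg at the maximum step rate,
          plus an optional fixed acceleration/deceleration allowance (s)."""
          return (angle_deg / STEP_DEG) / MAX_RATE + accel_allowance

      print(f"{360 / STEP_DEG:.0f} steps per revolution")
      print(f"90 deg slew: {slew_time(90):.2f} s ideal, "
            f"{slew_time(90, accel_allowance=0.2):.2f} s with 0.2 s accel/decel")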

  2. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces.

    PubMed

    Pantanowitz, Liron; Labranche, Wayne; Lareau, William

    2010-05-26

    Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SaaS), and alternate solutions for outreach connectivity are discussed.

  4. Towards automated assistance for operating home medical devices.

    PubMed

    Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D

    2010-01-01

    To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach that explicitly encodes motion information: the algorithm detects interest points, encoding not only their local appearance but also explicitly modelling local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from 4 available cameras, to obtain an average class recognition rate of 69%.

  5. An Examination of Four Traditional School Physical Activity Models on Children's Step Counts and MVPA.

    PubMed

    Brusseau, Timothy A; Kulinna, Pamela H

    2015-03-01

    Schools have been identified as primary societal institutions for promoting children's physical activity (PA); however, limited evidence exists demonstrating which traditional school-based PA models maximize children's PA. The purpose of this study was to compare step counts and moderate-to-vigorous physical activity (MVPA) across 4 traditional school PA models. Step count and MVPA data were collected on 5 consecutive school days from 298 children (mean age = 10.0 ± 0.6 years; 55% female) in Grade 5. PA was measured using the NL-1000 piezoelectric pedometer. The 4 models included (a) recess only, (b) multiple recesses, (c) recess and physical education (PE), and (d) multiple recesses and PE. Children accumulated the greatest PA on days that they had PE and multiple recess opportunities (5,242 ± 1,690 steps; 15.3 ± 8.8 min of MVPA). Children accumulated the least amount of PA on days with only 1 recess opportunity (3,312 ± 445 steps; 7.1 ± 2.3 min of MVPA). Across all models, children accumulated an additional 1,140 steps and 4.1 min of MVPA on PE days. It appears that PE is the most important school PA opportunity for maximizing children's PA. However, on days without PE, a 2nd recess can increase school PA by 20% (Δ = 850 steps; 3.8 min of MVPA).

  6. Estimating VO2max Using a Personalized Step Test

    ERIC Educational Resources Information Center

    Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald

    2014-01-01

    The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…

  7. SUMMIT (Serially Unified Multicenter Multiple Sclerosis Investigation): creating a repository of deeply phenotyped contemporary multiple sclerosis cohorts.

    PubMed

    Bove, Riley; Chitnis, Tanuja; Cree, Bruce Ac; Tintoré, Mar; Naegelin, Yvonne; Uitdehaag, Bernard Mj; Kappos, Ludwig; Khoury, Samia J; Montalban, Xavier; Hauser, Stephen L; Weiner, Howard L

    2017-08-01

    There is a pressing need for robust longitudinal cohort studies in the modern treatment era of multiple sclerosis. The objective is to build a multiple sclerosis (MS) cohort repository to capture the variability of disability accumulation, as well as to provide the depth of characterization (clinical, radiologic, genetic, biospecimens) required to adequately model and ultimately predict a patient's course. Serially Unified Multicenter Multiple Sclerosis Investigation (SUMMIT) is an international multi-center, prospectively enrolled cohort with over a decade of comprehensive follow-up on more than 1000 patients from two large North American academic MS Centers (Brigham and Women's Hospital (Comprehensive Longitudinal Investigation of Multiple Sclerosis at the Brigham and Women's Hospital (CLIMB; BWH)) and University of California, San Francisco (Expression/genomics, Proteomics, Imaging, and Clinical (EPIC))). It is bringing online more than 2500 patients from additional international MS Centers (Basel (Universitätsspital Basel (UHB)), VU University Medical Center MS Center Amsterdam (MSCA), Multiple Sclerosis Center of Catalonia-Vall d'Hebron Hospital (Barcelona clinically isolated syndrome (CIS) cohort), and American University of Beirut Medical Center (AUBMC-Multiple Sclerosis Interdisciplinary Research (AMIR)). We provide evidence for harmonization of two of the initial cohorts in terms of the characterization of demographics, disease, and treatment-related variables; demonstrate several proof-of-principle analyses examining genetic and radiologic predictors of disease progression; and discuss the steps involved in expanding SUMMIT into a repository accessible to the broader scientific community.

  8. Clinical Importance of Steps Taken per Day among Persons with Multiple Sclerosis

    PubMed Central

    Motl, Robert W.; Pilutti, Lara A.; Learmonth, Yvonne C.; Goldman, Myla D.; Brown, Ted

    2013-01-01

    Background The number of steps taken per day (steps/day) provides a reliable and valid outcome of free-living walking behavior in persons with multiple sclerosis (MS). Objective This study examined the clinical meaningfulness of steps/day using the minimal clinically important difference (MCID) value across stages representing the developing impact of MS. Methods This study was a secondary analysis of de-identified data from 15 investigations totaling 786 persons with MS and 157 healthy controls. All participants provided demographic information and wore an accelerometer or pedometer during the waking hours of a 7-day period. Those with MS further provided real-life, health, and clinical information and completed the Multiple Sclerosis Walking Scale-12 (MSWS-12) and Patient Determined Disease Steps (PDDS) scale. MCID estimates were based on regression analyses and analysis of variance for between group differences. Results The mean MCID from self-report scales that capture subtle changes in ambulation (1-point change in PDDS scores and 10-point change in MSWS-12 scores) was 779 steps/day (14% of mean score for MS sample); the mean MCID for clinical/health outcomes (MS type, duration, weight status) was 1,455 steps/day (26% of mean score for MS sample); real-life anchors (unemployment, divorce, assistive device use) resulted in a mean MCID of 2,580 steps/day (45% of mean score for MS sample); and the MCID for the cumulative impact of MS (MS vs. control) was 2,747 steps/day (48% of mean score for MS sample). Conclusion The change in motion sensor output of ∼800 steps/day appears to represent a lower-bound estimate of clinically meaningful change in free-living walking behavior in interventions for MS. PMID:24023843

  9. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
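
    The acceptance logic can be sketched for a one-dimensional toy system; the potentials, integrator length and temperature below are illustrative assumptions, not the paper's setup.

      # A minimal sketch of the hybrid MD-MC idea behind DHMTS: propagate with
      # a cheap Hamiltonian, then Metropolis-accept against the expensive one,
      # so sampling stays consistent with the expensive Boltzmann distribution.
      import numpy as np

      rng = np.random.default_rng(3)
      beta, dt, nsteps = 1.0, 0.1, 10
      U_exp = lambda x: 0.5 * x**2 + 0.1 * np.sin(5 * x)  # expensive reference
      F_chp = lambda x: -x                                # force of the cheap H

      def hmc_step(x):
          p = rng.normal()                       # fresh momentum each step
          e0 = U_exp(x) + 0.5 * p * p            # expensive energy, before
          xn, pn = x, p
          for _ in range(nsteps):                # leapfrog under the cheap H
              pn += 0.5 * dt * F_chp(xn)
              xn += dt * pn
              pn += 0.5 * dt * F_chp(xn)
          e1 = U_exp(xn) + 0.5 * pn * pn         # expensive energy, after
          # Metropolis absorbs both surrogate bias and integration error.
          if rng.random() < np.exp(min(0.0, -beta * (e1 - e0))):
              return xn
          return x

      x, xs = 0.0, []
      for _ in range(5000):
          x = hmc_step(x)
          xs.append(x)
      print(f"<x^2> ~ {np.mean(np.square(xs)):.2f}")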

  10. The carboxyl-terminus directs TAF(I)48 to the nucleus and nucleolus and associates with multiple nuclear import receptors.

    PubMed

    Dynes, Joseph L; Xu, Shuping; Bothner, Sarah; Lahti, Jill M; Hori, Roderick T

    2004-03-01

    The protein complex Selectivity Factor 1, composed of TBP, TAF(I)48, TAF(I)63 and TAF(I)110, is required for rRNA transcription by RNA polymerase I in the nucleolus. The steps involved in targeting Selectivity Factor 1 will be dependent on the transport pathways that are used and the localization signals that direct this trafficking. In order to investigate these issues, we characterized human TAF(I)48, a subunit of Selectivity Factor 1. By domain analysis of TAF(I)48, the carboxyl-terminal 51 residues were found to be required for the localization of TAF(I)48, as well as sufficient to direct Green Fluorescent Protein to the nucleus and nucleolus. The carboxyl-terminus of TAF(I)48 also has the ability to associate with multiple members of the beta-karyopherin family of nuclear import receptors, including importin beta (karyopherin beta1), transportin (karyopherin beta2) and RanBP5 (karyopherin beta3), in a Ran-dependent manner. This property of interacting with multiple beta-karyopherins has been previously reported for the nuclear localization signals of some ribosomal proteins that are likewise directed to the nucleolus. This study identifies the first nuclear import sequence found within the TBP-Associated Factor subunits of Selectivity Factor 1.

  11. Real-Time Visualization of Network Behaviors for Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Bohn, Shawn J.; Love, Douglas V.

    Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for which the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.

  12. Translating Theory Into Practice: Implementing a Program of Assessment.

    PubMed

    Hauer, Karen E; O'Sullivan, Patricia S; Fitzhenry, Kristen; Boscardin, Christy

    2018-03-01

    A program of assessment addresses challenges in learner assessment using a centrally planned, coordinated approach that emphasizes assessment for learning. This report describes the steps taken to implement a program of assessment framework within a medical school. A literature review on best practices in assessment highlighted six principles that guided implementation of the program of assessment in 2016-2017: (1) a centrally coordinated plan for assessment aligns with and supports a curricular vision; (2) multiple assessment tools used longitudinally generate multiple data points; (3) learners require ready access to information-rich feedback to promote reflection and informed self-assessment; (4) mentoring is essential to facilitate effective data use for reflection and learning planning; (5) the program of assessment fosters self-regulated learning behaviors; and (6) expert groups make summative decisions about grades and readiness for advancement. Implementation incorporated stakeholder engagement, use of multiple assessment tools, design of a coaching program, and creation of a learner performance dashboard. The assessment team monitors adherence to principles defining the program of assessment and gathers and responds to regular feedback from key stakeholders, including faculty, staff, and students. Next steps include systematically collecting evidence for validity of individual assessments and the program overall. Iterative review of student performance data informs curricular improvements. The program of assessment also highlights technology needs that will be addressed with information technology experts. The outcome ultimately will entail showing evidence of validity that the program produces physicians who engage in lifelong learning and provide high-quality patient care.

  13. Specific arithmetic calculation deficits in children with Turner syndrome.

    PubMed

    Rovet, J; Szekely, C; Hockenberry, M N

    1994-12-01

    Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of arithmetic, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.

  14. Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments

    NASA Astrophysics Data System (ADS)

    Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon

    2013-04-01

    A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code, with multiple source and receiver dipoles that are finite in length, has been written to investigate transient EM problems. With the use of this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has much smaller amplitude of signal than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, being delayed with the depth of the HC layer.

  15. A method for real-time generation of augmented reality work instructions via expert movements

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Bhaskar; Winer, Eliot

    2015-03-01

    Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.

  16. Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip

    PubMed Central

    Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.

    2017-01-01

    Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10^5 copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028
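
    Digital readout on such a well array reduces to Poisson counting. A worked example, with the 224 wells and 100 nl volume taken from the abstract and a hypothetical number of positive wells:

      # Copies per microliter from the fraction of positive 100-nl wells.
      import math

      n_wells, well_nl = 224, 100
      positives = 80                              # hypothetical readout
      p = positives / n_wells
      lam = -math.log(1.0 - p)                    # mean copies/well (Poisson)
      copies_per_ul = lam / (well_nl * 1e-3)      # 100 nl = 0.1 microliter
      print(f"~{lam:.2f} copies/well -> ~{copies_per_ul:.0f} copies/uL")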

  17. History, rare, and multiple events of mechanical unfolding of repeat proteins

    NASA Astrophysics Data System (ADS)

    Sumbul, Fidan; Marchesi, Arin; Rico, Felix

    2018-03-01

    Mechanical unfolding of proteins consisting of repeat domains is an excellent tool to obtain large statistics. Force spectroscopy experiments using atomic force microscopy on proteins presenting multiple domains have revealed that unfolding forces depend on the number of folded domains (history) and have reported intermediate states and rare events. However, the common use of unspecific attachment approaches to pull the protein of interest has important limitations for studying unfolding history and may lead to discarding rare and multiple probing events due to the presence of unspecific adhesion and uncertainty on the pulling site. Site-specific methods that have recently emerged minimize this uncertainty and would be excellent tools to probe unfolding history and rare events. However, detailed characterization of these approaches is required to identify their advantages and limitations. Here, we characterize a site-specific binding approach based on the ultrastable complex dockerin/cohesin III, revealing its advantages and limitations for assessing the unfolding history and investigating rare and multiple events during the unfolding of repeated domains. We show that this approach is more robust, reproducible, and provides larger statistics than conventional unspecific methods. We show that the method is optimal to reveal the history of unfolding from the very first domain and to detect rare events, while being more limited for assessing intermediate states. Finally, we quantify the forces required to unfold two molecules pulled in parallel, which is difficult when using unspecific approaches. The proposed method represents a step forward toward more reproducible measurements to probe protein unfolding history and opens the door to systematic probing of rare and multiple molecule unfolding mechanisms.

  18. 40 CFR 141.133 - Compliance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specified by § 141.135(c). Systems may begin monitoring to determine whether Step 1 TOC removals can be met... the Step 1 requirements in § 141.135(b)(2) and must therefore apply for alternate minimum TOC removal (Step 2) requirements, is not eligible for retroactive approval of alternate minimum TOC removal (Step 2...

  19. 15 CFR 732.6 - Steps for other requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Steps for other requirements. 732.6...) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS STEPS FOR USING THE EAR § 732.6 Steps for other requirements. Sections 732.1 through 732.4 of this part are useful in...

  20. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    PubMed

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
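
    The proposed structure is easy to write down explicitly. Below is a sketch of the covariance matrix for one cluster, with illustrative sizes and a between-period correlation that decays geometrically with period separation (the paper's decay need not be geometric in general):

      # Covariance for T periods x m subjects per period in one cluster:
      # within-period ICC rho; between-period ICC rho * decay**|t - s|.
      import numpy as np

      def cluster_covariance(T, m, rho=0.05, decay=0.8, sigma2=1.0):
          V = np.empty((T * m, T * m))
          for t in range(T):
              for s in range(T):
                  V[t*m:(t+1)*m, s*m:(s+1)*m] = sigma2 * rho * decay ** abs(t - s)
          V[np.diag_indices_from(V)] = sigma2    # unit total variance
          return V

      V = cluster_covariance(T=4, m=5)
      print(V[0, 1], V[0, 5], V[0, 15])  # within-period vs decayed between-period

    Feeding such a matrix into a generalised least squares calculation for a given design matrix yields the treatment-effect variances whose sensitivity to the decay parameter the paper investigates.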

  1. Multifunctional nanocomposite hollow fiber membranes by solvent transfer induced phase separation.

    PubMed

    Haase, Martin F; Jeon, Harim; Hough, Noah; Kim, Jong Hak; Stebe, Kathleen J; Lee, Daeyeon

    2017-11-01

    The decoration of porous membranes with a dense layer of nanoparticles imparts useful functionality and can enhance membrane separation and anti-fouling properties. However, manufacturing of nanoparticle-coated membranes requires multiple steps and tedious processing. Here, we introduce a facile single-step method in which bicontinuous interfacially jammed emulsions are used to form nanoparticle-functionalized hollow fiber membranes. The resulting nanocomposite membranes prepared via solvent transfer-induced phase separation and photopolymerization have exceptionally high nanoparticle loadings (up to 50 wt% silica nanoparticles) and feature densely packed nanoparticles uniformly distributed over the entire membrane surfaces. These structurally well-defined, asymmetric membranes facilitate control over membrane flux and selectivity, enable the formation of stimuli responsive hydrogel nanocomposite membranes, and can be easily modified to introduce antifouling features. This approach provides a foundation for the development of advanced nanocomposite membranes comprising diverse building blocks with potential applications in water treatment, industrial separations and as catalytic membrane reactors.

  2. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates have been proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS, leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
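
    The binarization and renewability steps can be sketched as follows; the reliability measure, component count and thresholding against a population mean are illustrative assumptions, not the exact HDS construction.

      # A minimal helper-data-style sketch: binarize face features against a
      # population mean and keep only the most reliable components.
      import numpy as np

      rng = np.random.default_rng(4)
      d, n_enroll, k = 64, 6, 32
      pop_mean = np.zeros(d)                   # population mean (assumed known)
      user = rng.normal(size=d)                # this user's true feature vector
      meas = user + rng.normal(scale=0.3, size=(n_enroll, d))  # enrollment scans

      mu, sd = meas.mean(axis=0), meas.std(axis=0)
      reliability = np.abs(mu - pop_mean) / (sd + 1e-9)
      helper = np.sort(np.argsort(reliability)[-k:])   # helper data: indices
      template = (mu[helper] > pop_mean[helper]).astype(int)

      # Verification: a fresh noisy scan binarized on the same components.
      probe = user + rng.normal(scale=0.3, size=d)
      probe_bits = (probe[helper] > pop_mean[helper]).astype(int)
      print("Hamming distance:", int(np.sum(template != probe_bits)), "of", k)

    Renewability follows because a different (for example, randomly constrained) choice of components, or an additional random code offset, yields a new template from the same face data.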

  3. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
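
    The added modelling step can be illustrated with a deliberately crude one-dimensional stand-in, where the learned IMU error model is just a scalar accelerometer bias adapted from the innovations; the dynamics, noise levels and adaptation gain are illustrative assumptions, not the paper's learned model.

      # Kalman predict -> update, plus a modelling step that learns an IMU bias.
      import numpy as np

      rng = np.random.default_rng(5)
      dt, n, true_bias = 0.1, 2000, 0.4
      F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity model
      B = np.array([0.5 * dt**2, dt])         # how acceleration enters the state
      Q, R = np.eye(2) * 1e-4, 0.5            # process / GNSS noise variances

      x, P, bias = np.zeros(2), np.eye(2), 0.0
      truth = np.zeros(2)
      for _ in range(n):
          a = rng.normal(0.0, 0.2)            # true acceleration
          truth = F @ truth + B * a
          a_imu = a + true_bias               # IMU reading carries a bias
          z = truth[0] + rng.normal(0.0, np.sqrt(R))   # GNSS position fix

          x = F @ x + B * (a_imu - bias)      # predict with bias-corrected IMU
          P = F @ P @ F.T + Q
          y = z - x[0]                        # innovation
          K = P[:, 0] / (P[0, 0] + R)         # Kalman gain for H = [1, 0]
          x = x + K * y
          P = P - np.outer(K, P[0, :])
          bias -= 0.02 * y                    # modelling step: adapt IMU bias
      print(f"estimated accelerometer bias: {bias:.2f} (true value {true_bias})")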

  4. A supramolecular approach to fabricate highly emissive smart materials

    PubMed Central

    Liu, Kai; Yao, Yuxing; Kang, Yuetong; Liu, Yu; Han, Yuchun; Wang, Yilin; Li, Zhibo; Zhang, Xi

    2013-01-01

    Aromatic chromophores, for example perylene diimides (PDIs), are well known for their desirable absorption and emission properties. However, their stacking nature hinders the exploitation of these properties and further applications. To fabricate emissive aggregates or solid-state materials, it has been common practice to decrease the degree of stacking of PDIs by incorporating substituents into the parent aromatic ring. However, such practice often involves difficult organic synthesis with multiple steps. A supramolecular approach is established here to fabricate highly fluorescent and responsive soft materials, which greatly decreases the number of required synthetic steps and also allows for a system with switchable photophysical properties. The highly fluorescent smart material exhibits great adaptivity and can be used as a supramolecular sensor for the rapid detection of spermine with high sensitivity and selectivity, which is crucial for the early diagnosis of malignant tumors. PMID:23917964

  5. Multilayer on-chip stacked Fresnel zone plates: Hard x-ray fabrication and soft x-ray simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kenan; Wojcik, Michael J.; Ocola, Leonidas E.

    2015-11-01

    Fresnel zone plates are widely used as x-ray nanofocusing optics. To achieve high spatial resolution combined with good focusing efficiency, high aspect ratio nanolithography is required, and one way to achieve that is through multiple e-beam lithography writing steps that build up the structure by on-chip stacking. A two-step writing process producing 50 nm finest zone width at a zone thickness of 1.14 µm for possible hard x-ray applications is shown here. The authors also consider in simulations the case of soft x-ray focusing where the zone thickness might exceed the depth of focus. In this case, the authors compare on-chip stacking with, and without, adjustment of zone positions and show that the offset zones lead to improved focusing efficiency. The simulations were carried out using a multislice propagation method employing Hankel transforms.

  6. Evidence integration in model-based tree search

    PubMed Central

    Solway, Alec; Botvinick, Matthew M.

    2015-01-01

    Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
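
    The accumulation account sketched above is straightforward to simulate; the drift rates, noise level and relative stopping threshold below are illustrative assumptions rather than fitted parameters.

      # A minimal race-to-threshold evidence accumulator for several options:
      # noisy evidence accrues until the leader beats the runner-up by a margin.
      import numpy as np

      def race(drifts, threshold=3.0, noise=1.0, dt=0.01, rng=None):
          rng = rng or np.random.default_rng()
          x = np.zeros(len(drifts))
          t = 0.0
          while True:
              x += np.asarray(drifts) * dt + rng.normal(
                  scale=noise * np.sqrt(dt), size=len(x))
              t += dt
              top = np.sort(x)
              if top[-1] - top[-2] >= threshold:   # leader clears the rest
                  return int(np.argmax(x)), t

      choices = [race([1.0, 0.6, 0.2], rng=np.random.default_rng(i))[0]
                 for i in range(200)]
      print("P(choose best option) ~", np.mean(np.array(choices) == 0))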

  7. Adaptive evolution of complex innovations through stepwise metabolic niche expansion.

    PubMed

    Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs

    2016-05-20

    A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.

  9. High efficiency x-ray nanofocusing by the blazed stacking of binary zone plates

    NASA Astrophysics Data System (ADS)

    Mohacsi, I.; Karvinen, P.; Vartiainen, I.; Diaz, A.; Somogyi, A.; Kewish, C. M.; Mercere, P.; David, C.

    2013-09-01

    The focusing efficiency of binary Fresnel zone plate lenses is fundamentally limited and higher efficiency requires a multi-step lens profile. To overcome the manufacturing problems of high resolution and high efficiency multistep zone plates, we investigate the concept of stacking two different binary zone plates in each other's optical near-field. We use a coarse zone plate with π phase shift and a double density fine zone plate with π/2 phase shift to produce an effective 4-step profile. Using a compact experimental setup with piezo actuators for alignment, we demonstrated 47.1% focusing efficiency at 6.5 keV using a pair of zone plates with 500 μm diameter and 200 nm smallest zone width. Furthermore, we present a spatially resolved characterization method using multiple diffraction orders to identify manufacturing errors, alignment errors and pattern distortions and their effect on diffraction efficiency.
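
    For context, the standard scalar-diffraction result (stated here as background, not taken from this paper) for an ideal N-level phase zone plate gives a first-order efficiency of

      \eta_N = \left( \frac{\sin(\pi/N)}{\pi/N} \right)^{2},

    so \eta_2 = 4/\pi^2 \approx 40.5\% for a binary profile and \eta_4 = 8/\pi^2 \approx 81.1\% for a four-level profile. The measured 47.1\% of the stacked pair thus lies between the binary limit and the ideal four-level value, consistent with the manufacturing and alignment errors the authors characterize.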

  10. Single molecule targeted sequencing for cancer gene mutation detection.

    PubMed

    Gao, Yan; Deng, Liwei; Yan, Qin; Gao, Yongqian; Wu, Zengding; Cai, Jinsen; Ji, Daorui; Li, Gailing; Wu, Ping; Jin, Huan; Zhao, Luyang; Liu, Song; Ge, Liangjin; Deem, Michael W; He, Jiankui

    2016-05-19

    With the rapid decline in cost of sequencing, it is now affordable to examine multiple genes in a single disease-targeted clinical test using next generation sequencing. Current targeted sequencing methods require a separate step of targeted capture enrichment during sample preparation before sequencing. Although there are fast sample preparation methods available on the market, the library preparation process is still relatively complicated for physicians to use routinely. Here, we introduced an amplification-free Single Molecule Targeted Sequencing (SMTS) technology, which combined targeted capture and sequencing in one step. We demonstrated that this technology can detect low-frequency mutations using artificially synthesized DNA samples. SMTS has several potential advantages, including simple sample preparation, so that no biases or errors are introduced by a PCR reaction. SMTS has the potential to be an easy and quick sequencing technology for clinical diagnosis such as cancer gene mutation detection, infectious disease detection, inherited condition screening and noninvasive prenatal diagnosis.

  11. An introduction to data reduction: space-group determination, scaling and intensity statistics.

    PubMed

    Evans, Philip R

    2011-04-01

    This paper presents an overview of how to run the CCP4 programs for data reduction (SCALA, POINTLESS and CTRUNCATE) through the CCP4 graphical interface ccp4i and points out some issues that need to be considered, together with a few examples. It covers determination of the point-group symmetry of the diffraction data (the Laue group), which is required for the subsequent scaling step, examination of systematic absences, which in many cases will allow inference of the space group, putting multiple data sets on a common indexing system when there are alternatives, the scaling step itself, which produces a large set of data-quality indicators, estimation of |F| from intensity and finally examination of intensity statistics to detect crystal pathologies such as twinning. An appendix outlines the scoring schemes used by the program POINTLESS to assign probabilities to possible Laue and space groups.

  13. What Hansel and Gretel’s Trail Teach Us about Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne Simpson; Troy Hiltbrand

    Background At Idaho National Laboratory (INL), we are on the cusp of a significant era of change. INL is the lead Department of Energy Nuclear Research and Development Laboratory, focused on finding innovative solutions to the nation’s energy challenges. Not only has the Laboratory grown at an unprecedented rate over the last five years, but it also has a significant segment of its workforce that is ready for retirement. Over the next 10 years, it is anticipated that upwards of 60% of the current workforce at INL will be eligible for retirement. Since the Laboratory is highly dependent on the intellectual capabilities of its scientists and engineers and their efforts to ensure the future of the nation’s energy portfolio, this attrition of resources has the potential of seriously impacting the ability of the Laboratory to sustain itself and the growth that it has achieved in the past years. Similar to Germany in the early nineteenth century, we face the challenge of our self-identity and must find a way to solidify our legacy to propel us into the future. Approach As the Brothers Grimm set out to collect their fairy tales, they focused on gathering information from the people that were most knowledgeable in the subject. For them, it was the peasants, with their rich knowledge of the region’s sub-culture of folk lore that was passed down from generation to generation around the evening fire. As we look to capture this tacit knowledge, it is requisite that we also seek this information from those individuals that are most versed in it. In our case, it is the scientists and researchers who have dedicated their lives to providing the nation with nuclear energy. This information comes in many forms, both digital and non-digital. Some of this information still resides in the minds of these scientists and researchers who are close to retirement, or who have already retired. Once the information has been collected, it has to be sorted through to identify where the “shining stones” can be found. The quantity of this information makes it improbable for an individual or set of individuals to sort through it and pick out those ideas which are most important. To accomplish both the step of information capture and classification, modern advancements in technology give us the tools that we need to successfully capture this tacit knowledge. To assist in this process, we have evaluated multiple tools and methods that will help us to unlock the power of tacit knowledge. Tools The first challenge that stands in the way of success is the capture of information. More than 50 years of nuclear research is captured in log books, microfiche, and other non-digital formats. Transforming this information from its current form into a format that can “shine” requires a number of different tools. These tools fall into three major categories: Information Capture, Content Retrieval, and Information Classification. Information Capture The first step is to capture the information from a myriad of sources. With knowledge existing in multiple formats, this step requires multiple approaches to be successful. Some of the sources that require consideration include handwritten documents, typed documents, microfiche, images, audio and video feeds, and electronic images. To make this step feasible for a large body of knowledge requires automation.

  14. 49 CFR 399.207 - Truck and truck-tractor access requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...

  15. 49 CFR 399.207 - Truck and truck-tractor access requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...

  16. 49 CFR 399.207 - Truck and truck-tractor access requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...

  17. 49 CFR 399.207 - Truck and truck-tractor access requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...

  18. Multiple stage miniature stepping motor

    DOEpatents

    Niven, William A.; Shikany, S. David; Shira, Michael L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.

  19. Fast matrix multiplication and its algebraic neighbourhood

    NASA Astrophysics Data System (ADS)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at a stalemate for almost a decade; then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion: even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
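
    For illustration, the 2 × 2 block scheme behind Strassen's exponent log2(7) ≈ 2.807 fits in a few lines. The following is a minimal sketch in Python, assuming square matrices whose size is a power of two; it is meant only to make the "seven products instead of eight" idea concrete, not to represent the feasible-range algorithms the survey focuses on.

      import numpy as np

      def strassen(A, B, cutoff=64):
          """Multiply square matrices (size a power of two) with 7 recursive products."""
          n = A.shape[0]
          if n <= cutoff:            # below the cutoff, classical multiplication wins
              return A @ B
          k = n // 2
          A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
          B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
          M1 = strassen(A11 + A22, B11 + B22, cutoff)
          M2 = strassen(A21 + A22, B11, cutoff)
          M3 = strassen(A11, B12 - B22, cutoff)
          M4 = strassen(A22, B21 - B11, cutoff)
          M5 = strassen(A11 + A12, B22, cutoff)
          M6 = strassen(A21 - A11, B11 + B12, cutoff)
          M7 = strassen(A12 - A22, B21 + B22, cutoff)
          C = np.empty_like(A)
          C[:k, :k] = M1 + M4 - M5 + M7   # reassemble the four result blocks
          C[:k, k:] = M3 + M5
          C[k:, :k] = M2 + M4
          C[k:, k:] = M1 - M2 + M3 + M6
          return C

      A, B = np.random.rand(256, 256), np.random.rand(256, 256)
      assert np.allclose(strassen(A, B), A @ B)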

  20. Rational design of capillary-driven flows for paper-based microfluidics.

    PubMed

    Elizalde, Emanuel; Urteaga, Raúl; Berli, Claudio L A

    2015-05-21

    The design of paper-based assays that integrate passive pumping requires a precise programming of the fluid transport, which has to be encoded in the geometrical shape of the substrate. This requirement becomes critical in multiple-step processes, where fluid handling must be accurate and reproducible for each operation. The present work theoretically investigates the capillary imbibition in paper-like substrates to better understand fluid transport in terms of the macroscopic geometry of the flow domain. A fluid dynamic model was derived for homogeneous porous substrates with arbitrary cross-sectional shapes, which allows one to determine the cross-sectional profile required for a prescribed fluid velocity or mass transport rate. An extension of the model to slit microchannels is also demonstrated. Calculations were validated by experiments with prototypes fabricated in our lab. The proposed method constitutes a valuable tool for the rational design of paper-based assays.
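
    As a point of reference for the model above, the classical Lucas-Washburn limit for a uniform strip (constant cross-section) can be computed directly; the paper's contribution is generalizing beyond this case to arbitrary cross-sectional profiles. A minimal sketch, with parameter values that are illustrative assumptions rather than values from the paper:

      import math

      def imbibition_front(t, gamma=0.072, r=10e-6, theta=0.0, mu=1.0e-3):
          """Front position l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)).

          gamma: surface tension (N/m), r: effective pore radius (m),
          theta: contact angle (rad), mu: dynamic viscosity (Pa*s).
          All defaults are illustrative assumptions.
          """
          return math.sqrt(gamma * r * math.cos(theta) * t / (2.0 * mu))

      for t in (1.0, 10.0, 60.0):  # seconds
          print(f"t = {t:5.1f} s -> front at {1e3 * imbibition_front(t):.1f} mm")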

  1. The role of template superhelicity in the initiation of bacteriophage lambda DNA replication.

    PubMed Central

    Alfano, C; McMacken, R

    1988-01-01

    The prepriming steps in the initiation of bacteriophage lambda DNA replication depend on the action of the lambda O and P proteins and on the DnaB helicase, single-stranded DNA binding protein (SSB), and DnaJ and DnaK heat shock proteins of the E. coli host. The binding of multiple copies of the lambda O protein to the phage replication origin (ori lambda) initiates the ordered assembly of a series of nucleoprotein structures that form at ori lambda prior to DNA unwinding, priming and DNA synthesis steps. Since the initiation of lambda DNA replication is known to occur only on supercoiled templates in vivo and in vitro, we examined how the early steps in lambda DNA replication are influenced by superhelical tension. All initiation complexes formed prior to helicase-mediated DNA-unwinding form with high efficiency on relaxed ori lambda DNA. Nonetheless, the DNA templates in these structures must be negatively supertwisted before they can be replicated. Once DNA helicase unwinding is initiated at ori lambda, however, later steps in lambda DNA replication proceed efficiently in the absence of superhelical tension. We conclude that supercoiling is required during the initiation of lambda DNA replication to facilitate entry of a DNA helicase, presumably the DnaB protein, between the DNA strands. Images PMID:2847118

  2. QUICR-learning for Multi-Agent Coordination

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2006-01-01

    Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
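
    The core idea, an immediate counterfactual reward per agent, can be sketched in a stateless toy version. The congestion reward G and the "remove agent i" baseline below are illustrative assumptions in the spirit of difference rewards, not the paper's exact formulation:

      import random
      from collections import defaultdict

      N_AGENTS, N_ACTIONS = 10, 3          # e.g. three routes in a toy congestion task

      def G(joint_action):
          """Toy system reward: congestion penalty grows with per-route load."""
          loads = [joint_action.count(a) for a in range(N_ACTIONS)]
          return -sum(load * load for load in loads)

      def counterfactual_reward(joint_action, i):
          """Agent i's contribution: G with i present minus G with i removed."""
          return G(joint_action) - G(joint_action[:i] + joint_action[i + 1:])

      Q = [defaultdict(float) for _ in range(N_AGENTS)]   # one Q-table per agent
      alpha, eps = 0.1, 0.1
      for episode in range(2000):
          # epsilon-greedy action selection per agent
          actions = [random.randrange(N_ACTIONS) if random.random() < eps
                     else max(range(N_ACTIONS), key=Q[i].__getitem__)
                     for i in range(N_AGENTS)]
          for i, a in enumerate(actions):   # stateless single-step Q update
              r = counterfactual_reward(actions, i)
              Q[i][a] += alpha * (r - Q[i][a])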

  3. Walk Ratio (Step Length/Cadence) as a Summary Index of Neuromotor Control of Gait: Application to Multiple Sclerosis

    ERIC Educational Resources Information Center

    Rota, Viviana; Perucca, Laura; Simone, Anna; Tesio, Luigi

    2011-01-01

    In healthy adults, the step length/cadence ratio [walk ratio (WR) in mm/(steps/min) and normalized for height] is known to be constant around 6.5 mm/(step/min). It is a speed-independent index of the overall neuromotor gait control, in as much as it reflects energy expenditure, balance, between-step variability, and attentional demand. The speed…
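
    The computation itself is a one-liner; a minimal sketch with illustrative values near the reported constant:

      def walk_ratio(step_length_mm: float, cadence_steps_per_min: float) -> float:
          """WR in mm/(steps/min): step length divided by cadence."""
          return step_length_mm / cadence_steps_per_min

      # A typical healthy adult: ~700 mm steps at ~110 steps/min -> ~6.4
      print(round(walk_ratio(700, 110), 2))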

  4. Community Digital Library Requirements for the Southern California Earthquake Center Community Modeling Environment (SCEC/CME)

    NASA Astrophysics Data System (ADS)

    Moore, R.; Faerman, M.; Minster, J.; Day, S. M.; Ely, G.

    2003-12-01

    A community digital library provides support for ingestion, organization, description, preservation, and access of digital entities. The technologies that traditionally provide these capabilities are digital libraries (ingestion, organization, description), persistent archives (preservation), and data grids (access). We present a design for the SCEC community digital library that incorporates aspects of all three systems. Multiple groups have created integrated environments that sustain large-scale scientific data collections. By examining these projects, the following stages of implementation can be identified: (1) definition of semantic terms to associate with relevant information, including uniform content descriptors that describe physical quantities relevant to the scientific discipline and concept spaces that define how the uniform content descriptors are logically related; (2) organization of digital entities into logical collections that make it simple to browse and manage related material; (3) definition of services that are used to access and manipulate material in the collection; and (4) creation of a preservation environment for the long-term management of the collection. Each community is faced with heterogeneity that is introduced when data are distributed across multiple sites, when multiple sets of collection semantics are used, or when multiple scientific sub-disciplines are federated. We will present the relevant standards that simplify the implementation of the SCEC community library, the resource requirements for different types of data sets that drive the implementation, and the digital library processes that the SCEC community library will support. The SCEC community library can be viewed as the set of processing steps that are required to build the appropriate SCEC reference data sets (SCEC-approved encoding format, SCEC-approved descriptive metadata, SCEC-approved collection organization, and SCEC-managed storage location). Each digital entity that is ingested into the SCEC community library is processed and validated for conformance to SCEC standards. These steps generate provenance, descriptive, administrative, structural, and behavioral metadata. Using data grid technology, the descriptive metadata can be registered into a logical name space that is controlled and managed by the SCEC digital library. A version of the SCEC community digital library is being implemented in the Storage Resource Broker (SRB). The SRB system provides almost all the features enumerated above. The peer-to-peer federation of metadata catalogs is planned for release in September 2003. The SRB system is in production use in multiple projects, from high-energy physics, to astronomy, to earth systems science, to bio-informatics. The SCEC community library will be based on the definition of standard metadata attributes, the creation of logical collections within the SRB, the creation of access services, and the demonstration of a preservation environment. The use of the SRB for the SCEC digital library will sustain the expected collection size and collection capabilities.

  5. Inkjet Deposition of Layer by Layer Assembled Films

    PubMed Central

    Andres, Christine M.; Kotov, Nicholas A.

    2010-01-01

    Layer-by-layer assembly (LBL) can create advanced composites with exceptional properties unavailable by other means, but the laborious deposition process and multiple dipping cycles hamper their utilization in microtechnologies and electronics. Multiple rinse steps provide both structural control and thermodynamic stability to LBL multilayers but they significantly limit their practical applications and contribute significantly to the processing time and waste. Here we demonstrate that by employing inkjet technology one can deliver the necessary quantities of LBL components required for film build-up without excess, eliminating the need for repetitive rinsing steps. This feature differentiates this approach from all other recognized LBL modalities. Using a model system of negatively charged gold nanoparticles and positively charged poly(diallyldimethylammonium) chloride, the material stability, nanoscale control over thickness and particle coverage offered by the inkjet LBL technique are shown to be equal or better than the multilayers made with traditional dipping cycles. The opportunity for fast deposition of complex metallic patterns using a simple inkjet printer was also shown. The additive nature of LBL deposition based on the formation of insoluble nanoparticle-polyelectrolyte complexes of various compositions provides an excellent opportunity for versatile, multi-component, and non-contact patterning for the simple production of stratified patterns that are much needed in advanced devices. PMID:20863114

  6. Rapid and Sensitive Isothermal Detection of Nucleic-acid Sequence by Multiple Cross Displacement Amplification.

    PubMed

    Wang, Yi; Wang, Yan; Ma, Ai-Jing; Li, Dong-Xun; Luo, Li-Juan; Liu, Dong-Xin; Jin, Dong; Liu, Kai; Ye, Chang-Yun

    2015-07-08

    We have devised a novel amplification strategy based on an isothermal strand-displacement polymerization reaction, which we termed multiple cross displacement amplification (MCDA). The approach employs a set of ten specially designed primers spanning ten distinct regions of the target sequence and proceeds at a constant temperature (61-65 °C). At the assay temperature, the double-stranded DNAs are in a dynamic primer-template hybridization environment, so the high concentration of primers anneals to the template strands without a denaturing step to initiate synthesis. In the subsequent isothermal amplification step, a series of primer binding and extension events yields several single-stranded DNAs and single-stranded, single stem-loop DNA structures. These DNA products then drive the strand-displacement reaction into exponential amplification. Three mainstream methods, colorimetric indicators, agarose gel electrophoresis, and real-time turbidity, were selected for monitoring the MCDA reaction. Moreover, the practical application of the MCDA assay was successfully evaluated by detecting the target pathogen nucleic acid in pork samples, offering the advantages of quick results, modest equipment requirements, ease of operation, and high specificity and sensitivity. Here we expound the basic MCDA mechanism and also provide details on an alternative to the MCDA technique (Single-MCDA assay, S-MCDA).

  7. Inverting the planning gradient: adjustment of grasps to late segments of multi-step object manipulations.

    PubMed

    Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver

    2017-05-01

    When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step, if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.

  8. W/O/W multiple emulsions with diclofenac sodium.

    PubMed

    Lindenstruth, Kai; Müller, Bernd W

    2004-11-01

    The dispersed oil droplets of W/O/W multiple emulsions contain small water droplets in which drugs can be incorporated, but the structure of these emulsions is also a source of potential instability. Because the middle oil phase acts as a 'semipermeable' membrane, water can pass across the oil phase. Moreover, since the emulsions are produced in a two-step production process, not only the leakage of encapsulated drug molecules out of the inner water phase during storage but also a production-induced reduction of the encapsulation rate must be considered. The aim of this study was to ascertain how far the production-induced reduction of the encapsulation rate relates to the size of the inner water droplets and to evaluate the relevance of multiple emulsions as a drug carrier for diclofenac sodium. Multiple emulsions were therefore produced according to a central composite design. During the second production step it was observed that the parameters pressure and temperature influence the size of the oil droplets in the W/O/W multiple emulsions. Further experiments with different W/O emulsions resulted in W/O/W multiple emulsions with different encapsulation rates of diclofenac sodium, owing to the different sizes of the inner water droplets obtained in the first production step.

  9. Multiple-Step Injection Molding for Fibrin-Based Tissue-Engineered Heart Valves

    PubMed Central

    Weber, Miriam; Gonzalez de Torre, Israel; Moreira, Ricardo; Frese, Julia; Oedekoven, Caroline; Alonso, Matilde; Rodriguez Cabello, Carlos J.

    2015-01-01

    Heart valves are elaborate and highly heterogeneous structures of the circulatory system. Despite the well accepted relationship between the structural and mechanical anisotropy and the optimal function of the valves, most approaches to create tissue-engineered heart valves (TEHVs) do not try to mimic this complexity and rely on one homogeneous combination of cells and materials for the whole construct. The aim of this study was to establish an easy and versatile method to introduce spatial diversity into a heart valve fibrin scaffold. We developed a multiple-step injection molding process that enables the fabrication of TEHVs with heterogeneous composition (cell/scaffold material) of wall and leaflets without the need of gluing or suturing components together, with the leaflets firmly connected to the wall. The integrity and functionality of the valves were proved either by opening/closing cycles in a bioreactor (proof of principle, without cells) or by continuous stimulation over 2 weeks. We demonstrated the potential of the method by the two-step molding of the wall and the leaflets containing different cell lines. Immunohistology after stimulation confirmed tissue formation and demonstrated the localization of the different cell types. Furthermore, we showed the proof-of-principle fabrication of valves using different materials for wall (fibrin) and leaflets (hybrid gel of fibrin/elastin-like recombinamer) and with layered leaflets. The method is easy to implement, does not require special facilities, and can be reproduced in any tissue-engineering lab. While it has been demonstrated here with fibrin, it can easily be extended to other hydrogels. PMID:25654448

  10. Multiple-Step Injection Molding for Fibrin-Based Tissue-Engineered Heart Valves.

    PubMed

    Weber, Miriam; Gonzalez de Torre, Israel; Moreira, Ricardo; Frese, Julia; Oedekoven, Caroline; Alonso, Matilde; Rodriguez Cabello, Carlos J; Jockenhoevel, Stefan; Mela, Petra

    2015-08-01

    Heart valves are elaborate and highly heterogeneous structures of the circulatory system. Despite the well accepted relationship between the structural and mechanical anisotropy and the optimal function of the valves, most approaches to create tissue-engineered heart valves (TEHVs) do not try to mimic this complexity and rely on one homogeneous combination of cells and materials for the whole construct. The aim of this study was to establish an easy and versatile method to introduce spatial diversity into a heart valve fibrin scaffold. We developed a multiple-step injection molding process that enables the fabrication of TEHVs with heterogeneous composition (cell/scaffold material) of wall and leaflets without the need of gluing or suturing components together, with the leaflets firmly connected to the wall. The integrity and functionality of the valves were proved either by opening/closing cycles in a bioreactor (proof of principle, without cells) or by continuous stimulation over 2 weeks. We demonstrated the potential of the method by the two-step molding of the wall and the leaflets containing different cell lines. Immunohistology after stimulation confirmed tissue formation and demonstrated the localization of the different cell types. Furthermore, we showed the proof-of-principle fabrication of valves using different materials for wall (fibrin) and leaflets (hybrid gel of fibrin/elastin-like recombinamer) and with layered leaflets. The method is easy to implement, does not require special facilities, and can be reproduced in any tissue-engineering lab. While it has been demonstrated here with fibrin, it can easily be extended to other hydrogels.

  11. An evaluation of a reagentless method for the determination of total mercury in aquatic life

    USGS Publications Warehouse

    Haynes, Sekeenia; Gragg, Richard D.; Johnson, Elijah; Robinson, Larry; Orazio, Carl E.

    2006-01-01

    Multiple treatment (i.e., drying, chemical digestion, and oxidation) steps are often required during preparation of biological matrices for quantitative analysis of mercury; these multiple steps could potentially lead to systematic errors and poor recovery of the analyte. In this study, the Direct Mercury Analyzer (Milestone Inc., Monroe, CT) was utilized to measure total mercury in fish tissue by integrating the steps of drying, sample combustion, and gold sequestration with subsequent detection by atomic absorption spectrometry. We also evaluated the differences between the mercury concentrations found in samples that were homogenized and samples with no preparation. These results were confirmed with cold vapor atomic absorbance and fluorescence spectrometric methods of analysis. Finally, total mercury in wild-captured largemouth bass (n = 20) was assessed using the Direct Mercury Analyzer to examine internal variability between mercury concentrations in muscle, liver, and brain organs. Direct analysis of total mercury measured in muscle tissue was strongly correlated with muscle tissue that was homogenized before analysis (r = 0.81, p < 0.0001). Additionally, results using this integrated method compared favorably (p < 0.05) with conventional cold vapor spectrometry with atomic absorbance and fluorescence detection methods. Mercury concentrations in brain were significantly lower than concentrations in muscle (p < 0.001) and liver (p < 0.05) tissues. This integrated method can measure a wide range of mercury concentrations (0-500 µg) using small sample sizes. Total mercury measurements in this study are comparable to the methods (cold vapor) commonly used for total mercury analysis and are devoid of laborious sample preparation and expensive hazardous waste. © Springer 2006.

  12. Engine out of the Chassis: Cell-Free Protein Synthesis and its Uses

    PubMed Central

    Rosenblum, Gabriel; Cooperman, Barry S.

    2013-01-01

    The translation machinery is the engine of life. Extracting the cytoplasmic milieu from a cell affords a lysate capable of producing proteins in concentrations reaching tens of micromolar. Such lysates, derivable from a variety of cells, allow the facile addition and subtraction of components that are directly or indirectly related to the translation machinery and/or the over-expressed protein. The flexible nature of such cell-free expression systems, when coupled with high throughput monitoring, can be especially suitable for protein engineering studies, allowing one to bypass multiple steps typically required using conventional in vivo protein expression. PMID:24161673

  13. COMPACT CASCADE IMPACTOR

    DOEpatents

    Lippmann, M.

    1964-04-01

    A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition, the device is capable of collecting a series of different samples on a pair of slides, so that less time is required for changing slides. Other features of the device are its compactness and its ruggedness, making it useful under field conditions. Essentially, the unit consists of a main body with a series of transverse jets discharging on a pair of parallel, spaced glass plates. The plates can be moved incrementally in steps to obtain the multiple samples. (AEC)

  14. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.
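
    The write/read principle can be illustrated with a toy multilevel encoding that quantizes a resistance range into 2**b levels; the resistance bounds and bit width below are illustrative assumptions, not values from the patent:

      R_LO, R_HI, BITS = 1e3, 16e3, 2                  # ohms; 2 bits -> 4 levels
      LEVELS = [R_LO + i * (R_HI - R_LO) / (2**BITS - 1) for i in range(2**BITS)]

      def write_level(value: int) -> float:
          """'Write' step: pick the target resistance for a b-bit value."""
          return LEVELS[value]

      def read_level(resistance: float) -> int:
          """'Read' step: recover the stored value as the nearest level."""
          return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - resistance))

      stored = write_level(0b10)
      assert read_level(stored * 1.05) == 0b10         # tolerates small drift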

  15. Do You Know What I Feel? A First Step towards a Physiological Measure of the Subjective Well-Being of Persons with Profound Intellectual and Multiple Disabilities

    ERIC Educational Resources Information Center

    Vos, Pieter; De Cock, Paul; Petry, Katja; Van Den Noortgate, Wim; Maes, Bea

    2010-01-01

    Background: Because of limited communicative skills, measuring subjective well-being in people with profound intellectual and multiple disabilities is not straightforward. As a first step towards a non-interpretive measure of subjective well-being, we explored how the respiratory, cardiovascular and electrodermal response systems were associated…

  16. STEP Experiment Requirements

    NASA Technical Reports Server (NTRS)

    Brumfield, M. L. (Compiler)

    1984-01-01

    A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concepts and requirements. A feasibility and preliminary definition study was conducted, and the preliminary definition of STEP capabilities, experiment concepts, and expected requirements for support services is presented, based on a detailed review of potential experiment requirements. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.

  17. Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2016-12-01

    A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.

  18. Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.

    PubMed

    Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad

    2010-10-01

    Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples including skin lesions, tissues from internal organs, blood, and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 µg/ml. The extracted CMLV DNA was proven suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared to spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  19. Magnetic timing valves for fluid control in paper-based microfluidics.

    PubMed

    Li, Xiao; Zwanenburg, Philip; Liu, Xinyu

    2013-07-07

    Multi-step analytical tests, such as an enzyme-linked immunosorbent assay (ELISA), require delivery of multiple fluids into a reaction zone and counting the incubation time at different steps. This paper presents a new type of paper-based magnetic valves that can count the time and turn on or off a fluidic flow accordingly, enabling timed fluid control in paper-based microfluidics. The timing capability of these valves is realized using a paper timing channel with an ionic resistor, which can detect the event of a solution flowing through the resistor and trigger an electromagnet (through a simple circuit) to open or close a paper cantilever valve. Based on this principle, we developed normally-open and normally-closed valves with a timing period up to 30.3 ± 2.1 min (sufficient for an ELISA on paper-based platforms). Using the normally-open valve, we performed an enzyme-based colorimetric reaction commonly used for signal readout of ELISAs, which requires a timed delivery of an enzyme substrate to a reaction zone. This design adds a new fluid-control component to the tool set for developing paper-based microfluidic devices, and has the potential to improve the user-friendliness of these devices.

  20. Preparation of holo- and malonyl-[acyl-carrier-protein] in a manner suitable for analog development.

    PubMed

    Marcella, Aaron M; Jing, Fuyuan; Barb, Adam W

    2015-11-01

    The fatty acid biosynthetic pathway generates highly reduced carbon based molecules. For this reason fatty acid synthesis is a target of pathway engineering to produce novel specialty or commodity chemicals using renewable techniques to supplant molecules currently derived from petroleum. Malonyl-[acyl carrier protein] (malonyl-ACP) is a key metabolite in the fatty acid pathway and donates two carbon units to the growing fatty acid chain during each step of biosynthesis. Attempts to test engineered fatty acid biosynthesis enzymes in vitro will require malonyl-ACP or malonyl-ACP analogs. Malonyl-ACP is challenging to prepare due to the instability of the carboxylate leaving group and the multiple steps of post-translational modification required to activate ACP. Here we report the expression and purification of holo- and malonyl-ACP from Escherichia coli with high yields (>15 mg per L of expression). The malonyl-ACP is efficiently recognized by the E. coli keto-acyl synthase enzyme, FabH. A FabH assay using malonyl-ACP and a coumarin-based fluorescent reagent is described that provides a high throughput alternative to reported radioactive assays. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Two-step emulsification process for water-in-oil-in-water multiple emulsions stabilized by lamellar liquid crystals.

    PubMed

    Ito, Toshifumi; Tsuji, Yukitaka; Aramaki, Kenji; Tonooka, Noriaki

    2012-01-01

    Multiple emulsions, also called complex emulsions or multiphase emulsions, include water-in-oil-in-water (W/O/W)-type and oil-in-water-in-oil (O/W/O)-type emulsions. W/O/W-type multiple emulsions, obtained by utilizing lamellar liquid crystal with a layer structure showing optical anisotropy at the periphery of emulsion droplets, are superior in stability to O/W/O-type emulsions. In this study, we investigated a two-step emulsification process for a W/O/W-type multiple emulsion utilizing liquid crystal emulsification. We found that a W/O/W-type multiple emulsion containing lamellar liquid crystal can be prepared by mixing a W/O-type emulsion (prepared by primary emulsification) with a lamellar liquid crystal obtained from poly(oxyethylene) stearyl ether, cetyl alcohol, and water, and by dispersing and emulsifying the mixture in an outer aqueous phase. When poly(oxyethylene) stearyl ether and cetyl alcohol are each used in a given amount and the amount of water added is varied from 0 to 15 g (total amount of emulsion, 100 g), a W/O/W-type multiple emulsion is efficiently prepared. When the W/O/W-type multiple emulsion was held in a thermostatic bath at 25°C, the droplet size distribution showed no change 0, 30, or 60 days after preparation. Moreover, the W/O/W-type multiple emulsion strongly encapsulated Uranine in the inner aqueous phase as compared with emulsions prepared by one-step emulsification.

  2. Laparoscopic Myomectomy for a Plethora of Submucous Myomas.

    PubMed

    Paul, P G; Paul, George; Radhika, K T; Bulusu, Saumya; Shintre, Hemant

    To demonstrate a laparoscopic myomectomy technique for the removal of multiple submucous myomas. A step-by-step demonstration of the surgical procedure (Canadian Task Force classification III-C). In cases of multiple submucous myomas, hysteroscopic resection might not be a viable option, especially when fertility preservation is required. It may cause significant damage to the endometrial surface, leading to the formation of endometrial synechiae [1]. The procedure is technically challenging and requires prolonged operating time owing to impaired visibility and the need for repeated specimen removal. This can lead to complications such as fluid overload and, rarely, air embolism [2]. Thus, laparoscopic myomectomy may be a better option in such cases [1]. A 30-year-old nulligravida presented with a 3-year history of heavy menstrual bleeding and dysmenorrhea. She had received no symptom relief with hormonal medications and magnetic resonance-guided focused ultrasound. On examination, she was anemic, and her uterus was enlarged to a 16-week gravid size. Ultrasonography revealed an intramural fundal myoma of 6 × 4.2 cm and numerous submucous myomas of 1 to 3.2 cm. During hysteroscopy, multiple submucous myomas of varying sizes ranging from type 0 to type 1 were seen. On laparoscopy, an incision was made on the uterine fundus with an ultrasonic device after injecting vasopressin (20 U in 200 mL dilution), and the fundal myoma was enucleated. The incision was then extended to open the endometrial cavity for the removal of the submucous myomas. Most of the myomas were removed with mechanical force, along with the minimal use of ultrasonic energy. A total of 46 myomas were removed, and the myometrium was closed in 2 layers. The duration of the surgery was 210 minutes, and estimated blood loss was 850 mL. The patient did not require blood transfusion but was advised to take hematinics. At a 6-month follow-up, the patient reported significant improvement in her symptoms. A repeat hysteroscopy revealed moderate synechiae in the midline and 2 small submucous myomas near the internal os. The synechiae were incised with hysteroscopic scissors, and the submucous myomas were resected with a bipolar resectoscope. The patient was advised to attempt conception after 2 months. Laparoscopic myomectomy is an alternative to hysteroscopic resection for multiple submucous myomas. A repeat hysteroscopy is useful for identifying any residual myomas and synechiae. Copyright © 2017 AAGL. Published by Elsevier Inc. All rights reserved.

  3. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  4. The first step in the development of text mining technology for cancer risk assessment: identifying and organizing scientific evidence in risk assessment literature

    PubMed Central

    Korhonen, Anna; Silins, Ilona; Sun, Lin; Stenius, Ulla

    2009-01-01

    Background One of the most neglected areas of biomedical Text Mining (TM) is the development of systems based on carefully assessed user needs. We have recently investigated the user needs of an important task yet to be tackled by TM: Cancer Risk Assessment (CRA). Here we take the first step towards the development of TM technology for the task: identifying and organizing the scientific evidence required for CRA in a taxonomy which is capable of supporting extensive data gathering from biomedical literature. Results The taxonomy is based on expert annotation of 1297 abstracts downloaded from relevant PubMed journals. It classifies 1742 unique keywords found in the corpus into 48 classes which specify core evidence required for CRA. We report promising results with inter-annotator agreement tests and automatic classification of PubMed abstracts to taxonomy classes. A simple user test is also reported in a near real-world CRA scenario, which demonstrates along with other evaluation that the resources we have built are well-defined, accurate, and applicable in practice. Conclusion We present our annotation guidelines and a tool which we have designed for expert annotation of PubMed abstracts. A corpus annotated for keywords and document relevance is also presented, along with the taxonomy which organizes the keywords into classes defining core evidence for CRA. As demonstrated by the evaluation, the materials we have constructed provide a good basis for classification of CRA literature along multiple dimensions. They can support current manual CRA as well as facilitate the development of an approach based on TM. We discuss extending the taxonomy further via manual and machine learning approaches and the subsequent steps required to develop TM technology for the needs of CRA. PMID:19772619
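
    A toy illustration of keyword-based classification of abstracts into taxonomy classes follows; the classes and keywords are invented placeholders, not the 48 CRA classes from the paper:

      # Placeholder taxonomy: class name -> keyword set (invented for illustration)
      TAXONOMY = {
          "carcinogenic activity": {"tumour", "tumor", "carcinoma", "neoplasm"},
          "mode of action": {"mutagenic", "dna damage", "oxidative stress"},
          "study type": {"in vivo", "in vitro", "cohort"},
      }

      def classify(abstract: str) -> set:
          """Return every taxonomy class whose keywords appear in the abstract."""
          text = abstract.lower()
          return {cls for cls, kws in TAXONOMY.items()
                  if any(kw in text for kw in kws)}

      print(classify("An in vitro assay showed DNA damage and tumor growth."))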

  5. AMTD: update of engineering specifications derived from science requirements for future UVOIR space telescopes

    NASA Astrophysics Data System (ADS)

    Stahl, H. Philip; Postman, Marc; Mosier, Gary; Smith, W. Scott; Blaurock, Carl; Ha, Kong; Stark, Christopher C.

    2014-08-01

    The Advance Mirror Technology Development (AMTD) project is in Phase 2 of a multiyear effort, initiated in FY12, to mature by at least a half TRL step six critical technologies required to enable 4 meter or larger UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND provide a high-performance low-cost low-risk system. To give the science community options, we are pursuing multiple technology paths. A key task is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicles and their mass and volume constraints. A key finding of this effort is that the science requires an 8 meter or larger aperture telescope.

  6. AMTD: Update of Engineering Specifications Derived from Science Requirements for Future UVOIR Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Postman, Marc; Mosier, Gary; Smith, W. Scott; Blaurock, Carl; Ha, Kong; Stark, Christopher C.

    2014-01-01

    The Advance Mirror Technology Development (AMTD) project is in Phase 2 of a multiyear effort, initiated in FY12, to mature by at least a half TRL step six critical technologies required to enable 4 meter or larger UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND provide a high-performance low-cost low-risk system. To give the science community options, we are pursuing multiple technology paths. A key task is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicles and their mass and volume constraints. A key finding of this effort is that the science requires an 8 meter or larger aperture telescope

  7. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which makes the comparison of two CADe systems a multiple-comparison problem. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both FWER and power.
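
    For comparison, Bonferroni and a Hochberg-style step-up procedure can be sketched directly on a family of p-values; the adjusted step-up method studied in the paper additionally uses the estimated correlations, which this sketch omits:

      def bonferroni(pvals, alpha=0.05):
          """Reject H_i iff p_i <= alpha / m."""
          m = len(pvals)
          return [p <= alpha / m for p in pvals]

      def hochberg_step_up(pvals, alpha=0.05):
          """Find the largest k with p_(k) <= alpha/(m - k + 1); reject H_(1..k)."""
          m = len(pvals)
          order = sorted(range(m), key=lambda i: pvals[i])
          k_star = 0
          for rank, i in enumerate(order, start=1):
              if pvals[i] <= alpha / (m - rank + 1):
                  k_star = rank
          reject = [False] * m
          for rank, i in enumerate(order, start=1):
              reject[i] = rank <= k_star
          return reject

      pvals = [0.001, 0.012, 0.021, 0.04]
      print(bonferroni(pvals))        # [True, True, False, False]
      print(hochberg_step_up(pvals))  # [True, True, True, True]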

  8. Fostering Autonomy through Syllabus Design: A Step-by-Step Guide for Success

    ERIC Educational Resources Information Center

    Ramírez Espinosa, Alexánder

    2016-01-01

    Promoting learner autonomy is relevant in the field of applied linguistics due to the multiple benefits it brings to the process of learning a new language. However, despite the vast array of research on how to foster autonomy in the language classroom, it is difficult to find step-by-step processes to design syllabi and curricula focused on the…

  9. Water requirements of the aluminum industry

    USGS Publications Warehouse

    Conklin, Howard L.

    1956-01-01

    Aluminum is unique among metals in the way it is obtained from its ore. The first step is to produce alumina, a white powder that bears no resemblance to the bauxite from which it is derived or to the metallic aluminum to which it is reduced by electrolytic action in a second step. Each step requires a complete plant facility, and the plants may be adjacent or separated by as much as the width of the North American continent. Field investigations of every alumina plant and reduction works in the United States were undertaken to determine the industry's water use. Detailed studies were made of process and plant layout so that a water balance could be made for each plant to determine not only the gross water intake but also an approximation of the consumptive use of water. Water requirements of alumina plants range from 0.28 to 1.10 gallons per pound of alumina; the average for the industry is 0.66 gallon. Water requirements of reduction works vary considerably more, ranging from 1.24 to 36.33 gallons per pound of aluminum, and average 14.62 gallons. All alumina plants in the United States derive alumina from bauxite by the Bayer process or by the Combination process, a modification of the Bayer process. Although the chemical process for obtaining alumina from bauxite is essentially the same at all plants, different procedures are employed to cool the sodium aluminate solution before it enters the precipitating tanks and to concentrate it by evaporation of some of the water in the solution. Where this evaporation takes place in a cooling tower, water in the solution is lost to the atmosphere as water vapor and so is used consumptively. In other plants, the quantity of solution in the system is controlled by evaporation in a multiple-effect evaporator where practically all vapor distilled out of the solution is condensed to water that may be reused. The latter method is used in all recently constructed alumina plants, and some older plants are replacing cooling towers with multiple-effect evaporators. All reduction works in the United States use the Hall process, but the variation in water requirements is even greater than the variation at alumina plants, and, further, the total daily water requirement for all reduction works is more than 9 times the total daily requirement of all alumina plants. Many reduction works use gas scrubbers, but some do not. As gas scrubbing is one of the principal water uses in reduction works, the manner in which wash water is used, cooled, and reused accounts in large measure for the variation in water requirements. Although the supply of water for all plants but one was reported by the management to be ample for all plant needs, the economic factor of the cost of water differs considerably among plants. It is this factor that accounts in large measure for the widely divergent plant practices. Plant capacity alone has so little effect on plant water requirements that other conditions such as plant operation based on the cost of water, plant location, and the need for conservation of water mask any economy inherent in plant size.

  10. User error with Diskus and Turbuhaler by asthma patients and pharmacists in Jordan and Australia.

    PubMed

    Basheti, Iman A; Qunaibi, Eyad; Bosnic-Anticevich, Sinthia Z; Armour, Carol L; Khater, Samar; Omar, Muthana; Reddel, Helen K

    2011-12-01

    Use of inhalers requires accurate completion of multiple steps to ensure effective medication delivery. To evaluate the most problematic steps in the use of Diskus and Turbuhaler for pharmacists and patients in Jordan and Australia. With standardized inhaler-technique checklists, we asked community pharmacists to demonstrate the use of Diskus and Turbuhaler. We asked patients with asthma to demonstrate the inhaler (Diskus or Turbuhaler) they were currently using. Forty-two community pharmacists in Jordan, and 31 in Australia, participated. In Jordan, 51 asthma patients demonstrated use of Diskus, and 40 demonstrated use of Turbuhaler. In Australia, 53 asthma patients demonstrated use of Diskus, and 42 demonstrated use of Turbuhaler. The pharmacists in Australia had received inhaler-technique education more recently than those in Jordan (P = .03). With Diskus, few pharmacists in either country demonstrated correct technique for step 3 (exhale to residual volume) or step 4 (exhale away from the device), although there were somewhat fewer errors in Australia than Jordan (16% vs 0% in step 3, P = .007, and 20% vs 0% in step 4, P = .003 via chi-square test). With Turbuhaler there were significant differences between the pharmacists from Australia and Jordan, mainly in step 2 (hold the device upright while loading, 45% vs 2% correct, P < .001). Few of the patients had received inhaler-technique education in the previous year. The patients made errors similar to those of the pharmacists in individual steps with Diskus and Turbuhaler. The essential steps with Diskus were performed correctly more often by the Jordanian patients, and with Turbuhaler by the Australian patients. Despite differences in Jordan's and Australia's health systems, pharmacists from both Australia and Jordan had difficulty with the same Diskus and Turbuhaler steps. In both countries, the errors made by the asthma patients were similar to those made by the pharmacists.

  11. Electronic measurement apparatus movable in a cased borehole and compensating for casing resistance differences

    DOEpatents

    Vail, W.B. III.

    1991-12-24

    Methods of operation are described for an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well. 6 figures.

  12. Electronic measurement apparatus movable in a cased borehole and compensating for casing resistance differences

    DOEpatents

    Vail, III, William B.

    1991-01-01

    Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well.

  13. Rapid fabrication of microneedles using magnetorheological drawing lithography.

    PubMed

    Chen, Zhipeng; Ren, Lei; Li, Jiyu; Yao, Lebin; Chen, Yan; Liu, Bin; Jiang, Lelun

    2018-01-01

    Microneedles are micron-sized needles that are widely applied in biomedical fields owing to their painless, minimally invasive, and convenient operation. However, most microneedle fabrication approaches are costly, time consuming, involve multiple steps, and require expensive equipment; furthermore, most researchers have focused on the biomedical applications of microneedles while giving little attention to optimization of the fabrication process. In this study, we present a novel magnetorheological drawing lithography (MRDL) method to efficiently fabricate microneedles, bio-inspired microneedles, and molding-free microneedle arrays. With the assistance of an external magnetic field, the 3D structure of a microneedle can be directly drawn from a droplet of curable magnetorheological fluid (CMRF) on almost any substrate. The formation process of a microneedle consists of two key stages, elasto-capillary self-thinning and magneto-capillary self-shrinking, which greatly affect the microneedle height and tip radius. Penetration and fracture tests demonstrated that the microneedle had sufficient strength and toughness for skin penetration. Microneedle arrays and a bio-inspired microneedle were also fabricated, which further demonstrated the versatility and flexibility of the MRDL method. This method not only inherits the advantages of the thermal drawing approach without the need for a mask and light irradiation but also eliminates the requirement for drawing-temperature adjustment. The MRDL method is extremely simple and can even produce the complex and multiscale structure of a bio-inspired microneedle. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  14. Kinetic characterization of the critical step in HIV-1 protease maturation.

    PubMed

    Sadiq, S Kashif; Noé, Frank; De Fabritiis, Gianni

    2012-12-11

    HIV maturation requires multiple cleavages of long polyprotein chains into functional proteins that include the viral protease itself. Initial cleavage by the protease dimer occurs from within these precursors, yet only a single protease monomer is embedded in each polyprotein chain. Self-activation has been proposed to start from a partially dimerized protease, formed from monomers of different chains, that binds one of its own N termini at the active site, but a complete structural understanding of this critical step in HIV maturation is missing. Here, we captured the critical self-association of immature HIV-1 protease to its extended amino-terminal recognition motif using large-scale molecular dynamics simulations, thus confirming the postulated intramolecular mechanism in atomic detail. We show that self-association to a catalytically viable state requires structural cooperativity of the flexible β-hairpin "flap" regions of the enzyme and that the major transition pathway proceeds first via self-association in the semiopen/open enzyme states, followed by a conformational transition of the enzyme into a catalytically viable closed state. Furthermore, partial N-terminal threading can play a role in self-association, whereas wide opening of the flaps in concert with self-association is not observed. We estimate the association rate constant (k_on) to be on the order of ∼1 × 10⁴ s⁻¹, suggesting that N-terminal self-association is not the rate-limiting step in the process. The mechanism shown also provides an interesting example of molecular conformational transitions along the association pathway.

  15. Development of DKB ETL module in case of data conversion

    NASA Astrophysics Data System (ADS)

    Kaida, A. Y.; Golosova, M. V.; Grigorieva, M. A.; Gubin, M. Y.

    2018-05-01

    Modern scientific experiments involve producing huge volumes of data, which requires new approaches to data processing and storage. These data, as well as their processing and storage, are accompanied by a valuable amount of additional information, called metadata, distributed over multiple information systems and repositories and having a complicated, heterogeneous structure. Gathering these metadata for experiments in the field of high energy nuclear physics (HENP) is a complex task that calls for solutions outside the box. One of the tasks is to integrate metadata from different repositories into a central storage. During the integration process, metadata taken from the original source repositories go through several processing steps: aggregation, transformation according to the current data model, and loading into the general storage in a standardized form. The Data Knowledge Base (DKB), an R&D project of the ATLAS experiment at the LHC, aims to provide fast and easy access to significant information about LHC experiments for the scientific community. The data integration subsystem being developed for the DKB project can be represented as a number of particular pipelines arranging data flow from data sources to the main DKB storage. The data transformation process, represented by a single pipeline, can be considered as a number of successive data transformation steps, where each step is implemented as an individual program module. This article outlines the specifics of the program modules used in the dataflow and describes one of the modules developed and integrated into the data integration subsystem of the DKB.
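
    The step-per-module pipeline design lends itself to a compact illustration. Below is a minimal sketch of such a dataflow, assuming hypothetical record and stage names (this is not DKB code): each stage is an independent callable, and the pipeline composes the aggregation, transformation, and loading steps so that any one module can be replaced.

    ```python
    from typing import Callable, Iterable

    Record = dict  # a metadata record; real DKB records are more structured

    def aggregate(sources: Iterable[Iterable[Record]]) -> list[Record]:
        """Gather raw metadata records from several source repositories."""
        return [rec for src in sources for rec in src]

    def transform(records: list[Record]) -> list[Record]:
        """Map records onto a common (hypothetical) data model."""
        return [{"id": r.get("guid") or r.get("id"), "payload": r} for r in records]

    def load(records: list[Record], storage: list[Record]) -> None:
        """Write standardized records into the central storage."""
        storage.extend(records)

    def run_pipeline(sources, storage, steps: list[Callable]) -> None:
        data = sources
        for step in steps[:-1]:       # every step except the final loader
            data = step(data)
        steps[-1](data, storage)      # the loader also receives the storage

    storage: list[Record] = []
    run_pipeline([[{"guid": "a1"}], [{"id": "b2"}]], storage, [aggregate, transform, load])
    print(storage)
    ```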

  16. A basket two-part model to analyze medical expenditure on interdependent multiple sectors.

    PubMed

    Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji

    2018-05-01

    This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze the complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
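
    For intuition, a minimal sketch of a conventional two-part model for a single sector (the baseline the paper extends, not the proposed basket variant) is shown below: a logistic model for whether any expenditure occurs and a linear model for log expenditure among purchasers. Data and covariates are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                         # socioeconomic covariates (synthetic)
    buy = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))    # part 1: purchase decision
    spend = np.where(buy, np.exp(1.0 + 0.5 * X[:, 1] + rng.normal(size=500)), 0.0)

    # Part 1: probability of any expenditure.
    part1 = LogisticRegression().fit(X, buy)
    # Part 2: log expenditure, conditional on purchase.
    part2 = LinearRegression().fit(X[buy], np.log(spend[buy]))

    # Expected expenditure combines both parts (smearing correction omitted for brevity).
    p = part1.predict_proba(X)[:, 1]
    expected = p * np.exp(part2.predict(X))
    print(expected[:5])
    ```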

  17. Facile and rapid DNA extraction and purification from food matrices using IFAST (immiscible filtration assisted by surface tension).

    PubMed

    Strotman, Lindsay N; Lin, Guangyun; Berry, Scott M; Johnson, Eric A; Beebe, David J

    2012-09-07

    Extraction and purification of DNA is a prerequisite to detection and analytical techniques. While DNA sample preparation methods have improved over the last few decades, current methods are still time consuming and labor intensive. Here we demonstrate a technology termed IFAST (Immiscible Filtration Assisted by Surface Tension), which relies on immiscible phase filtration to reduce the time and effort required to purify DNA. IFAST replaces the multiple wash and centrifugation steps required by traditional DNA sample preparation methods with a single step. To operate, DNA from lysed cells is bound to paramagnetic particles (PMPs) and drawn through an immiscible fluid phase barrier (i.e., oil) by an external handheld magnet. Purified DNA is then eluted from the PMPs. Here, detection of Clostridium botulinum type A (BoNT/A) in food matrices (milk, orange juice), a bioterrorism concern, was used as a model system to establish IFAST's utility in detection assays. Data validated that the DNA purified by IFAST was functional as a qPCR template to amplify the bont/A gene. The sensitivity limit of IFAST was comparable to the commercially available Invitrogen ChargeSwitch® method. Notably, pathogen detection via IFAST required only 8.5 μL of sample and was accomplished in one-fifth the time. The simplicity, rapidity and portability of IFAST offer significant advantages when compared to existing DNA sample preparation methods.

  18. Speckle evolution with multiple steps of least-squares phase removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Mingzhou; Dainty, Chris; Roux, Filippus S.

    2011-08-15

    We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.
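
    A single step of least-squares phase removal can be sketched as follows: compute wrapped phase gradients, solve the discrete Poisson equation for the least-squares (vortex-free) phase with an FFT, and divide it out of the field. This is a generic textbook construction assuming periodic boundaries, not the authors' exact procedure.

    ```python
    import numpy as np

    def least_squares_phase(psi):
        """Least-squares (vortex-free) phase of a wrapped phase map psi."""
        wrap = lambda a: np.angle(np.exp(1j * a))
        dx = wrap(np.roll(psi, -1, axis=1) - psi)    # wrapped x-gradient
        dy = wrap(np.roll(psi, -1, axis=0) - psi)    # wrapped y-gradient
        rho = dx - np.roll(dx, 1, axis=1) + dy - np.roll(dy, 1, axis=0)
        ny, nx = psi.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx)
        ky = 2 * np.pi * np.fft.fftfreq(ny)
        denom = (2 * np.cos(kx)[None, :] - 2) + (2 * np.cos(ky)[:, None] - 2)
        denom[0, 0] = 1.0                            # avoid 0/0; the mean is free
        phi_hat = np.fft.fft2(rho) / denom
        phi_hat[0, 0] = 0.0
        return np.fft.ifft2(phi_hat).real

    field = np.exp(1j * np.random.default_rng(0).normal(size=(128, 128)))
    phi_ls = least_squares_phase(np.angle(field))
    residual = field * np.exp(-1j * phi_ls)          # vortices remain in the residual
    ```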

  19. Capacity for visual features in mental rotation

    PubMed Central

    Xu, Yangqing; Franconeri, Steven L.

    2015-01-01

    Although mental rotation is a core component of scientific reasoning, we still know little about its underlying mechanism. For instance, how much visual information can we rotate at once? Participants rotated a simple multi-part shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: only one feature could remain attached to one part. Behavioral and eyetracking data showed that this single feature remained ‘glued’ via a singular focus of attention, typically on the object’s top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of the capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science education contexts. PMID:26174781

  20. Evolution of natural history information in the 21st century – developing an integrated framework for biological and geographical data

    USGS Publications Warehouse

    Reusser, Deborah A.; Lee, Henry

    2011-01-01

    Threats to marine and estuarine species operate over many spatial scales, from nutrient enrichment at the watershed/estuarine scale to invasive species and climate change at regional and global scales. To help address research questions across these scales, we provide here a standardized framework for a biogeographical information system containing queryable biological data that allows extraction of information on multiple species across a variety of spatial scales, based on species distributions, natural history attributes and habitat requirements. As scientists shift from research on localized impacts on individual species to regional and global scale threats, macroecological approaches of studying multiple species over broad geographical areas are becoming increasingly important. The standardized framework described here for capturing and integrating biological and geographical data is a critical first step towards addressing these macroecological questions, and we urge organizations capturing biogeoinformatics data to consider adopting this framework.

  1. Capacity for Visual Features in Mental Rotation.

    PubMed

    Xu, Yangqing; Franconeri, Steven L

    2015-08-01

    Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts. © The Author(s) 2015.

  2. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
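
    The cooperative-multitasking idea is easy to sketch with ordinary Python coroutines standing in for Henson's position-independent executables: the simulation yields control after every time step and the analysis consumes the state while it is still in memory. A toy illustration only; it does not reflect the Henson API.

    ```python
    import asyncio

    async def simulation(queue: asyncio.Queue, steps: int) -> None:
        state = 0.0
        for t in range(steps):
            state += 1.0                      # stand-in for one simulation time step
            await queue.put((t, state))       # hand the in-memory state to analysis
        await queue.put(None)                 # sentinel: simulation finished

    async def analysis(queue: asyncio.Queue) -> None:
        while (item := await queue.get()) is not None:
            t, state = item
            print(f"step {t}: mean field = {state:.1f}")  # in situ post-processing

    async def main() -> None:
        queue: asyncio.Queue = asyncio.Queue(maxsize=1)   # backpressure: run in lockstep
        await asyncio.gather(simulation(queue, 3), analysis(queue))

    asyncio.run(main())
    ```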

  3. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  4. Methods for understanding super-efficient data envelopment analysis results with an application to hospital inpatient surgery.

    PubMed

    O'Neill, Liam; Dexter, Franklin

    2005-11-01

    We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
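
    To make the underlying machinery concrete, the sketch below solves the standard input-oriented (radial) super-efficiency linear program that both MFE and NRSE start from: DMU k is removed from the reference set and its efficiency is measured against the frontier of the remaining units. Data are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def super_efficiency(X, Y, k):
        """Input-oriented CCR super-efficiency score of DMU k.
        X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
        n, m = X.shape
        s = Y.shape[1]
        idx = [j for j in range(n) if j != k]      # reference set excludes DMU k
        c = np.r_[1.0, np.zeros(n - 1)]            # variables: theta, lambda_j (j != k)
        # inputs:  sum_j lam_j * x_ij - theta * x_ik <= 0
        A_in = np.c_[-X[k].reshape(m, 1), X[idx].T]
        # outputs: -sum_j lam_j * y_rj <= -y_rk
        A_out = np.c_[np.zeros((s, 1)), -Y[idx].T]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]], bounds=[(0, None)] * n)
        return res.fun  # a score > 1 indicates a super-efficient unit

    X = np.array([[2.0, 1.0], [1.0, 3.0], [3.0, 3.0]])  # inputs (hypothetical)
    Y = np.array([[1.0], [1.0], [1.0]])                  # outputs (hypothetical)
    print([round(super_efficiency(X, Y, k), 3) for k in range(3)])
    ```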

  5. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. This method requires a relatively low reference angle compared with the conventional method, which uses a phase shift between three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm over a number of pixels equal to the period of the interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications such as color holographic interferometry.
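
    The core of any multiple-step phase-shifting algorithm is synchronous detection: with N intensity samples I_n taken at equal phase steps 2πn/N, the object phase follows from the first Fourier coefficient. A generic N-step sketch (not the paper's specific single-shot variant) under ideal equal-step assumptions:

    ```python
    import numpy as np

    def n_step_phase(I: np.ndarray) -> np.ndarray:
        """Recover phase from N equally spaced phase-shifted intensity frames.
        I has shape (N, ny, nx); frame n carries a phase shift of 2*pi*n/N."""
        N = I.shape[0]
        n = np.arange(N).reshape(N, 1, 1)
        num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
        den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
        return np.arctan2(-num, den)  # wrapped phase in (-pi, pi]

    # Synthetic check: a tilted phase, four frames.
    ny, nx, N = 64, 64, 4
    yy, xx = np.mgrid[:ny, :nx]
    phi = 0.1 * xx
    frames = np.stack([1 + 0.8 * np.cos(phi + 2 * np.pi * k / N) for k in range(N)])
    assert np.allclose(n_step_phase(frames), np.angle(np.exp(1j * phi)), atol=1e-6)
    ```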

  6. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks.

    PubMed

    Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren

    2018-04-16

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. The latencies measured for each step of the switching procedure sum to a total control plane latency of 344 µs, suitable for data center and high performance computing platforms.

  7. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary-value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
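
    The path-iteration idea behind Picard methods is simple to sketch: iterate x_{k+1}(t) = x_0 + ∫ f(s, x_k(s)) ds over the whole trajectory at once. Modified Chebyshev-Picard iteration evaluates the integral with Chebyshev polynomials; the toy version below uses trapezoidal quadrature instead, purely to show the fixed-point structure.

    ```python
    import numpy as np

    def picard(f, x0, t, iters=30):
        """Iterate x_{k+1}(t) = x0 + integral of f(s, x_k(s)) ds on a fixed grid."""
        x = np.full_like(t, x0)
        for _ in range(iters):
            integrand = f(t, x)
            # cumulative trapezoidal integral from t[0] to each node
            increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
            x = x0 + np.r_[0.0, np.cumsum(increments)]
        return x

    t = np.linspace(0.0, 1.0, 201)
    x = picard(lambda s, u: u, x0=1.0, t=t)      # dx/dt = x  =>  x = exp(t)
    print(abs(x[-1] - np.e))                     # small residual after convergence
    ```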

  8. Analysis of bacterial migration. 2: Studies with multiple attractant gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, I.; Frymier, P.D.; Hahn, C.M.

    1995-02-01

    Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus in order to develop predictive mathematical models of bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single-stimulus responses. Theoretical predictions from the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor simple additivity over the maximum signal. None of the simple models, however, accurately predicts the observed behavior, suggesting that a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.

  9. Linezolid desensitization for a patient with multiple medication hypersensitivity reactions.

    PubMed

    Bagwell, Autumn D; Stollings, Joanna L; White, Katie D; Fadugba, Olajumoke O; Choi, Jane J

    2013-01-01

    To describe a case in which a linezolid desensitization protocol was successfully used for a polymicrobial surgical wound infection in a patient with multiple drug hypersensitivity reactions. A 24-year-old woman with vocal cord dysfunction requiring tracheostomy was admitted for a surgical wound infection following a tracheostomy fistula closure procedure. The patient reported multiple antibiotic allergies including penicillins (rash), sulfonamides (rash), vancomycin (anaphylaxis), azithromycin (rash), cephalosporins (anaphylaxis), levofloxacin (unspecified), clindamycin (unspecified), and carbapenems (unspecified). Gram stain of the purulent wound drainage demonstrated mixed gram-negative and gram-positive flora, and bacterial cultures were overgrown with Proteus mirabilis, which precluded identification of other pathogens. Following failed test doses of linezolid, tigecycline, and daptomycin, all of which resulted in hypersensitivity reactions, a 16-step linezolid desensitization protocol was developed and successfully implemented without adverse reactions. The patient completed a 2-week course of antibiotic therapy that included linezolid upon finishing the desensitization protocol. Linezolid is useful in treating complicated and uncomplicated skin and soft tissue infections caused by gram-positive bacteria. With precautions, including premedication, a monitored nursing unit, and immediate availability of an emergency anaphylaxis kit, drug desensitization allows patients the ability to safely use medications to which they may have an immediate hypersensitivity reaction. Minimal data exist on linezolid desensitization protocols. Linezolid desensitization can be a viable option in patients requiring antimicrobial therapy for complicated gram-positive skin infections.

  10. Computational strategies for alternative single-step Bayesian regression models with large numbers of genotyped and non-genotyped animals.

    PubMed

    Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J

    2016-12-08

    Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM), which fit breeding values explicitly, and marker effects models (MEM), which express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality of both the original MEM and the hybrid model using real data comprising 6,179,960 animals in the pedigree, 4,934,101 phenotypes, and 31,453 animals genotyped at 40,214 informative loci. Completing a single-trait analysis on a desktop computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes; its practicality and considerable reduction in computing effort were demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
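
    As a toy illustration of a marker effects model, here is a single-site Gibbs sampler for y = Xβ + e with a common normal prior on the marker effects (a ridge-type MEM). It omits the pedigree, imputation, and hybrid BVM machinery entirely; dimensions, priors, and fixed variances are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 50
    X = rng.integers(0, 3, size=(n, p)).astype(float)   # genotype codes 0/1/2
    beta_true = rng.normal(0, 0.3, p)
    y = X @ beta_true + rng.normal(0, 1.0, n)

    sigma2_e, sigma2_b = 1.0, 0.1      # variances held fixed for simplicity
    beta = np.zeros(p)
    resid = y - X @ beta
    xtx = np.einsum("ij,ij->j", X, X)  # diagonal of X'X

    samples = []
    for it in range(500):
        for j in range(p):
            resid += X[:, j] * beta[j]            # remove marker j from the residual
            lhs = xtx[j] + sigma2_e / sigma2_b
            rhs = X[:, j] @ resid
            beta[j] = rng.normal(rhs / lhs, np.sqrt(sigma2_e / lhs))
            resid -= X[:, j] * beta[j]            # put the updated effect back
        samples.append(beta.copy())

    post_mean = np.mean(samples[100:], axis=0)    # discard burn-in
    print(np.corrcoef(post_mean, beta_true)[0, 1])
    ```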

  11. Minimal gain marching schemes: searching for unstable steady-states with unsteady solvers

    NASA Astrophysics Data System (ADS)

    de S. Teixeira, Renan; S. de B. Alves, Leonardo

    2017-12-01

    Reference solutions are important in several applications. They are used as base states in linear stability analyses as well as initial conditions and reference states for sponge zones in numerical simulations, just to name a few examples. Their accuracy is also paramount in both fields, leading to more reliable analyses and efficient simulations, respectively. Hence, steady-states usually make the best reference solutions. Unfortunately, standard marching schemes utilized for accurate unsteady simulations almost never reach steady-states of unstable flows. Steady governing equations could be solved instead, by employing Newton-type methods often coupled with continuation techniques. However, such iterative approaches do require large computational resources and very good initial guesses to converge. These difficulties motivated the development of a technique known as selective frequency damping (SFD) (Åkervik et al. in Phys Fluids 18(6):068102, 2006). It adds a source term to the unsteady governing equations that filters out the unstable frequencies, allowing a steady-state to be reached. This approach does not require a good initial condition and works well for self-excited flows, where a single nonzero excitation frequency is selected by either absolute or global instability mechanisms. On the other hand, it seems unable to damp stationary disturbances. Furthermore, flows with a broad unstable frequency spectrum might require the use of multiple filters, which delays convergence significantly. Both scenarios appear in convectively, absolutely or globally unstable flows. An alternative approach is proposed in the present paper. It modifies the coefficients of a marching scheme in such a way that the absolute value of its linear gain becomes smaller than one within the required unstable frequency spectra, allowing the respective disturbance amplitudes to decay given enough time. These ideas are applied here to implicit multi-step schemes. A few chosen test cases show that they enable convergence toward solutions that are unstable to stationary and oscillatory disturbances, with either a single or multiple frequency content. Finally, comparisons with SFD are also performed, showing significant reduction in computer cost for complex flows by using the implicit multi-step MGM schemes.
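
    For contrast with the proposed minimal gain marching schemes, SFD itself is easy to sketch: the right-hand side is augmented with a feedback term -χ(u - ū) toward a low-pass-filtered state ū. The toy below recovers the oscillatory-unstable steady state z* = 0 of a Stuart-Landau equation with forward Euler; the values of χ and Δ are illustrative only.

    ```python
    import numpy as np

    # Stuart-Landau toy: dz/dt = (sigma + i*omega) z - |z|^2 z has an unstable
    # steady state z* = 0 (oscillatory instability) and a stable limit cycle.
    sigma, omega = 0.1, 1.0
    f = lambda z: (sigma + 1j * omega) * z - abs(z) ** 2 * z

    chi, delta = 0.5, 5.0            # SFD gain and filter width (illustrative)
    dt, z, v = 0.01, 0.2 + 0.0j, 0.0 + 0.0j
    for _ in range(200_000):
        z, v = (z + dt * (f(z) - chi * (z - v)),   # damped dynamics
                v + dt * (z - v) / delta)          # low-pass filter state
    print(abs(z))   # decays toward the unstable steady state z* = 0
    ```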

  12. Design of Revolute Joints for In-Mold Assembly Using Insert Molding.

    PubMed

    Ananthanarayanan, Arvind; Ehrlich, Leicester; Desai, Jaydev P; Gupta, Satyandra K

    2011-12-01

    Creating highly articulated miniature structures requires assembling a large number of small parts. This is a very challenging task and increases the cost of mechanical assemblies. Insert molding presents the possibility of creating a highly articulated structure in a single molding step. This can be accomplished by placing multiple metallic bearings in the mold and injecting plastic on top of them. In theory, this idea can generate a multi-degree-of-freedom structure in just one processing step without requiring any post-molding assembly operations. However, the polymer material has a tendency to shrink on top of the metal bearings and hence jam the joints. Hence, until now insert molding has not been used to create articulated structures. This paper presents a theoretical model for estimating the extent of joint jamming that occurs due to the shrinkage of the polymer on top of the metal bearings. The level of joint jamming is seen as the effective torque needed to overcome the friction in the revolute joints formed by insert molding. We then use this model to select the optimum design parameters which can be used to fabricate functional, highly articulating assemblies while meeting manufacturing constraints. Our analysis shows that the strength of weld-lines formed during the in-mold assembly process plays a significant role in determining the minimum joint dimensions necessary for fabricating functional revolute joints. We have used the models and methods described in this paper to successfully fabricate the structure for a minimally invasive medical robot prototype with potential applications in neurosurgery. To the best of our knowledge, this is the first demonstration of building an articulated structure with multiple degrees of freedom using insert molding.

  13. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR.

    PubMed

    Stokdyk, Joel P; Firnstahl, Aaron D; Spencer, Susan K; Burch, Tucker R; Borchardt, Mark A

    2016-06-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L⁻¹ assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation. Published by Elsevier Ltd.
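
    The probit step can be reproduced with a small maximum-likelihood fit: model the detection probability as Φ(a + b·log10 c) and invert the fitted curve at 0.95. The spiking data below are hypothetical, not the study's measurements.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical spiking results: concentration (gc/rxn), positives out of n = 10.
    conc = np.array([2.0, 5.0, 10.0, 20.0])
    pos = np.array([3, 6, 9, 10])
    n = 10

    def nll(params):
        """Negative binomial log-likelihood of the probit dose-response curve."""
        a, b = params
        p = norm.cdf(a + b * np.log10(conc)).clip(1e-9, 1 - 1e-9)
        return -np.sum(pos * np.log(p) + (n - pos) * np.log(1 - p))

    a, b = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead").x
    lod95 = 10 ** ((norm.ppf(0.95) - a) / b)   # concentration with 95% detection
    print(f"95% LOD ~ {lod95:.1f} gc/rxn")
    ```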

  14. Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication

    NASA Astrophysics Data System (ADS)

    Kadlec, J.

    2013-12-01

    The growing need for sharing and integration of hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated setup and deployment. Many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a webhosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture of a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet webhosting services. The architecture and design is validated by developing HydroServer Lite: a PHP and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI) hydrologic information system. It is already being used for management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the HydroServer Lite software has been installed on multiple free and low-cost webhosting sites including GoDaddy, Bluehost and 000webhost. The number of steps required to set up the server is compared with the number of steps required to set up other standards-compliant hydrologic data hosting systems including THREDDS, istSOS and MapServer SOS.

  15. Preventing mental illness: closing the evidence-practice gap through workforce and services planning.

    PubMed

    Furber, Gareth; Segal, Leonie; Leach, Matthew; Turnbull, Catherine; Procter, Nicholas; Diamond, Mark; Miller, Stephanie; McGorry, Patrick

    2015-07-24

    Mental illness is prevalent across the globe and affects multiple aspects of life. Despite advances in treatment, there is little evidence that prevalence rates of mental illness are falling. While the prevention of cardiovascular disease and cancers are common in the policy dialogue and in service delivery, the prevention of mental illness remains a neglected area. There is accumulating evidence that mental illness is at least partially preventable, with increasing recognition that its antecedents are often found in infancy, childhood, adolescence and youth, creating multiple opportunities into young adulthood for prevention. Developing valid and reproducible methods for translating the evidence base in mental illness prevention into actionable policy recommendations is a crucial step in taking the prevention agenda forward. Building on an aetiological model of adult mental illness that emphasizes the importance of intervening during infancy, childhood, adolescence and youth, we adapted a workforce and service planning framework, originally applied to diabetes care, to the analysis of the workforce and service structures required for best-practice prevention of mental illness. The resulting framework consists of 6 steps that include identifying priority risk factors, profiling the population in terms of these risk factors to identify at-risk groups, matching these at-risk groups to best-practice interventions, translation of these interventions to competencies, translation of competencies to workforce and service estimates, and finally, exploring the policy implications of these workforce and services estimates. The framework outlines the specific tasks involved in translating the evidence-base in prevention, to clearly actionable workforce, service delivery and funding recommendations. The framework describes the means to deliver mental illness prevention that the literature indicates is achievable, and is the basis of an ongoing project to model the workforce and service structures required for mental illness prevention.

  16. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L⁻¹ assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.

  17. Partial oxidation of step-bound water leads to anomalous pH effects on metal electrode step-edges

    DOE PAGES

    Schwarz, Kathleen; Xu, Bingjun; Yan, Yushan; ...

    2016-05-26

    The design of better heterogeneous catalysts for applications such as fuel cells and electrolyzers requires a mechanistic understanding of electrocatalytic reactions and the dependence of their activity on operating conditions such as pH. A satisfactory explanation for the unexpected pH dependence of electrochemical properties of platinum surfaces has so far remained elusive, with previous explanations resorting to complex co-adsorption of multiple species and resulting in limited predictive power. This knowledge gap suggests that the fundamental properties of these catalysts are not yet understood, limiting systematic improvement. In this paper, we analyze the change in charge and free energies upon adsorption using density-functional theory (DFT) to establish that water adsorbs on platinum step edges across a wide voltage range, including the double-layer region, with a loss of approximately 0.2 electrons upon adsorption. We show how this as-yet unreported change in net surface charge due to this water explains the anomalous pH variations of the hydrogen underpotential deposition (H_upd) and the potentials of zero total charge (PZTC) observed in published experimental data. This partial oxidation of water is not limited to platinum metal step edges, and we report the charge of the water on metal step edges of commonly used catalytic metals, including copper, silver, iridium, and palladium, illustrating that this partial oxidation of water broadly influences the reactivity of metal electrodes.

  18. Kinetic mechanism of human DNA ligase I reveals magnesium-dependent changes in the rate-limiting step that compromise ligation efficiency.

    PubMed

    Taylor, Mark R; Conrad, John A; Wahl, Daniel; O'Brien, Patrick J

    2011-07-01

    DNA ligase I (LIG1) catalyzes the ligation of single-strand breaks to complete DNA replication and repair. The energy of ATP is used to form a new phosphodiester bond in DNA via a reaction mechanism that involves three distinct chemical steps: enzyme adenylylation, adenylyl transfer to DNA, and nick sealing. We used steady state and pre-steady state kinetics to characterize the minimal mechanism for DNA ligation catalyzed by human LIG1. The ATP dependence of the reaction indicates that LIG1 requires multiple Mg²⁺ ions for catalysis and that an essential Mg²⁺ ion binds more tightly to ATP than to the enzyme. Further dissection of the magnesium ion dependence of individual reaction steps revealed that the affinity for Mg²⁺ changes along the reaction coordinate. At saturating concentrations of ATP and Mg²⁺ ions, the three chemical steps occur at similar rates, and the efficiency of ligation is high. However, under conditions of limiting Mg²⁺, the nick-sealing step becomes rate-limiting, and the adenylylated DNA intermediate is prematurely released into solution. Subsequent adenylylation of the enzyme prevents rebinding to the adenylylated DNA intermediate, an Achilles' heel of LIG1. These ligase-generated 5'-adenylylated nicks constitute persistent breaks that are a threat to genomic stability if they are not repaired. The kinetic and thermodynamic framework that we have determined for LIG1 provides a starting point for understanding the mechanism and specificity of mammalian DNA ligases.

  19. Effective learning strategies for real-time image-guided adaptive control of multiple-source hyperthermia applicators.

    PubMed

    Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva

    2010-03-01

    This paper investigates the overall theoretical requirements for reducing the time required for iterative learning in a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system, with and without model reduction, to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace, to reduce the time for system reconstruction, and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was used for this purpose. The impacts of convective cooling from blood flow and of a sudden increase in perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as the phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system to determine the globally optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary for a single learning step of a nonlinear algorithm, so the actual computational workload of a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique globally optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is the inclusion of a method of model reduction.
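
    The virtual-source reduction can be sketched in a few lines of linear algebra: if the average tumor temperature rise is a Hermitian quadratic form u^H H u in the source excitation vector u, the top-k eigenvectors of H span the optimal reduced subspace, and the globally optimal excitation already lies inside it. A random Hermitian matrix stands in for the true patient-specific system matrix here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sources = 10
    A = rng.normal(size=(n_sources, n_sources)) + 1j * rng.normal(size=(n_sources, n_sources))
    H = A @ A.conj().T            # Hermitian PSD stand-in for the tumor-heating matrix

    # Average tumor temperature rise ~ u^H H u for excitation vector u, |u| = 1.
    w, V = np.linalg.eigh(H)      # eigenvalues in ascending order
    k = 3
    VS = V[:, -k:]                # top-k eigenvectors span the virtual-source subspace

    # The best excitation is the dominant eigenvector, so the k-dimensional
    # search space already contains the global optimum.
    u_full = V[:, -1]
    coeffs = VS.conj().T @ u_full            # representation in the VS basis
    u_reduced = VS @ coeffs
    print(np.allclose(u_full, u_reduced))    # True: no loss for the top mode
    ```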

  20. Executive Functions Underlying Multiplicative Reasoning: Problem Type Matters

    ERIC Educational Resources Information Center

    Agostino, Alba; Johnson, Janice; Pascual-Leone, Juan

    2010-01-01

    We investigated the extent to which inhibition, updating, shifting, and mental-attentional capacity ("M"-capacity) contribute to children's ability to solve multiplication word problems. A total of 155 children in Grades 3-6 (8- to 13-year-olds) completed a set of multiplication word problems at two levels of difficulty: one-step and multiple-step…

  1. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined over a range of field sizes to account for the variation of the contribution of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of the photon fluence became the free optimization parameters. A line search method was used for the optimization, and first-order derivatives with respect to the optimization parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method significantly improved the agreement between the calculated and measured PDDs. Root mean square differences in the optimized PDDs ranged from 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and from 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first-order derivatives from the functional form were found to reduce computation time by a factor of up to 20 compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
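
    The second optimization step amounts to fitting nonnegative fluence weights so that a weighted sum of per-energy depth-dose kernels reproduces the measured PDD. The sketch below shows a least-squares version of that idea with synthetic exponential kernels; the paper's actual kernels come from the CCC engine and its optimizer uses analytic derivatives.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    depths = np.linspace(1.5, 30.0, 60)                 # cm, beyond build-up (synthetic)
    energies = np.array([0.5, 1.0, 2.0, 4.0, 6.0])      # MeV spectrum bins (synthetic)

    # Synthetic per-energy PDD kernels: deeper penetration for higher energy.
    kernels = np.exp(-np.outer(depths, 0.09 / np.sqrt(energies)))
    w_true = np.array([0.35, 0.30, 0.20, 0.10, 0.05])   # "unknown" fluence weights
    measured = kernels @ w_true + np.random.default_rng(3).normal(0, 1e-3, depths.size)

    # Fit nonnegative fluence weights to the measured PDD (the optimization target).
    w_fit, _ = nnls(kernels, measured)
    print(np.round(w_fit / w_fit.sum(), 3))             # recovered spectrum weights
    ```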

  2. Two-step relaxation mode analysis with multiple evolution times applied to all-atom molecular dynamics protein simulation.

    PubMed

    Karasawa, N; Mitsutake, A; Takano, H

    2017-12-01

    Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
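
    The first RMA pass can be viewed as a generalized eigenvalue problem on time-correlation matrices (closely analogous to time-lagged independent component analysis): solve C(τ)a = λC(0)a and convert each eigenvalue into a relaxation time via t = -τ/ln λ. A minimal sketch on synthetic data, not the two-step multiple-evolution-time refinement itself:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(4)
    T, d, tau = 20_000, 3, 5
    # Synthetic trajectory: independent AR(1) coordinates with distinct memories.
    phi = np.array([0.99, 0.9, 0.5])
    x = np.zeros((T, d))
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(size=d)
    x -= x.mean(axis=0)

    C0 = x[:-tau].T @ x[:-tau] / (T - tau)   # equal-time correlations
    Ct = x[:-tau].T @ x[tau:] / (T - tau)    # time-lagged correlations
    Ct = 0.5 * (Ct + Ct.T)                   # symmetrize

    lam, modes = eigh(Ct, C0)                # generalized eigenproblem; modes = relaxation modes
    lam = lam[::-1]                          # slowest modes first
    print(-tau / np.log(lam[:3]))            # estimated relaxation times
    ```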

  3. Two-step relaxation mode analysis with multiple evolution times applied to all-atom molecular dynamics protein simulation

    NASA Astrophysics Data System (ADS)

    Karasawa, N.; Mitsutake, A.; Takano, H.

    2017-12-01

    Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.

  4. Energy harvesting using TEG and PV cell for low power application

    NASA Astrophysics Data System (ADS)

    Tawil, Siti Nooraya Mohd; Zainal, Mohd Zulkarnain

    2018-02-01

    A thermoelectric generator (TEG) module and a photovoltaic (PV) cell were used to harvest energy from temperature gradients of ambient heat sources and from sunlight. The outputs of the TEG and PV cell were connected to a power management circuit consisting of a step-up DC-DC converter that raises the output voltage to supply a low-power application such as a wireless communication module; the photovoltaic cell also charges an energy storage element that powers a fan for cooling the thermoelectric generator. A switch is used as a selector to choose the input source, either the photovoltaic cell or the thermoelectric generator, to drive the step-up DC-DC converter. To turn on the DC-DC step-up converter, the input must be greater than 3 V. The energy harvester was designed to be portable and usable continuously. Multiple sources are used in this energy harvesting system to ensure that it can work in any weather condition. This energy harvesting system has the potential to be used in military operations and environments that require sustainable energy resources.

  5. Global magnetohydrodynamic simulations on multiple GPUs

    NASA Astrophysics Data System (ADS)

    Wong, Un-Hong; Wong, Hon-Cheng; Ma, Yonghui

    2014-01-01

    Global magnetohydrodynamic (MHD) models play the major role in investigating the solar wind-magnetosphere interaction. However, the huge computation requirement in global MHD simulations is also the main problem that needs to be solved. With the recent development of modern graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA), it is possible to perform global MHD simulations in a more efficient manner. In this paper, we present a global magnetohydrodynamic (MHD) simulator on multiple GPUs using CUDA 4.0 with GPUDirect 2.0. Our implementation is based on the modified leapfrog scheme, which is a combination of the leapfrog scheme and the two-step Lax-Wendroff scheme. GPUDirect 2.0 is used in our implementation to drive multiple GPUs. All data transferring and kernel processing are managed with CUDA 4.0 API instead of using MPI or OpenMP. Performance measurements are made on a multi-GPU system with eight NVIDIA Tesla M2050 (Fermi architecture) graphics cards. These measurements show that our multi-GPU implementation achieves a peak performance of 97.36 GFLOPS in double precision.

  6. Requirements for rapid plasmid ColE1 copy number adjustments: a mathematical model of inhibition modes and RNA turnover rates.

    PubMed

    Paulsson, J; Nordström, K; Ehrenberg, M

    1998-01-01

    The random distribution of ColE1 plasmids between the daughter cells at cell division introduces large copy number variations. Statistical variation associated with limited copy number in single cells also causes fluctuations to emerge spontaneously during the cell cycle. Efficient replication control out of steady state is therefore important to tame such stochastic effects of small numbers. In the present model, the dynamic features of copy number control are divided into two parts: first, how sharply the replication frequency per plasmid responds to changes in the concentration of the plasmid-coded inhibitor, RNA I, and second, how tightly RNA I and plasmid concentrations are coupled. Single (hyperbolic)- and multiple (exponential)-step inhibition mechanisms are compared out of steady state and it is shown how the response in replication frequency depends on the mode of inhibition. For both mechanisms, sensitivity of inhibition is "bought" at the expense of a rapid turnover of a replication preprimer, RNA II. Conventional, single-step, inhibition kinetics gives a sloppy replication control even at high RNA II turnover rates, whereas multiple-step inhibition has the potential of working with unlimited precision. When plasmid concentration changes rapidly, RNA I must be degraded rapidly to be "up to date" with the change. Adjustment to steady state is drastically impaired when the turnover rate constants of RNA I decrease below certain thresholds, but is basically unaffected for a corresponding increase. Several features of copy number control that are shown to be crucial for the understanding of ColE1-type plasmids still remain to be experimentally characterized. It is shown how steady-state properties reflect dynamics at the heart of regulation and therefore can be used to discriminate between fundamentally different copy number control mechanisms. The experimental tests of the predictions made require carefully planned assays, and some suggestions for suitable experiments arise naturally from the present work. It is also discussed how the presence of the Rom protein may affect dynamic qualities of copy number control. Copyright 1998 Academic Press.
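
    The contrast between the two inhibition modes can be caricatured in a few lines: for a hyperbolic (single-step) mechanism the log-log sensitivity of the replication rate to inhibitor concentration is bounded by one, while for an exponential (multiple-step) mechanism it grows without limit. Parameter values below are hypothetical.

    ```python
    import numpy as np

    K = 20.0   # inhibitor level giving substantial repression (hypothetical)
    k = 2.0    # maximal replication rate per plasmid (hypothetical)

    hyperbolic = lambda I: k / (1.0 + I / K)        # single-step inhibition
    exponential = lambda I: k * np.exp(-I / K)      # multiple-step inhibition

    # Sensitivity: -dln(rate)/dln(I), the relative response to a relative change in I.
    for I in (10.0, 20.0, 40.0):
        sens_h = (I / K) / (1.0 + I / K)            # always < 1: sloppy control
        sens_e = I / K                               # unbounded: sharp control
        print(f"I={I:5.1f}  rate_h={hyperbolic(I):.2f}  rate_e={exponential(I):.2f}  "
              f"sens_h={sens_h:.2f}  sens_e={sens_e:.2f}")
    ```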

  7. To repair or not to repair: with FAVOR there is no question

    NASA Astrophysics Data System (ADS)

    Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas

    2016-10-01

    In the mask shop the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly difficult. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions leading to fewer human error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources which provides more information about our products and processes. When it comes to handling mask defects, for instance, the process consists largely of time consuming manual interactions that are error prone and often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multiple step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and features with higher complexity contribute to a higher number of defects which must be handled as well as a higher level of complexity. In this paper the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. From automation of aerial image analysis to the use of data driven decision making to predict and propose the optimized back end of line process flow, productivity and reliability improvements are targeted by smart automation. Additionally the generation of the ideal aerial image from the design and several repair enhancement features offer additional capabilities to improve the efficiency and yield associated with defect handling.

  8. Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.

    2014-12-01

    Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available for further slip and for subsequent earthquakes. This suite of models reveals that efficiency may be a useful tool for determining the relative seismic hazard of different segmented fault systems, while accounting for coseismic damage zone production is critical in assessing fault interactions and the associated energy budgets of specific systems.
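
    As a rough guide to the bookkeeping described above, one plausible budget (symbols illustrative, not the authors' notation) is

        \[ W_{\mathrm{ext}} = \Delta E_{\mathrm{int}} + W_{\mathrm{init}} + W_{\mathrm{fric}} + E_{\mathrm{rad}} \]

    where W_ext is the tectonic work applied externally, ΔE_int the change in internal strain energy, W_init the work to overcome fault strength and initiate slip, W_fric the work against frictional resistance during slip, and E_rad the radiated seismic energy.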

  9. Multiple Cylinder Free-Piston Stirling Machinery

    NASA Astrophysics Data System (ADS)

    Berchowitz, David M.; Kwon, Yong-Rak

    For piston-cylinder type machinery, there is a point in capacity or power where an advantage in specific power accrues with an increasing number of piston-cylinder assemblies. In the case of Stirling machinery where primary energy is transferred across the casing wall of the machine, this consideration is even more important. This is due primarily to the difference in scaling of basic power and the required heat transfer. Heat transfer is found to be progressively limited as the size of the machine increases. Multiple cylinder machines tend to preserve the surface area to volume ratio at more favorable levels. In addition, the spring effect of the working gas in the so-called alpha configuration is often sufficient to provide a high frequency resonance point that improves the specific power. There are a number of possible multiple cylinder configurations. The simplest is an opposed pair of piston-displacer machines (beta configuration). A three-cylinder machine requires stepped pistons to obtain proper volume phase relationships. Four to six cylinder configurations are also possible. A small demonstrator inline four cylinder alpha machine has been built to demonstrate both cooling operation and power generation. Data from this machine verifies theoretical expectations and is used to extrapolate the performance of future machines. Vibration levels are discussed and it is argued that some multiple cylinder machines have no linear component to the casing vibration but may have a nutating couple. Example applications are discussed, ranging from general-purpose coolers and computer cooling to exhaust-heat power extraction and some high-power engines.
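
    A back-of-envelope scaling consistent with the surface-to-volume argument above (a hedged sketch, not taken from the paper): for a fixed total swept volume V split across N similar cylinders, each cylinder has linear size l ∝ (V/N)^{1/3}, so the total heat-transfer area per unit volume scales as

        \[ \frac{A_{\mathrm{total}}}{V} \propto \frac{N\,(V/N)^{2/3}}{V} = N^{1/3}\, V^{-1/3} \]

    i.e., doubling the number of cylinders at constant capacity improves area per unit volume by a factor of 2^{1/3} ≈ 1.26.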

  10. MIMO nonlinear ultrasonic tomography by propagation and backpropagation method.

    PubMed

    Dong, Chengdong; Jin, Yuanwei

    2013-03-01

    This paper develops a fast ultrasonic tomographic imaging method in a multiple-input multiple-output (MIMO) configuration using the propagation and backpropagation (PBP) method. By this method, ultrasonic excitation signals from multiple sources are transmitted simultaneously to probe the objects immersed in the medium. The scattering signals are recorded by multiple receivers. Utilizing the nonlinear ultrasonic wave propagation equation and the received time domain scattered signals, the objects are to be reconstructed iteratively in three steps. First, the propagation step calculates the predicted acoustic potential data at the receivers using an initial guess. Second, the difference signal between the predicted value and the measured data is calculated. Third, the backpropagation step computes updated acoustical potential data by backpropagating the difference signal to the same medium computationally. Unlike the conventional PBP method for tomographic imaging where each source takes turns to excite the acoustical field until all the sources are used, the developed MIMO-PBP method achieves faster image reconstruction by utilizing multiple source simultaneous excitation. Furthermore, we develop an orthogonal waveform signaling method using a waveform delay scheme to reduce the impact of speckle patterns in the reconstructed images. By numerical experiments we demonstrate that the proposed MIMO-PBP tomographic imaging method results in faster convergence and achieves superior imaging quality.
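
    The three-step iteration lends itself to a compact sketch. The following is illustrative only: forward_model and backpropagate are placeholders standing in for the nonlinear ultrasonic propagation operator and its adjoint, which the paper derives from the wave equation.

        import numpy as np

        def mimo_pbp(measured, forward_model, backpropagate, n_iter=50, step=0.5):
            # measured: time-domain signals at all receivers, with every source
            # transmitting simultaneously (the MIMO configuration)
            obj = np.zeros_like(backpropagate(measured))      # initial guess
            for _ in range(n_iter):
                predicted = forward_model(obj)                # step 1: propagate guess to receivers
                residual = predicted - measured               # step 2: difference signal
                obj -= step * backpropagate(residual)         # step 3: backpropagate correction
            return obj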

  11. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship was by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be either by modifications of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
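
    A minimal sketch of one common construction of the genomic relationship matrix (VanRaden's first method); the variable names and the 0.01*I stabilization before inversion are illustrative assumptions, not taken from the study. The @ operator dispatches to an optimized BLAS routine, the kind of "specific matrix multiplication subroutine" the abstract compares against naive do loops.

        import numpy as np

        def genomic_relationship(M):
            # M: n_animals x n_snps genotypes coded 0/1/2 (copies of one allele)
            p = M.mean(axis=0) / 2.0                    # estimated allele frequencies
            Z = M - 2.0 * p                             # center each SNP column
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

        # inversion via a LAPACK-backed routine, as in the study:
        # G = genomic_relationship(M)
        # G_inv = np.linalg.inv(G + 0.01 * np.eye(G.shape[0]))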

  12. Multiplication Fact Fluency Using Doubles

    ERIC Educational Resources Information Center

    Flowers, Judith M.; Rubenstein, Rheta N.

    2010-01-01

    Not knowing multiplication facts creates a gap in a student's mathematics development and undermines confidence and disposition toward further mathematical learning. Learning multiplication facts is a first step in proportional reasoning, "the capstone of elementary arithmetic and the gateway to higher mathematics" (NRC 2001, p. 242). Proportional…

  13. The Fanconi anemia pathway promotes replication-dependent DNA interstrand crosslink repair

    PubMed Central

    Knipscheer, Puck; Räschle, Markus; Smogorzewska, Agata; Enoiu, Milica; Ho, The Vinh; Schärer, Orlando D.; Elledge, Stephen J.; Walter, Johannes C.

    2010-01-01

    Fanconi anemia is a human cancer predisposition syndrome caused by mutations in thirteen Fanc genes. The disorder is characterized by genomic instability and cellular hypersensitivity to chemicals that generate DNA interstrand crosslinks (ICLs). A central event in the activation of the Fanconi anemia pathway is the mono-ubiquitylation of the FANCI-FANCD2 complex, but how this complex confers ICL resistance remains enigmatic. We make use of a cell-free system to show that the FANCI-FANCD2 complex is required for replication-dependent ICL repair. Removal of FANCD2 from extracts inhibits nucleolytic incisions near the ICL as well as translesion DNA synthesis past the lesion. Reversal of these defects requires ubiquitylated FANCI-FANCD2. Our results show that multiple steps of the essential S phase ICL repair mechanism fail when the Fanconi anemia pathway is compromised. PMID:19965384

  14. The Fanconi anemia pathway promotes replication-dependent DNA interstrand cross-link repair.

    PubMed

    Knipscheer, Puck; Räschle, Markus; Smogorzewska, Agata; Enoiu, Milica; Ho, The Vinh; Schärer, Orlando D; Elledge, Stephen J; Walter, Johannes C

    2009-12-18

    Fanconi anemia is a human cancer predisposition syndrome caused by mutations in 13 Fanc genes. The disorder is characterized by genomic instability and cellular hypersensitivity to chemicals that generate DNA interstrand cross-links (ICLs). A central event in the activation of the Fanconi anemia pathway is the mono-ubiquitylation of the FANCI-FANCD2 complex, but how this complex confers ICL resistance remains enigmatic. Using a cell-free system, we showed that FANCI-FANCD2 is required for replication-coupled ICL repair in S phase. Removal of FANCD2 from extracts inhibits both nucleolytic incisions near the ICL and translesion DNA synthesis past the lesion. Reversal of these defects requires ubiquitylated FANCI-FANCD2. Our results show that multiple steps of the essential S-phase ICL repair mechanism fail when the Fanconi anemia pathway is compromised.

  15. Development of the Modified Four Square Step Test and its reliability and validity in people with stroke.

    PubMed

    Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S

    2016-01-01

    Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).

  16. The Relationship Between Non-Symbolic Multiplication and Division in Childhood

    PubMed Central

    McCrink, Koleen; Shafto, Patrick; Barth, Hilary

    2016-01-01

    Children without formal education in addition and subtraction are able to perform multi-step operations over an approximate number of objects. Further, their performance improves when solving approximate (but not exact) addition and subtraction problems that allow for inversion as a shortcut (e.g., a + b − b = a). The current study examines children’s ability to perform multi-step operations, and the potential for an inversion benefit, for the operations of approximate, non-symbolic multiplication and division. Children were trained to compute a multiplication and division scaling factor (*2 or /2, *4 or /4), and then tested on problems that combined two of these factors in a way that either allowed for an inversion shortcut (e.g., 8 * 4 / 4) or did not (e.g., 8 * 4 / 2). Children’s performance was significantly better than chance for all scaling factors during training, and they successfully computed the outcomes of the multi-step testing problems. They did not exhibit a performance benefit for problems with the a * b / b structure, suggesting they did not draw upon inversion reasoning as a logical shortcut to help them solve the multi-step test problems. PMID:26880261

  17. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting life cycle. The different forecasting methods identified were based on the use of multiple-decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  18. The selection of adhesive systems for resin-based luting agents.

    PubMed

    Carville, Rebecca; Quinn, Frank

    2008-01-01

    The use of resin-based luting agents is ever expanding with the development of adhesive dentistry. A multitude of different adhesive systems are used with resin-based luting agents, and new products are introduced to the market frequently. Traditional adhesives generally required a multiple step bonding procedure prior to cementing with active resin-based luting materials; however, combined agents offer a simple application procedure. Self-etching 'all-in-one' systems claim that there is no need for the use of a separate adhesive process. The following review addresses the advantages and disadvantages of the available adhesive systems used with resin-based luting agents.

  19. Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force

    NASA Astrophysics Data System (ADS)

    Grubmuller, Helmut; Heymann, Berthold; Tavan, Paul

    1996-02-01

    The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.

  20. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, an organization creates space for competitive prices that minimize the burden on its spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and is combined to get the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
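
    A hedged sketch of how layer reliabilities might be combined, assuming independent layers in series and redundant physical servers across providers in parallel; this is the common textbook composition, not necessarily the paper's algorithm.

        def series(*layers):
            # the system works only if every layer works
            r = 1.0
            for x in layers:
                r *= x
            return r

        def parallel(servers):
            # redundant servers: the layer fails only if all of them fail
            q = 1.0
            for x in servers:
                q *= (1.0 - x)
            return 1.0 - q

        # e.g., two providers' physical servers backing one application:
        # series(0.99, 0.995, parallel([0.95, 0.97]))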

  1. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the elements, with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
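
    A toy sketch of the gather/scatter at the heart of an exchange-style step (illustrative only; the actual algorithm targets SIMD hardware and element-wise data layout):

        import numpy as np

        conn = np.array([[0, 1], [1, 2]])                    # two 1-D elements sharing node 1
        elem_force = np.array([[1.0, -1.0], [0.5, -0.5]])    # per-element nodal forces
        nodal = np.zeros(3)
        np.add.at(nodal, conn, elem_force)                   # exchange: sum forces at shared nodes
        exchanged = nodal[conn]                              # each element reads back its own nodes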

  2. Matrix-addressed analog ferroelectric memory

    NASA Astrophysics Data System (ADS)

    Lemons, R. A.; Grogan, J. K.; Thompson, J. S.

    1980-08-01

    A matrix-addressed analog memory, which uses multiple ferroelectric domain walls to address columns of words, is demonstrated. It is shown that the analog information is stored as a pattern in the metallization on the surface of the crystal, making a read-only memory. The pattern is defined photolithographically in a way compatible with the simultaneous fabrication of many devices. Attention is given to the performance results, noting that the advantage of the device is that analog information can be stored with high density in a single mask step. Finally, it is shown that potential applications are in systems which require repetitive output from a limited vocabulary of spoken words.

  3. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. © 1988.

  4. Key Steps in the Special Review Process

    EPA Pesticide Factsheets

    EPA uses this process when it has reason to believe that the use of a pesticide may result in unreasonable adverse effects on people or the environment. Steps include comprehensive risk and benefit analyses and multiple Position Documents.

  5. Scanning sequences after Gibbs sampling to find multiple occurrences of functional elements

    PubMed Central

    Tharakaraman, Kannan; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L; Landsman, David; Spouge, John L

    2006-01-01

    Background: Many DNA regulatory elements occur as multiple instances within a target promoter. Gibbs sampling programs for finding DNA regulatory elements de novo can be prohibitively slow in locating all instances of such an element in a sequence set. Results: We describe an improvement to the A-GLAM computer program, which predicts regulatory elements within DNA sequences with Gibbs sampling. The improvement adds an optional "scanning step" after Gibbs sampling. Gibbs sampling produces a position specific scoring matrix (PSSM). The new scanning step resembles an iterative PSI-BLAST search based on the PSSM. First, it assigns an "individual score" to each subsequence of appropriate length within the input sequences using the initial PSSM. Second, it computes an E-value from each individual score, to assess the agreement between the corresponding subsequence and the PSSM. Third, it permits subsequences with E-values falling below a threshold to contribute to the underlying PSSM, which is then updated using the Bayesian calculus. A-GLAM iterates its scanning step to convergence, at which point no new subsequences contribute to the PSSM. After convergence, A-GLAM reports predicted regulatory elements within each sequence in order of increasing E-values, so users have a statistical evaluation of the predicted elements in a convenient presentation. Thus, although the Gibbs sampling step in A-GLAM finds at most one regulatory element per input sequence, the scanning step can now rapidly locate further instances of the element in each sequence. Conclusion: Datasets from experiments determining the binding sites of transcription factors were used to evaluate the improvement to A-GLAM. Typically, the datasets included several sequences containing multiple instances of a regulatory motif. The improvements to A-GLAM permitted it to predict the multiple instances. PMID:16961919
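
    The scanning step can be pictured with a minimal PSSM scan (illustrative only; a fixed score threshold stands in for A-GLAM's E-value test, and the iterative PSSM update is omitted):

        def scan_with_pssm(seq, pssm, threshold):
            # pssm: one dict per motif column mapping 'A'/'C'/'G'/'T' to a log-odds score
            w = len(pssm)
            hits = []
            for i in range(len(seq) - w + 1):
                score = sum(pssm[j][seq[i + j]] for j in range(w))
                if score >= threshold:        # subsequences passing would update the PSSM
                    hits.append((i, score))
            return sorted(hits, key=lambda h: -h[1])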

  6. Transportation systems evaluation methodology development and applications, phase 3

    NASA Technical Reports Server (NTRS)

    Kuhlthau, A. R.; Jacobson, I. D.; Richards, L. C.

    1981-01-01

    Transportation systems or proposed changes in current systems are evaluated. Four principal evaluation criteria are incorporated in the process: operating performance characteristics as viewed by potential users; decisions based on the perceived impacts of the system; estimating what is required to reduce the system to practice; and predicting the ability of the concept to attract financial support. A series of matrix multiplications, in which the various matrices represent evaluations in a logical sequence of the various discrete steps in a management decision process, is used. One or more alternatives are compared with the current situation, and the result provides a numerical rating which determines the desirability of each alternative relative to the norm and to each other. The steps in the decision process are isolated so that contributions of each to the final result are readily analyzed. The ability to protect against bias on the part of the evaluators, and the fact that system parameters which are basically qualitative in nature can be easily included, are advantageous.
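
    The matrix chain can be illustrated with a small numeric sketch (hypothetical numbers and matrix shapes, not from the report):

        import numpy as np

        A = np.array([[0.7, 0.9], [1.0, 1.0]])    # 2 alternatives scored on 2 attributes
        W1 = np.array([[0.6, 0.2], [0.4, 0.8]])   # attributes -> evaluation criteria
        w2 = np.array([0.5, 0.5])                 # criteria -> overall rating
        ratings = A @ W1 @ w2                     # one multiplication per decision step
        print(ratings / ratings[1])               # rate alternative 0 relative to the norm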

  7. Synoptic reporting in tumor pathology: advantages of a web-based system.

    PubMed

    Qu, Zhenhong; Ninan, Shibu; Almosa, Ahmed; Chang, K G; Kuruvilla, Supriya; Nguyen, Nghia

    2007-06-01

    The American College of Surgeons Commission on Cancer (ACS-CoC) mandates that pathology reports at ACS-CoC-approved cancer programs include all scientifically validated data elements for each site and tumor specimen. The College of American Pathologists (CAP) has produced cancer checklists in static text formats to assist reporting. To be inclusive, the CAP checklists are pages long, requiring extensive text editing and multiple intermediate steps. We created a set of dynamic tumor-reporting templates, using Microsoft Active Server Page (ASP.NET), with drop-down list and data-compile features, and added a reminder function to indicate missing information. Users can access this system on the Internet, prepare the tumor report by selecting relevant data from drop-down lists with an embedded tumor staging scheme, and directly transfer the final report into a laboratory information system by using the copy-and-paste function. By minimizing extensive text editing and eliminating intermediate steps, this system can reduce reporting errors, improve work efficiency, and increase compliance.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.

  9. Recovery of permittivity and depth from near-field data as a step toward infrared nanotomography.

    PubMed

    Govyadinov, Alexander A; Mastel, Stefan; Golmar, Federico; Chuvilin, Andrey; Carney, P Scott; Hillenbrand, Rainer

    2014-07-22

    The increasing complexity of composite materials structured on the nanometer scale requires highly sensitive analytical tools for nanoscale chemical identification, ideally in three dimensions. While infrared near-field microscopy provides high chemical sensitivity and nanoscopic spatial resolution in two dimensions, the quantitative extraction of material properties of three-dimensionally structured samples has not been achieved yet. Here we introduce a method to perform rapid recovery of the thickness and permittivity of simple 3D structures (such as thin films and nanostructures) from near-field measurements, and provide its first experimental demonstration. This is accomplished via a novel nonlinear invertible model of the imaging process, taking advantage of the near-field data recorded at multiple harmonics of the oscillation frequency of the near-field probe. Our work enables quantitative nanoscale-resolved optical studies of thin films, coatings, and functionalization layers, as well as the structural analysis of multiphase materials, among others. It represents a major step toward the further goal of near-field nanotomography.

  10. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    PubMed

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming of the illumination of data arrays, any complex logic operations of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.
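
    A runnable sketch of two-step carry-free signed-digit addition, using the classic neighbor-dependent recoding; this is not the paper's digit-set-restricted, reference-digit variant, and the optical binary-logic mapping is omitted.

        def msd_add(x, y):
            # x, y: little-endian digit lists over {-1, 0, 1}
            n = max(len(x), len(y)) + 1
            x = x + [0] * (n - len(x))
            y = y + [0] * (n - len(y))
            t = [0] * (n + 1)    # transfer (carry) word
            w = [0] * n          # interim sum word
            for i in range(n):   # step 1: recode each digit pair
                s = x[i] + y[i]
                lower = x[i - 1] + y[i - 1] if i > 0 else 0
                if s == 2:     t[i + 1], w[i] = 1, 0
                elif s == 1:   t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
                elif s == 0:   t[i + 1], w[i] = 0, 0
                elif s == -1:  t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
                else:          t[i + 1], w[i] = -1, 0
            return [w[i] + t[i] for i in range(n)]   # step 2: digit-wise, carry-free

        def msd_value(d):
            return sum(digit * (2 ** i) for i, digit in enumerate(d))

        assert msd_value(msd_add([1, 1], [1, 0])) == 4    # 3 + 1
        assert msd_value(msd_add([1, 0], [0, -1])) == -1  # 1 + (-2)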

  11. Human milk inactivates pathogens individually, additively, and synergistically.

    PubMed

    Isaacs, Charles E

    2005-05-01

    Breast-feeding can reduce the incidence and the severity of gastrointestinal and respiratory infections in the suckling neonate by providing additional protective factors to the infant's mucosal surfaces. Human milk provides protection against a broad array of infectious agents through redundancy. Protective factors in milk can target multiple early steps in pathogen replication and target each step with more than one antimicrobial compound. The antimicrobial activity in human milk results from protective factors working not only individually but also additively and synergistically. Lipid-dependent antimicrobial activity in milk results from the additive activity of all antimicrobial lipids and not necessarily the concentration of one particular lipid. Antimicrobial milk lipids and peptides can work synergistically to decrease both the concentrations of individual compounds required for protection and, as importantly, greatly reduce the time needed for pathogen inactivation. The more rapidly pathogens are inactivated the less likely they are to establish an infection. The total antimicrobial protection provided by human milk appears to be far more than can be elucidated by examining protective factors individually.

  12. Invariant U2 snRNA nucleotides form a stem loop to recognize the intron early in splicing

    PubMed Central

    Perriman, Rhonda; Ares, Manuel

    2010-01-01

    U2 snRNA-intron branchpoint pairing is a critical step in pre-mRNA recognition by the splicing apparatus, but the mechanism by which these two RNAs engage each other is unknown. Here we identify a new U2 snRNA structure, the branchpoint interaction stem-loop (BSL), that presents the U2 nucleotides that will contact the intron. We provide evidence that the BSL forms prior to interaction with the intron, and is disrupted by the DExD/H protein Prp5p during engagement of the snRNA with the intron. In vitro splicing complex assembly in a BSL-destabilized mutant extract suggests that the BSL is required at a previously unrecognized step between commitment complex and prespliceosome formation. The extreme evolutionary conservation of the BSL suggests it represents an ancient structural solution to the problem of intron branchpoint recognition by dynamic RNA elements that must serve multiple functions at other times during splicing. PMID:20471947

  13. Cardiovascular risk in rheumatoid arthritis: assessment, management and next steps

    PubMed Central

    Zegkos, Thomas; Kitas, George; Dimitroulas, Theodoros

    2016-01-01

    Rheumatoid arthritis (RA) is associated with increased cardiovascular (CV) morbidity and mortality which cannot be fully explained by traditional CV risk factors; cumulative inflammatory burden and antirheumatic medication-related cardiotoxicity seem to be important contributors. Despite the acknowledgment and appreciation of CV disease burden in RA, optimal management of individuals with RA represents a challenging task which remains suboptimal. To address this need, the European League Against Rheumatism (EULAR) published recommendations suggesting the adaptation of traditional risk scores by using a multiplication factor of 1.5 if two of three specific criteria are fulfilled. Such guidance requires proper coordination of several medical specialties, including general practitioners, rheumatologists, cardiologists, exercise physiologists and psychologists to achieve a desirable result. Tight control of disease activity, management of traditional risk factors and lifestyle modification represent, amongst others, the most important steps in improving CV disease outcomes in RA patients. Rather than enumerating studies and guidelines, this review attempts to critically appraise current literature, highlighting future perspectives of CV risk management in RA. PMID:27247635
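
    The EULAR adaptation summarized above reduces to a one-line adjustment (a sketch of the rule as stated, with criteria_met counting how many of the three RA-specific criteria apply):

        def eular_adjusted_risk(base_risk, criteria_met):
            # multiply the traditional CV risk score by 1.5 when at least
            # two of the three RA-specific criteria are fulfilled
            return base_risk * (1.5 if criteria_met >= 2 else 1.0)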

  14. Omni Directional Multimaterial Soft Cylindrical Actuator and Its Application as a Steerable Catheter.

    PubMed

    Gul, Jahan Zeb; Yang, Young Jin; Su, Kim Young; Choi, Kyung Hyun

    2017-09-01

    Soft actuators with a complex range of motion have led to strong interest in applications such as biomedical catheters and steerable soft pipe inspectors. To facilitate the use of soft actuators in devices where controlled, complex, precise, and fast motion is required, a structurally controlled Omni directional soft cylindrical actuator is fabricated in a modular way using a multilayer composite of polylactic acid-based conductive graphene, shape memory polymer, shape memory alloy, and polyurethane. Multiple fabrication techniques are discussed step by step, mainly including fused deposition modeling-based 3D printing, dip coating, and UV curing. A mathematical control model is used to generate patterned electrical signals for the Omni directional deformations. Characterizations like structural control, bending, recovery, path, and thermal effect are carried out with and without load (10 g) to verify the new cylindrical design concept. Finally, the application of the Omni directional actuator as a steerable catheter is explored by fabricating a scaled version of a carotid artery through 3D printing using a semitransparent material.

  15. From Informal Safety-Critical Requirements to Property-Driven Formal Validation

    NASA Technical Reports Server (NTRS)

    Cimatti, Alessandro; Roveri, Marco; Susi, Angelo; Tonetta, Stefano

    2008-01-01

    Most of the efforts in formal methods have historically been devoted to comparing a design against a set of requirements. The validation of the requirements themselves, however, has often been disregarded, and it can be considered a largely open problem, which poses several challenges. The first challenge is given by the fact that requirements are often written in natural language, and may thus contain a high degree of ambiguity. Despite the progress in Natural Language Processing techniques, the task of understanding a set of requirements cannot be automatized, and must be carried out by domain experts, who are typically not familiar with formal languages. Furthermore, in order to retain a direct connection with the informal requirements, the formalization cannot follow standard model-based approaches. The second challenge lies in the formal validation of requirements. On one hand, it is not even clear which are the correctness criteria or the high-level properties that the requirements must fulfill. On the other hand, the expressivity of the language used in the formalization may go beyond the theoretical and/or practical capacity of state-of-the-art formal verification. In order to solve these issues, we propose a new methodology that comprises a chain of steps, each supported by a specific tool. The main steps are the following. First, the informal requirements are split into basic fragments, which are classified into categories, and dependency and generalization relationships among them are identified. Second, the fragments are modeled using a visual language such as UML. The UML diagrams are both syntactically restricted (in order to guarantee a formal semantics), and enriched with a highly controlled natural language (to allow for modeling static and temporal constraints). Third, an automatic formal analysis phase iterates over the modeled requirements, by combining several, complementary techniques: checking consistency; verifying whether the requirements entail some desirable properties; verifying whether the requirements are consistent with selected scenarios; diagnosing inconsistencies by identifying inconsistent cores; identifying vacuous requirements; constructing multiple explanations by enabling the fault-tree analysis related to particular fault models; verifying whether the specification is realizable.

  16. Technique for predicting ground-water discharge to surface coal mines and resulting changes in head

    USGS Publications Warehouse

    Weiss, L.S.; Galloway, D.L.; Ishii, Audrey L.

    1986-01-01

    Changes in seepage flux and head (groundwater level) from groundwater drainage into a surface coal mine can be predicted by a technique that considers drainage from the unsaturated zone. The user applies site-specific data to precalculated head and seepage-flux profiles. Groundwater flow through hypothetical aquifer cross sections was simulated using the U.S. Geological Survey finite-difference model, VS2D, which considers variably saturated two-dimensional flow. Conceptual models considered were (1) drainage to a first cut, and (2) drainage to multiple cuts, which includes drainage effects of an area surface mine. Dimensionless head and seepage flux profiles from 246 simulations are presented. Step-by-step instructions and examples are presented. Users are required to know aquifer characteristics and to estimate size and timing of the mine operation at a proposed site. Calculated groundwater drainage to the mine is from one excavated face only. First cut considers confined and unconfined aquifers of a wide range of permeabilities; multiple cuts considers unconfined aquifers of higher permeabilities only. The technique, developed for Illinois coal-mining regions that use area surface mining and evaluated with an actual field example, will be useful in assessing potential hydrologic impacts of mining. Application is limited to hydrogeologic settings and mine operations similar to those considered. Fracture flow, recharge, and leakage are not considered. (USGS)

  17. Indoleamine 2,3 Dioxygenase as a Potential Therapeutic Target in Huntington's Disease.

    PubMed

    Mazarei, Gelareh; Leavitt, Blair R

    2015-01-01

    Within the past decade, there has been increasing interest in the role of tryptophan (Trp) metabolites and the kynurenine pathway (KP) in diseases of the brain such as Huntington's disease (HD). Evidence is accumulating to suggest that this pathway is imbalanced in neurologic disease states. The KP diverges into two branches that can lead to production of either neuroprotective or neurotoxic metabolites. In one branch, kynurenine (Kyn) produced as a result of tryptophan (Trp) catabolism is further metabolized to neurotoxic metabolites such as 3-hydroxykynurenine (3-HK) and quinolinic acid (QA). In the other branch, Kyn is converted to the neuroprotective metabolite kynurenic acid (KA). The enzyme Indoleamine 2,3 dioxygenase (IDO1) catalyzes the conversion of Trp into Kyn, the first and rate-limiting enzymatic step of the KP. This reaction takes place throughout the body in multiple cell types as a required step in the degradation of the essential amino acid Trp. Studies of IDO1 in brain have focused primarily on a potential role in depression, immune tolerance associated with brain tumours, and multiple sclerosis; however the role of this enzyme in neurodegenerative disease has garnered significant attention in recent years. This review will provide a summary of the current understanding of the role of IDO1 in Huntington's disease and will assess this enzyme as a potential therapeutic target for HD.

  18. Melatonin biosynthesis in plants: multiple pathways catalyze tryptophan to melatonin in the cytoplasm or chloroplasts.

    PubMed

    Back, Kyoungwhan; Tan, Dun-Xian; Reiter, Russel J

    2016-11-01

    Melatonin is an animal hormone as well as a signaling molecule in plants. It was first identified in plants in 1995, and almost all of the enzymes responsible for melatonin biosynthesis in these species have since been characterized. Melatonin biosynthesis from tryptophan requires four-step reactions. However, six genes, that is, TDC, TPH, T5H, SNAT, ASMT, and COMT, have been implicated in the synthesis of melatonin in plants, suggesting the presence of multiple pathways. Two major pathways have been proposed based on the enzyme kinetics: One is the tryptophan/tryptamine/serotonin/N-acetylserotonin/melatonin pathway, which may occur under normal growth conditions; the other is the tryptophan/tryptamine/serotonin/5-methoxytryptamine/melatonin pathway, which may occur when plants produce large amounts of serotonin, for example, upon senescence. The melatonin biosynthetic capacity associated with conversion of tryptophan to serotonin is much higher than that associated with conversion of serotonin to melatonin, which yields a low level of melatonin synthesis in plants. Many melatonin intermediates are produced in various subcellular compartments, such as the cytoplasm, endoplasmic reticulum, and chloroplasts, which either facilitates or impedes the subsequent enzymatic steps. Depending on the pathways, the final subcellular sites of melatonin synthesis vary at either the cytoplasm or chloroplasts, which may differentially affect the mode of action of melatonin in plants. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Interfacial thiol-ene photoclick reactions for forming multilayer hydrogels.

    PubMed

    Shih, Han; Fraser, Andrew K; Lin, Chien-Chi

    2013-03-13

    Interfacial visible light-mediated thiol-ene photoclick reactions were developed for preparing step-growth hydrogels with multilayer structures. The effect of a noncleavage type photoinitiator eosin-Y on visible-light-mediated thiol-ene photopolymerization was first characterized using in situ photorheometry, gel fraction, and equilibrium swelling ratio. Next, spectrophotometric properties of eosin-Y in the presence of various relevant macromer species were evaluated using ultraviolet-visible light (UV-vis) spectrometry. It was determined that eosin-Y was able to reinitiate the thiol-ene photoclick reaction, even after light exposure. Because of its small molecular weight, most eosin-Y molecules readily leached out from the hydrogels. The diffusion of residual eosin-Y from preformed hydrogels was exploited for fabricating multilayer step-growth hydrogels. Interfacial hydrogel coating was formed via the same visible-light-mediated gelation mechanism without adding fresh initiator. The thickness of the thiol-ene gel coating could be easily controlled by adjusting visible light exposure time, eosin-Y concentration initially loaded in the core gel, or macromer concentration in the coating solution. The major benefits of this interfacial thiol-ene coating system include its simplicity and cytocompatibility. The formation of thiol-ene hydrogels and coatings neither requires nor generates any cytotoxic components. This new gelation chemistry may have great utilities in controlled release of multiple sensitive growth factors and encapsulation of multiple cell types for tissue regeneration.

  20. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
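
    A minimal queue-connected pipeline in the spirit described above (illustrative Python, not the MIF API; MIF additionally distributes elements and runs multiple instances per stage for throughput):

        import queue, threading

        def stage(fn, inbox, outbox):
            # one pipeline element: pull a unit of data, process it, push it on
            while True:
                item = inbox.get()
                if item is None:           # poison pill shuts the stage down
                    outbox.put(None)
                    break
                outbox.put(fn(item))

        # strip tags -> tokenize -> tag words, as in the Web-crawler example
        q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
        steps = [lambda d: d.replace('<p>', ''), str.split,
                 lambda tokens: [t.title() for t in tokens]]
        for fn, a, b in zip(steps, (q0, q1, q2), (q1, q2, q3)):
            threading.Thread(target=stage, args=(fn, a, b), daemon=True).start()

        q0.put('<p>some raw text'); q0.put(None)
        print(q3.get())                    # ['Some', 'Raw', 'Text']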

  1. [Implementation of a rational standard of hygiene for preparation of operating rooms].

    PubMed

    Bauer, M; Scheithauer, S; Moerer, O; Pütz, H; Sliwa, B; Schmidt, C E; Russo, S G; Waeschle, R M

    2015-10-01

    The assurance of high standards of care is a major requirement in German hospitals while cost reduction and efficient use of resources are mandatory. These requirements are particularly evident in the high-risk and cost-intensive operating theatre field with multiple process steps. The cleaning of operating rooms (OR) between surgical procedures is of major relevance for patient safety and requires time and human resources. The hygiene procedure plan for OR cleaning between operations at the university hospital in Göttingen was revised and optimized according to the plan-do-check-act principle due to unclearly defined specifications of responsibilities and use of resources, prolonged process times and increased staff engagement. The current status was evaluated in 2012 as part of the first step "plan". The subsequent step "do" included an expert symposium with external consultants, interdisciplinary consensus conferences with an actualization of the former hygiene procedure plan and the implementation process. All staff members involved were integrated into this management change process. The penetration rate of the training and information measures as well as the acceptance and compliance with the new hygiene procedure plan were reviewed within step "check". The rates of positive swabs and air sampling as well as of postoperative wound infections were analyzed for quality control and no evidence for a reduced effectiveness of the new hygiene plan was found. After the successful implementation of these measures the next improvement cycle ("act") was performed in 2014 which led to a simplification of the hygiene plan by reduction of the number of defined cleaning and disinfection programs for preparation of the OR. The reorganization measures described led to a comprehensive commitment to the hygiene procedure plan through distinct specifications for responsibilities, for the course of action and for the use of resources. Furthermore, a simplification of the plan, a rational staff assignment and reduced process times were accomplished. Finally, potential conflicts due to insufficient evidence-based knowledge among personnel were reduced. This present project description can be used by other hospitals as a guideline for similar changes in management processes.

  2. Predicting falls in older adults using the four square step test.

    PubMed

    Cleary, Kimberly; Skornyakov, Elena

    2017-10-01

    The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.

  3. Multiple cis-acting signals, some weak by necessity, collectively direct robust transport of oskar mRNA to the oocyte.

    PubMed

    Ryu, Young Hee; Kenny, Andrew; Gim, Youme; Snee, Mark; Macdonald, Paul M

    2017-09-15

    Localization of mRNAs can involve multiple steps, each with its own cis-acting localization signals and transport factors. How is the transition between different steps orchestrated? We show that the initial step in localization of Drosophila oskar mRNA - transport from nurse cells to the oocyte - relies on multiple cis-acting signals. Some of these are binding sites for the translational control factor Bruno, suggesting that Bruno plays an additional role in mRNA transport. Although transport of oskar mRNA is essential and robust, the localization activity of individual transport signals is weak. Notably, increasing the strength of individual transport signals, or adding a strong transport signal, disrupts the later stages of oskar mRNA localization. We propose that the oskar transport signals are weak by necessity; their weakness facilitates transfer of the oskar mRNA from the oocyte transport machinery to the machinery for posterior localization. © 2017. Published by The Company of Biologists Ltd.

  4. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses the least squares or the IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecast, which is followed by using the least squares or the IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168 lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.
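
    A compact sketch of the IRWLS idea for AR coefficient estimation (with Huber-style weights; the seasonal multiplicative structure and the back-forecasting step of the method are deliberately omitted):

        import numpy as np

        def irwls_ar(series, order, n_iter=10):
            # series: 1-D NumPy array; fit y_t = sum_k coef[k] * y_{t-k-1} robustly
            y = series[order:]
            X = np.column_stack([series[order - 1 - k : len(series) - 1 - k]
                                 for k in range(order)])
            w = np.ones(len(y))
            for _ in range(n_iter):
                sw = np.sqrt(w)
                coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
                r = y - X @ coef
                c = 1.345 * np.median(np.abs(r)) / 0.6745 + 1e-12   # Huber tuning constant
                w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))    # downweight large residuals
            return coef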

  5. Metastasis Suppressor Genes: At the Interface Between the Environment and Tumor Cell Growth

    PubMed Central

    Hurst, Douglas R.; Welch, Danny R.

    2013-01-01

    The molecular mechanisms and genetic programs required for cancer metastasis are sometimes overlapping, but components are clearly distinct from those promoting growth of a primary tumor. Every sequential, rate-limiting step in the sequence of events leading to metastasis requires coordinated expression of multiple genes, necessary signaling events, and favorable environmental conditions or the ability to escape negative selection pressures. Metastasis suppressors are molecules that inhibit the process of metastasis without preventing growth of the primary tumor. The cellular processes regulated by metastasis suppressors are diverse and function at every step in the metastatic cascade. As we gain knowledge into the molecular mechanisms of metastasis suppressors and cofactors with which they interact, we learn more about the process, including appreciation that some are potential targets for therapy of metastasis, the most lethal aspect of cancer. Until now, metastasis suppressors have been described largely by their function. With greater appreciation of their biochemical mechanisms of action, the importance of context is increasingly recognized especially since tumor cells exist in myriad microenvironments. In this review, we assemble the evidence that selected molecules are indeed suppressors of metastasis, collate the data defining the biochemical mechanisms of action, and glean insights regarding how metastasis suppressors regulate tumor cell communication to and from microenvironments. PMID:21199781

  6. Strategies for Stabilizing Nitrogenous Compounds in ECLSS Wastewater: Top-Down System Design and Unit Operation Selection with Focus on Bio-Regenerative Processes for Short and Long Term Scenarios

    NASA Technical Reports Server (NTRS)

    Lunn, Griffin M.

    2011-01-01

    Water recycling and eventual nutrient recovery is crucial for surviving in or past low earth orbit. New approaches and system architecture considerations need to be addressed to meet current and future system requirements. This paper proposes a flexible system architecture that breaks down pretreatment steps into discrete areas where multiple unit operations can be considered. An overview focusing on the urea and ammonia conversion steps allows an analysis on each process's strengths and weaknesses and synergy with upstream and downstream processing. Process technologies to be covered include chemical pretreatment, biological urea hydrolysis, chemical urea hydrolysis, combined nitrification-denitrification, nitrate nitrification, anammox denitrification, and regenerative ammonia absorption through struvite formation. Biological processes are considered mainly for their ability to both maximize water recovery and to produce nutrients for future plant systems. Unit operations can be considered for traditional equivalent system mass requirements in the near term or what they can provide downstream in the form of usable chemicals or nutrients for the long term closed-loop ecological control and life support system. Optimally this would allow a system to meet the former but to support the latter without major modification.

  7. A Versatile Microfluidic Device for Automating Synthetic Biology.

    PubMed

    Shih, Steve C C; Goyal, Garima; Kim, Peter W; Koutsoubelis, Nicolas; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Singh, Anup K

    2015-10-16

    New microbes are being engineered that contain the genetic circuitry, metabolic pathways, and other cellular functions required for a wide range of applications such as producing biofuels, biobased chemicals, and pharmaceuticals. Although currently available tools are useful in improving the synthetic biology process, further improvements in physical automation would help to lower the barrier of entry into this field. We present an innovative microfluidic platform for assembling DNA fragments with 10× lower volumes (compared to those of current microfluidic platforms) and with integrated region-specific temperature control and on-chip transformation. Integration of these steps minimizes the loss of reagents and products compared to that with conventional methods, which require multiple pipetting steps. For assembling DNA fragments, we implemented three commonly used DNA assembly protocols on our microfluidic device: Golden Gate assembly, Gibson assembly, and yeast assembly (i.e., TAR cloning, DNA Assembler). We demonstrate the utility of these methods by assembling two combinatorial libraries of 16 plasmids each. Each DNA plasmid is transformed into Escherichia coli or Saccharomyces cerevisiae using on-chip electroporation and further sequenced to verify the assembly. We anticipate that this platform will enable new research that can integrate this automated microfluidic platform to generate large combinatorial libraries of plasmids and will help to expedite the overall synthetic biology process.

  8. Investigating mycobacterial topoisomerase I mechanism from the analysis of metal and DNA substrate interactions at the active site.

    PubMed

    Cao, Nan; Tan, Kemin; Annamalai, Thirunavukkarasu; Joachimiak, Andrzej; Tse-Dinh, Yuk-Ching

    2018-06-14

    We have obtained new crystal structures of Mycobacterium tuberculosis topoisomerase I, including structures with ssDNA substrate bound to the active site, with and without Mg2+ ion present. Significant enzyme conformational changes upon DNA binding place the catalytic tyrosine in a pre-transition state position for cleavage of a specific phosphodiester linkage. Meanwhile, the enzyme/DNA complex with bound Mg2+ ion may represent the post-transition state for religation in the enzyme's multiple-step DNA relaxation catalytic cycle. The first observation of a Mg2+ ion coordinated with the TOPRIM residues and DNA phosphate in a type IA topoisomerase active site allows assignment of a likely catalytic role for the metal and invites comparison with the proposed mechanism for type IIA topoisomerases. The critical function of a strictly conserved glutamic acid in the DNA cleavage step was assessed through site-directed mutagenesis. The functions assigned to the observed Mg2+ ion can account for the metal requirement for DNA rejoining but not for DNA cleavage by type IA topoisomerases. This work provides new structural insights into the more stringent requirement for DNA rejoining versus cleavage in the catalytic cycle of this essential enzyme, and further establishes the potential for selective interference with DNA rejoining in this validated TB drug target.

  9. Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005

    USGS Publications Warehouse

    Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.

    2012-01-01

    As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
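
    As a rough sketch of the kind of adjustment described, the Python fragment below reduces hydraulic conductivity with a power law once a critical Reynolds number is exceeded; the exponent, threshold, and functional form are assumptions for illustration, not the actual CFPM2 formulation.

        # Illustrative only: power-law reduction of hydraulic conductivity for
        # turbulent flow, with a user-defined exponent as in the modified CFPM2.
        def effective_conductivity(k_laminar, reynolds, re_critical=100.0, p=0.5):
            if reynolds <= re_critical:
                return k_laminar                                 # laminar: Darcy's law holds
            return k_laminar * (re_critical / reynolds) ** p     # turbulent: reduced K

        print(effective_conductivity(1e-3, reynolds=50.0))       # 0.001 (unchanged)
        print(effective_conductivity(1e-3, reynolds=500.0))      # ~4.5e-4 (reduced)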

  10. Improvement of surgical margin with a coupled saline-radio-frequency device for multiple colorectal liver metastases.

    PubMed

    Ogata, Satoshi; Kianmanesh, Reza; Varma, Deepak; Belghiti, Jacques

    2005-01-01

    Complete resection of colorectal liver metastases (LM) remains the only curative treatment. However, when LM are multiple and bilobar, only a few patients are candidates for curative surgery. We report on a 53-year-old woman with synchronous, multiple, bilobar LM from sigmoid cancer that became resectable after a multimodal strategy including preoperative systemic chemotherapy and two-step surgery. The marked decrease in tumor size after systemic chemotherapy led us to perform two-step surgery, including right portal-vein ligation and left liver metastasectomies with a coupled saline-radiofrequency device, in order to improve the surgical margin. An extended right hepatectomy was performed later to remove the remaining right liver lesions. The patient was discharged after 28 days without major complication and was recurrence-free 14 months later. We conclude that improving the surgical margin with a coupled saline-radiofrequency device is feasible and effective, avoiding a small remnant liver even after multiple tumorectomies. This multimodal strategy, including preoperative chemotherapy, two-step surgery, and tumorectomies using a coupled saline-radiofrequency device, could increase the number of patients with diffuse bilobar liver metastases who can benefit from liver resection.

  11. An evaluation of children's metered-dose inhaler technique for asthma medications.

    PubMed

    Burkhart, Patricia V; Rayens, Mary Kay; Bowman, Roxanne K

    2005-03-01

    Regardless of the medication delivery system, health care providers need to teach accurate medication administration techniques to their patients, educate them about the particular nuances of the prescribed delivery system (eg, proper storage), and reinforce these issues at each health encounter. A single instruction session is not sufficient to maintain appropriate inhaler technique for patients who require continued use. Providing written steps for the administration technique is helpful so that the patient can refer to them later when using the medication. The National Heart, Lung, and Blood Institute's "Practical Guide for the Diagnosis and Management of Asthma" recommends that practitioners follow these steps for effective inhaler technique training when first prescribing an inhaler: 1. Teach patients the steps and give written instruction handouts. 2. Demonstrate how to use the inhaler step by step. 3. Ask patients to demonstrate how to use the inhaler. Let the patient refer to the handout during the first training, then use the handout as a checklist to assess the patient's technique in the future. 4. Provide feedback to patients about what they did right and what they need to improve. Have patients demonstrate their technique again, if necessary. The last two steps (demonstration and providing feedback on what patients did right and what they need to improve) should be performed at every subsequent visit. If the patient makes multiple errors, it is advisable to focus on improving one or two key steps at a time. With improvements in drug delivery come challenges, necessitating that practitioners stay current with new medication administration techniques. Teaching and reinforcing accurate technique at each health care encounter are critical to help ensure medication efficacy for patients with asthma. Since one fifth of the children in the study demonstrated incorrect medication technique even after education, checklists of steps for the correct use of inhalation devices, such as those provided in this article, should be given to patients for home use and used by clinicians to evaluate patient technique at each health encounter.

  12. Validation of Foot Placement Locations from Ankle Data of a Kinect v2 Sensor

    PubMed Central

    Geerse, Daphne; Coolen, Bert; Kolijn, Detmar; Roerdink, Melvyn

    2017-01-01

    The Kinect v2 sensor may be a cheap and easy-to-use sensor for quantifying gait in clinical settings, especially when applied in set-ups integrating multiple Kinect sensors to increase the measurement volume. Reliable estimates of foot placement locations are required to quantify spatial gait parameters. This study aimed to systematically evaluate the effects of distance from the sensor, side, and step length on estimates of foot placement locations based on Kinect's ankle body points. Subjects (n = 12) performed stepping trials at imposed foot placement locations distanced 2 m or 3 m from the Kinect sensor (distance), for left and right foot placement locations (side), and for five imposed step lengths. Body points' time series of the lower extremities were recorded with a Kinect v2 sensor, placed frontoparallelly on the left side, and a gold-standard motion-registration system. Foot placement locations, step lengths, and stepping accuracies were compared between systems using repeated-measures ANOVAs, agreement statistics, and two one-sided t-tests to test equivalence. For the right side at the 2 m distance from the sensor we found significant between-systems differences in foot placement locations and step lengths, and evidence for nonequivalence. This distance-by-side effect was likely caused by differences in body orientation relative to the Kinect sensor. It can be reduced by using Kinect's higher-dimensional depth data to estimate foot placement locations directly from the foot's point cloud and/or by using smaller inter-sensor distances in a multi-Kinect v2 set-up to estimate foot placement locations at greater distances from the sensor. PMID:28994731
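
    A hedged sketch of the two one-sided tests (TOST) procedure used for the equivalence analysis; the data and the 2 cm equivalence margin below are invented for illustration, not values from the study.

        # TOST equivalence check on between-system step-length differences.
        import numpy as np
        from scipy import stats

        kinect = np.array([0.48, 0.52, 0.61, 0.55, 0.58])   # step lengths (m), illustrative
        gold = np.array([0.50, 0.53, 0.60, 0.56, 0.57])     # motion-capture reference
        diff = kinect - gold
        margin = 0.02                                       # assumed equivalence bound (m)

        # H0 for each one-sided test: the mean difference lies outside the margin
        _, p_low = stats.ttest_1samp(diff, -margin, alternative="greater")
        _, p_high = stats.ttest_1samp(diff, margin, alternative="less")
        print("equivalent" if max(p_low, p_high) < 0.05 else "not shown equivalent")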

  13. Validation of Foot Placement Locations from Ankle Data of a Kinect v2 Sensor.

    PubMed

    Geerse, Daphne; Coolen, Bert; Kolijn, Detmar; Roerdink, Melvyn

    2017-10-10

    The Kinect v2 sensor may be a cheap and easy-to-use sensor for quantifying gait in clinical settings, especially when applied in set-ups integrating multiple Kinect sensors to increase the measurement volume. Reliable estimates of foot placement locations are required to quantify spatial gait parameters. This study aimed to systematically evaluate the effects of distance from the sensor, side, and step length on estimates of foot placement locations based on Kinect's ankle body points. Subjects (n = 12) performed stepping trials at imposed foot placement locations distanced 2 m or 3 m from the Kinect sensor (distance), for left and right foot placement locations (side), and for five imposed step lengths. Body points' time series of the lower extremities were recorded with a Kinect v2 sensor, placed frontoparallelly on the left side, and a gold-standard motion-registration system. Foot placement locations, step lengths, and stepping accuracies were compared between systems using repeated-measures ANOVAs, agreement statistics, and two one-sided t-tests to test equivalence. For the right side at the 2 m distance from the sensor we found significant between-systems differences in foot placement locations and step lengths, and evidence for nonequivalence. This distance-by-side effect was likely caused by differences in body orientation relative to the Kinect sensor. It can be reduced by using Kinect's higher-dimensional depth data to estimate foot placement locations directly from the foot's point cloud and/or by using smaller inter-sensor distances in a multi-Kinect v2 set-up to estimate foot placement locations at greater distances from the sensor.

  14. Engineering Specification for Large-aperture UVO Space Telescopes Derived from Science Requirements

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Postman, Marc; Smith, W. Scott

    2013-01-01

    The Advanced Mirror Technology Development (AMTD) project is a three-year effort initiated in FY12 to mature, by at least a half TRL step, six critical technologies required to enable 4 to 8 meter UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high-contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach: we mature technologies required to enable the highest priority science AND result in a high-performance, low-cost, low-risk system. To provide the science community with options, we are pursuing multiple technology paths. We have assembled an outstanding team from academia, industry, and government with extensive expertise in astrophysics and exoplanet characterization and in the design/manufacture of monolithic and segmented space telescopes. A key accomplishment is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high-contrast exoplanet missions, as a function of potential launch vehicles and their mass and volume constraints.

  15. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. The proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region-descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion and the appearance and disappearance of objects are resolved using information from all views. The method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
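
    A minimal sketch of the correspondence step: mapping object centroids from one view into another with a known 3×3 homography H. The matrix below is invented; in practice H would be estimated from matched ground-plane points.

        # Map Nx2 pixel coordinates from view 1 into view 2 via a homography.
        import numpy as np

        H = np.array([[0.9, 0.02, 15.0],
                      [0.01, 1.1, -8.0],
                      [1e-4, 2e-4, 1.0]])           # illustrative view1 -> view2

        def to_view2(points_v1):
            pts = np.hstack([points_v1, np.ones((len(points_v1), 1))])
            mapped = pts @ H.T                       # homogeneous transform
            return mapped[:, :2] / mapped[:, 2:3]    # dehomogenise

        centroids_v1 = np.array([[320.0, 240.0], [100.0, 50.0]])
        print(to_view2(centroids_v1))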

  16. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  17. Targeted quantitative analysis of Streptococcus pyogenes virulence factors by multiple reaction monitoring.

    PubMed

    Lange, Vinzenz; Malmström, Johan A; Didion, John; King, Nichole L; Johansson, Björn P; Schäfer, Juliane; Rameseder, Jonathan; Wong, Chee-Hong; Deutsch, Eric W; Brusniak, Mi-Youn; Bühlmann, Peter; Björck, Lars; Domon, Bruno; Aebersold, Ruedi

    2008-08-01

    In many studies, particularly in the field of systems biology, it is essential that identical protein sets are precisely quantified in multiple samples, such as those representing differentially perturbed cell states. The high degree of reproducibility required for such experiments has not been achieved by classical mass spectrometry-based proteomics methods. In this study we describe the implementation of a targeted quantitative approach by which predetermined protein sets are first identified and subsequently quantified reliably, at high sensitivity, in multiple samples. This approach consists of three steps. First, the proteome is extensively mapped out by multidimensional fractionation and tandem mass spectrometry, and the data generated are assembled in the PeptideAtlas database. Second, based on this proteome map, peptides uniquely identifying the proteins of interest (proteotypic peptides) are selected, and multiple reaction monitoring (MRM) transitions are established and validated by MS2 spectrum acquisition. This process of peptide selection, transition selection, and validation is supported by a suite of software tools, TIQAM (Targeted Identification for Quantitative Analysis by MRM), described in this study. Third, the selected target protein set is quantified in multiple samples by MRM. Applying this approach, we were able to reliably quantify low-abundance virulence factors from cultures of the human pathogen Streptococcus pyogenes exposed to increasing amounts of plasma. The resulting quantitative protein patterns enabled us to clearly define the subset of virulence proteins that is regulated upon plasma exposure.

  18. Systematic framework to evaluate the status of physical activity research for persons with multiple sclerosis.

    PubMed

    Dixon-Ibarra, Alicia; Vanderbom, Kerri; Dugala, Anisia; Driver, Simon

    2014-04-01

    Exploring the current state of health behavior research for individuals with multiple sclerosis is essential to understanding the next steps required to reduce preventable disability. One way to link research to translational health promotion programs is to utilize the Behavioral Epidemiological Framework, which describes a sequence of phases used to categorize health-related behavioral research. This critical audit of the literature examines the current state of physical activity research for persons with multiple sclerosis using the proposed Behavioral Epidemiological Framework. After MEDLINE, PUBMED, PsycINFO, Google Scholar, and several major areas within EBSCOHOST (2000 to present) were searched, retrieved articles were categorized according to the framework phases and coding rules. Of 139 articles, 49% were in phase 1 (establishing links between behavior and health), 18% in phase 2 (developing methods for measuring behavior), 24% in phase 3 (identifying factors influencing behavior and implications for theory), and 9% in phases 4 and 5 (evaluating interventions to change behavior and translating research into practice). The emphasis on phase 1 research indicates the field is in its early stages of development. Providing those with multiple sclerosis with the necessary tools through health promotion programs is needed to reduce secondary conditions and co-morbidities. Reassessment of the field of physical activity and multiple sclerosis in the future could provide insight into whether the field is evolving over time or remaining stagnant. Published by Elsevier Inc.

  19. Molecular study on some antibiotic resistant genes in Salmonella spp. isolates

    NASA Astrophysics Data System (ADS)

    Nabi, Ari Q.

    2017-09-01

    Studying the genes related to antimicrobial resistance in Salmonella spp. is a crucial step toward correct and faster treatment of infections caused by the pathogen. In this work, the integron-mediated antibiotic resistance gene IntI1 (class I integrase) and several plasmid-mediated antibiotic resistance genes (qnr) were screened for among isolated non-typhoid Salmonella strains with known resistance to some important antimicrobial drugs, using SYBR Green real-time PCR. The aim of the study was to correlate the multiple antibiotic and antimicrobial resistance of Salmonella spp. with the presence of the integrase (IntI1) gene and plasmid-mediated quinolone resistance genes. Results revealed the presence of the class I integrase gene in 76% of the isolates with confirmed multiple antibiotic resistance. Moreover, about 32% of the multiple antibiotic resistant serotypes were positive by real-time PCR for the plasmid-mediated qnrA gene, which confers nalidixic acid and ciprofloxacin resistance. No positive results could be obtained from reactions targeting qnrB or qnrS. In light of these results, we conclude that the presence of at least one of the qnr genes and/or the presence of the class I integrase gene was responsible for the multiple antibiotic resistance to nalidixic acid and ciprofloxacin in the studied Salmonella spp.; further studies are required to identify the genes related to the multiple antibiotic resistance of the pathogen.

  20. Stochastic quasi-Newton molecular simulations

    NASA Astrophysics Data System (ADS)

    Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.

    2010-08-01

    We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D. Chau, G. J. A. Sevink, and J. G. E. M. Fraaije, J. Chem. Phys. 128, 244110 (2008); doi:10.1063/1.2943313]. Potential applications of S-QN are quite ambitious, and include structure optimization, analysis of correlations, and automated extraction of cooperative modes. However, this potential can only be fully exploited if the computational and memory requirements of the original algorithm are significantly reduced. In this paper, we consider a factorized mobility matrix B = JJᵀ and focus on the nontrivial fundamentals of an efficient algorithm for updating the noise multiplier J. The new algorithm requires O(n²) multiplications per time step instead of the O(n³) multiplications in the original scheme due to Cholesky decomposition. In a recursive form, the update scheme circumvents matrix storage and enables limited-memory implementation, in the spirit of the well-known limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, allowing for a further reduction of the computational effort to O(n). We analyze in detail the performance of the factorized (FSU) and limited-memory (L-FSU) algorithms in terms of convergence and (multiscale) sampling, for an elementary but relevant system that involves multiple time and length scales. Finally, we use this analysis to formulate conditions for the simulation of the complex high-dimensional potential energy landscapes of interest.
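
    A generic illustration of the cost argument (not the paper's actual update rule): per-step work on the factor J, applying it to a noise vector and refining it with a rank-1 term, stays O(n²), whereas forming B and re-running a Cholesky decomposition is O(n³).

        # Cost comparison sketch; the true S-QN update of J is derived in the paper.
        import numpy as np

        n = 500
        rng = np.random.default_rng(1)
        J = np.eye(n)                                 # factor of the mobility, B = J J^T

        xi = rng.standard_normal(n)
        noise = J @ xi                                # O(n^2): correlated noise draw
        a, b = rng.standard_normal(n), rng.standard_normal(n)
        J += 0.01 * np.outer(a, b)                    # O(n^2): generic rank-1 refinement

        B = J @ J.T                                   # O(n^3): what the scheme avoids
        L = np.linalg.cholesky(B + 1e-9 * np.eye(n))  # jitter for positive definiteness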

  1. Assessing the Effectiveness of First Step to Success: Are Short-Term Results the First Step to Long-Term Behavioral Improvements?

    ERIC Educational Resources Information Center

    Sumi, W. Carl; Woodbridge, Michelle W.; Javitz, Harold S.; Thornton, S. Patrick; Wagner, Mary; Rouspil, Kristen; Yu, Jennifer W.; Seeley, John R.; Walker, Hill M.; Golly, Annemieke; Small, Jason W.; Feil, Edward G.; Severson, Herbert H.

    2013-01-01

    This article reports on the effectiveness of First Step to Success, a secondary-level intervention appropriate for students in early elementary school who experience moderate to severe behavior problems and are at risk for academic failure. The authors demonstrate the intervention's short-term effects on multiple behavioral and academic outcomes…

  2. Hot working behavior of selective laser melted and laser metal deposited Inconel 718

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Sizova, Irina

    2018-05-01

    The production of nickel-based high-temperature components is of great importance for the transport and energy sectors. Forging of high-temperature alloys often requires expensive dies and multiple forming steps, and it yields forged parts whose tolerances require machining to create the final shape, producing a large amount of scrap. Additive manufacturing offers the possibility of printing the desired shapes directly as net-shape components, requiring only a little additional machining effort. Especially for high-temperature alloys, which carry a large amount of energy per unit mass, additive manufacturing could be more energy-efficient than forging if the energy contained in the machining scrap exceeds the energy needed for powder production and laser processing. However, the microstructure and performance of 3D-printed parts will not reach the level of forged material unless further expensive processes such as hot isostatic pressing are used. Using the design freedom and the possibility of locally engineering material, additive manufacturing could be combined with forging operations into novel process chains, offering the possibility to reduce the number of forging steps and to create near-net-shape forgings with desired local properties. Some innovative process chains combining additive manufacturing and forging have been patented recently, but almost no scientific knowledge on the workability of 3D-printed preforms exists. The present study investigates the flow stress and microstructure evolution during hot working of preforms produced by laser powder deposition and selective laser melting (Figure 1) and puts forward a model for the flow stress.

  3. Optimized breeding strategies for multiple trait integration: II. Process efficiency in event pyramiding and trait fixation.

    PubMed

    Peng, Ting; Sun, Xiaochun; Mumm, Rita H

    2014-01-01

    Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in light of the overall goal of MTI: recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results of optimized single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013), which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm and ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps of MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding eight events into the female RP and seven into the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (or seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity, as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one were utilized for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) to eliminate bottlenecking due to the extremely low frequencies of desired genotypes in the population, as quantified in the sketch below. Efficiency indicators such as the total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
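
    A quick calculation of why single-generation selfing bottlenecks, assuming independent loci each heterozygous in the parent (linkage would change the numbers): the chance that one F2 plant is homozygous for all n stacked events is (1/4)^n.

        # Expected screening burden for trait fixation in one selfing generation.
        # Assumes unlinked loci; illustrative only.
        def p_all_homozygous(n_events):
            return 0.25 ** n_events

        for n in (7, 8):                            # male and female RP stacks
            p = p_all_homozygous(n)
            print(f"{n} events: p = {p:.1e}, ~{round(1 / p):,} plants to screen")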

  4. Midbody Targeting of the ESCRT Machinery by a Noncanonical Coiled Coil in CEP55

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyung Ho; Elia, Natalie; Ghirlando, Rodolfo

    2008-11-14

    The ESCRT (endosomal sorting complex required for transport) machinery is required for the scission of membrane necks in processes including the budding of HIV-1 and cytokinesis. An essential step in cytokinesis is recruitment of the ESCRT-I complex and the ESCRT-associated protein ALIX to the midbody (the structure that tethers two daughter cells) by the protein CEP55. Biochemical experiments show that peptides from ALIX and the ESCRT-I subunit TSG101 compete for binding to the ESCRT and ALIX-binding region (EABR) of CEP55. We solved the crystal structure of EABR bound to an ALIX peptide at a resolution of 2.0 angstroms. The structure shows that EABR forms an aberrant dimeric parallel coiled coil. Bulky and charged residues at the interface of the two central heptad repeats create asymmetry and a single binding site for an ALIX or TSG101 peptide. Both ALIX and ESCRT-I are required for cytokinesis, which suggests that multiple CEP55 dimers are required for function.

  5. Altering the selection capabilities of common cloning vectors via restriction enzyme mediated gene disruption

    PubMed Central

    2013-01-01

    Background The cloning of gene sequences forms the basis for many molecular biological studies. One important step in the cloning process is the isolation of bacterial transformants carrying vector DNA. This involves a vector-encoded selectable marker gene which, in most cases, confers resistance to an antibiotic. However, there are a number of circumstances in which a different selectable marker is required or may be preferable. Such situations include restrictions on host strain choice, two-phase cloning experiments, and mutagenesis experiments, issues that result in additional, unnecessary cloning steps in which the DNA needs to be subcloned into a vector with a suitable selectable marker. Results We have used restriction enzyme mediated gene disruption to modify the selectable marker gene of a given vector by cloning a different selectable marker gene into the original marker present in that vector. Cloning a new selectable marker into a pre-existing marker was found to change the selection phenotype conferred by that vector, which we demonstrated using multiple commonly used vectors and multiple resistance markers. The methodology was also successfully applied not only to cloning vectors but also to expression vectors, while keeping the expression characteristics of the vector unaltered. Conclusions Changing the selectable marker of a given vector has a number of advantages and applications. This rapid and efficient method could be used for co-expression of recombinant proteins, optimisation of two-phase cloning procedures, and multiple genetic manipulations within the same host strain without the need to remove a pre-existing selectable marker in a previously genetically modified strain. PMID:23497512

  6. Immune Tolerance in Multiple Sclerosis

    PubMed Central

    Goverman, Joan M.

    2011-01-01

    Summary Multiple sclerosis is believed to be mediated by T cells specific for myelin antigens that circulate harmlessly in the periphery of healthy individuals until they are erroneously activated by an environmental stimulus. Upon activation, the T cells enter the central nervous system and orchestrate an immune response against myelin. To understand the initial steps in the pathogenesis of multiple sclerosis, it is important to identify the mechanisms that maintain T-cell tolerance to myelin antigens and to understand how some myelin-specific T cells escape tolerance and what conditions lead to their activation. Central tolerance strongly shapes the peripheral repertoire of myelin-specific T cells, as most myelin-specific T cells are eliminated by clonal deletion in the thymus. Self-reactive T cells that escape central tolerance are generally capable only of low-avidity interactions with antigen-presenting cells. Despite the low avidity of these interactions, peripheral tolerance mechanisms are required to prevent spontaneous autoimmunity. Multiple peripheral tolerance mechanisms for myelin-specific T cells have been identified, the most important of which appears to be regulatory T cells. While most studies have focused on CD4+ myelin-specific T cells, interesting differences in tolerance mechanisms, and in the conditions that abrogate these mechanisms, have recently been described for CD8+ myelin-specific T cells. PMID:21488900

  7. A web-based decision support tool for prognosis simulation in multiple sclerosis.

    PubMed

    Veloso, Mário

    2014-09-01

    A multiplicity of natural history studies of multiple sclerosis provides valuable knowledge of disease progression, but individualized prognosis remains elusive. A few decision support tools that assist the clinician in this task have emerged but have not received proper attention from clinicians and patients. The objective of the current work is to implement a web-based tool, conveying decision-relevant prognostic scientific evidence, that will help clinicians discuss prognosis with individual patients. Data were extracted from a set of reference studies, especially those dealing with the natural history of multiple sclerosis. The web-based decision support tool for individualized prognosis simulation was implemented with NetLogo, a programming environment suited to the development of complex adaptive systems. Its prototype has been launched online; it enables clinicians to predict both the likelihood of CIS-to-CDMS conversion and the long-term prognosis of disability level and SPMS conversion, as well as to assess and monitor the effects of treatment. More robust decision support tools that convey scientific evidence and satisfy the needs of clinical practice by helping clinicians discuss prognosis expectations with individual patients are required. The web-based simulation model introduced herein is a step toward this purpose. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Design and development of an injection-molded demultiplexer for optical communication systems in the visible range.

    PubMed

    Höll, S; Haupt, M; Fischer, U H P

    2013-06-20

    Optical simulation software based on the ray-tracing method offers easy and fast results in imaging optics. The method can also be applied to other fields of light propagation. For short-distance communications, polymer optical fibers (POFs) are gradually gaining importance. This kind of fiber offers a large core diameter; the step-index POF, for example, features a core diameter of 980 μm. Consequently, POFs carry a large number of modes (>3 million) in the visible range, and ray tracing can be used to simulate the propagation of light. This simulation method is applicable not only to the fiber itself but also to the key components of a complete POF network, e.g., couplers or other key elements of the transmission line. In this paper a demultiplexer designed and developed by means of ray tracing is presented. Compared with classical optical design, the requirements for an optimal design differ, particularly with regard to minimizing the insertion loss (IL). The basis of the presented key element is a WDM device using a Rowland spectrometer setup. In this approach the input fiber carries multiple wavelengths, which are divided into multiple output fibers that each transmit only one wavelength. To adapt the basic setup to POF, the guidance of light in this element has to be changed fundamentally. Here, a monolithic approach is presented, with a blazed grating and an aspheric mirror to minimize most of the aberrations. In the simulations the POF is represented by an area light source, while the grating is analyzed for different orders and the highest possible efficiency. In general, the element should be designed so that it can be produced with a mass-production technology like injection molding in order to offer a reasonable price. However, designing the elements for injection molding leads to some inherent challenges: the microstructure of an optical grating and the thick-walled 3D molded parts both place high demands on the injection molding process, which also requires complex machining of the molding tool. Therefore, different experiments were done to optimize the process parameters, find the best molding material, and find a suitable machining method for the molding tool. The paper describes the development of the demultiplexer by means of ray-tracing simulations step by step, along with the process steps and the realized solutions for the injection molding.
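
    For orientation, the dispersive heart of such a device obeys the grating equation m·λ = d·(sin θi + sin θm); the short sketch below computes diffraction angles for three visible channels under assumed parameters (groove spacing, incidence angle, and order are illustrative, not the paper's design values).

        # Diffraction angles from the grating equation for visible POF channels.
        import numpy as np

        d = 1e-6                    # groove spacing (m), i.e. 1000 lines/mm, assumed
        theta_i = np.radians(15)    # incidence angle, assumed
        m = 1                       # diffraction order, assumed

        for wavelength_nm in (450, 520, 650):
            s = m * wavelength_nm * 1e-9 / d - np.sin(theta_i)
            print(f"{wavelength_nm} nm -> {np.degrees(np.arcsin(s)):.1f} deg")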

  9. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    PubMed

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction and (b) optimization of the PCR against the various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture, or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive, and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus, and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus species, other fungal species, and human DNA, and showed no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive, and specific assay option and requires clinical validation at multiple centers.

  10. Class-Wide Access to a Commercial Step 1 Question Bank During Preclinical Organ-Based Modules: A Pilot Project.

    PubMed

    Baños, James H; Pepin, Mark E; Van Wagoner, Nicholas

    2018-03-01

    The authors examined the usefulness of a commercially available Step 1 question bank as a formative academic support tool throughout organ-based modules in an integrated preclinical medical curriculum. The authors also determined the extent to which the correlation between question bank utilization and academic metrics varied with Medical College Admission Test (MCAT) scores. In 2015, a cohort of 185 first-year medical students at the University of Alabama School of Medicine was provided with 18 months of full access to a commercially available Step 1 question bank of over 2,100 items throughout organ-based modules, although there were no requirements for use. Data on student use of the question bank were collected via an online administrative portal. Relationships between question bank utilization and academic outcomes, including exams, module grades, and United States Medical Licensing Examination (USMLE) Step 1, were determined using multiple linear regression. MCAT scores and the number of items attempted in the question bank significantly predicted all academic measures, with question bank utilization the stronger predictor. The association between question bank utilization and academic outcome was stronger for individuals with lower MCAT scores. The findings elucidate a novel academic support mechanism that, for some programs, may help bridge the gap between holistic and mission-based admissions practices and a residency match process that places a premium on USMLE exam scores. Distributed formative use of USMLE Step 1 practice questions may be of value as an academic support tool that benefits all students, particularly those entering with lower MCAT scores.
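
    A hedged sketch of the analysis pattern described (simulated data; the paper's covariates, coefficients, and outcomes are not reproduced): regress an exam outcome on MCAT score, question-bank utilization, and their interaction, the latter probing whether utilization matters more at lower MCAT scores.

        # Multiple linear regression with an MCAT x utilization interaction.
        # All numbers are simulated; only the cohort size (185) is from the abstract.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 185
        mcat = rng.normal(510, 5, n)
        qbank = rng.integers(0, 2100, n).astype(float)     # items attempted
        step1 = (230 + 0.8 * (mcat - 510) + 0.01 * qbank
                 - 0.0005 * (mcat - 510) * qbank + rng.normal(0, 8, n))

        # Design matrix: intercept, centered MCAT, utilization, interaction
        X = np.column_stack([np.ones(n), mcat - 510, qbank, (mcat - 510) * qbank])
        beta, *_ = np.linalg.lstsq(X, step1, rcond=None)
        print(dict(zip(["intercept", "mcat", "qbank", "interaction"], beta.round(4))))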

  11. Step-rate cut-points for physical activity intensity in patients with multiple sclerosis: The effect of disability status.

    PubMed

    Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W

    2016-02-15

    Evaluating the relationship between step-rate and rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. To examine whether the VO2 to step-rate relationship during over-ground walking differs across disability levels among patients with MS, and to develop step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at comfortable speed, one at 0.22 m·s⁻¹ slower, and one at 0.22 m·s⁻¹ faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps with a hand tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step-rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for the development of step-rate thresholds that vary only by disability status. VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.
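
    To show how cut-points fall out of such a model, the sketch below inverts a linear VO2 prediction at the moderate-intensity threshold (3 METs ≈ 10.5 ml·kg⁻¹·min⁻¹); all coefficients are hypothetical placeholders, not the published model.

        # Step rate at which predicted VO2 crosses the moderate-intensity threshold.
        # Assumed model: VO2 = b0 + b_steps * step_rate + disability_offset.
        def moderate_cutpoint(b0, b_steps, disability_offset, vo2_threshold=10.5):
            return (vo2_threshold - b0 - disability_offset) / b_steps

        for level, offset in [("mild", 0.0), ("moderate", 1.2), ("severe", 2.5)]:
            steps = moderate_cutpoint(b0=2.5, b_steps=0.08, disability_offset=offset)
            print(f"{level}: ~{steps:.0f} steps/min")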

  12. Multiple active site residues are important for photochemical efficiency in the light-activated enzyme protochlorophyllide oxidoreductase (POR).

    PubMed

    Menon, Binuraj R K; Hardman, Samantha J O; Scrutton, Nigel S; Heyes, Derren J

    2016-08-01

    Protochlorophyllide oxidoreductase (POR) catalyzes the light-driven reduction of protochlorophyllide (Pchlide), an essential, regulatory step in chlorophyll biosynthesis. The unique requirement of the enzyme for light has provided the opportunity to investigate how light energy can be harnessed to power biological catalysis and enzyme dynamics. Excited state interactions between the Pchlide molecule and the protein are known to drive the subsequent reaction chemistry. However, the structural features of POR and active site residues that are important for photochemistry and catalysis are currently unknown, because there is no crystal structure for POR. Here, we have used static and time-resolved spectroscopic measurements of a number of active site variants to study the role of a number of residues, which are located in the proposed NADPH/Pchlide binding site based on previous homology models, in the reaction mechanism of POR. Our findings, which are interpreted in the context of a new improved structural model, have identified several residues that are predicted to interact with the coenzyme or substrate. Several of the POR variants have a profound effect on the photochemistry, suggesting that multiple residues are important in stabilizing the excited state required for catalysis. Our work offers insight into how the POR active site geometry is finely tuned by multiple active site residues to support enzyme-mediated photochemistry and reduction of Pchlide, both of which are crucial to the existence of life on Earth. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Multifunctional microvalves control by optical illumination on nanoheaters and its application in centrifugal microfluidic devices.

    PubMed

    Park, Jong-Myeon; Cho, Yoon-Kyoung; Lee, Beom-Seok; Lee, Jeong-Gun; Ko, Christopher

    2007-05-01

    Valving is critical in microfluidic systems. Among the many innovative microvalves used in lab-on-a-chip applications, phase-change microvalves using paraffin wax are particularly attractive for disposable biochip applications because they are simple to implement, cost-effective, and biocompatible. However, previously reported paraffin-based valves require embedded microheaters, which made multi-step operation of many microvalves difficult, and their operation times were relatively long (2-10 s). In this paper, we report a unique phase-change microvalve for rapid and versatile operation of multiple microvalves using a single laser diode. The valve is made of a nanocomposite material in which 10 nm iron oxide nanoparticles are dispersed in paraffin wax and act as nanoheaters when excited by laser irradiation. Laser light of relatively weak intensity was able to melt the paraffin wax with the embedded iron oxide nanoparticles, whereas even a very intense laser beam does not melt the wax alone. The microvalves are leak-free up to 403.0 +/- 7.6 kPa, and the response times to operate both normally closed and normally opened microvalves are less than 0.5 s. Furthermore, sequential operation of multiple microvalves on a centrifugal microfluidic device using a single laser diode was demonstrated. The optical control of multiple microvalves is fast, robust, simple to operate, and requires minimal chip space, and is thus well suited for fully integrated lab-on-a-chip applications.

  14. Measurements of gluconeogenesis and glycogenolysis: A methodological review

    USDA-ARS?s Scientific Manuscript database

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to deter...

  15. The multiple resource inventory decision-making process

    Treesearch

    Victor A. Rudis

    1993-01-01

    A model of the multiple resource inventory decision-making process is presented that identifies steps in conducting inventories, describes the infrastructure, and points out knowledge gaps that are common to many interdisciplinary studies. Successful efforts to date suggest the need to bridge the gaps by sharing elements, maintain dialogue among stakeholders in multiple...

  16. Interactome analysis of the lymphocytic choriomeningitis virus nucleoprotein in infected cells reveals ATPase Na+/K+ transporting subunit Alpha 1 and prohibitin as host-cell factors involved in the life cycle of mammarenaviruses

    PubMed Central

    Iwasaki, Masaharu; Caì, Yíngyún; de la Torre, Juan C.

    2018-01-01

    Several mammalian arenaviruses (mammarenaviruses) cause hemorrhagic fevers in humans and pose serious public health concerns in their endemic regions. Additionally, mounting evidence indicates that the worldwide-distributed, prototypic mammarenavirus, lymphocytic choriomeningitis virus (LCMV), is a neglected human pathogen of clinical significance. Concerns about human-pathogenic mammarenaviruses are exacerbated by the lack of licensed vaccines, and current anti-mammarenavirus therapy is limited to off-label use of ribavirin that is only partially effective. Detailed understanding of virus/host-cell interactions may facilitate the development of novel anti-mammarenavirus strategies by targeting components of the host-cell machinery that are required for efficient virus multiplication. Here we document the generation of a recombinant LCMV encoding a nucleoprotein (NP) containing an affinity tag (rLCMV/Strep-NP) and its use to capture the NP-interactome in infected cells. Our proteomic approach combined with genetics and pharmacological validation assays identified ATPase Na+/K+ transporting subunit alpha 1 (ATP1A1) and prohibitin (PHB) as pro-viral factors. Cell-based assays revealed that ATP1A1 and PHB are involved in different steps of the virus life cycle. Accordingly, we observed a synergistic inhibitory effect on LCMV multiplication with a combination of ATP1A1 and PHB inhibitors. We show that ATP1A1 inhibitors suppress multiplication of Lassa virus and Candid#1, a live-attenuated vaccine strain of Junín virus, suggesting that the requirement of ATP1A1 in virus multiplication is conserved among genetically distantly related mammarenaviruses. Our findings suggest that clinically approved inhibitors of ATP1A1, like digoxin, could be repurposed to treat infections by mammarenaviruses pathogenic for humans. PMID:29462184

  17. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. Computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system, with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework is presented, comprising a high-level recloser allocation scheme and a low-level recloser placement scheme. The impacts of recloser placement on reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete-event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
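
    A generic sketch of the bookkeeping a multiple-model diagnoser performs: each candidate mode's filter produces a residual, and Bayes' rule converts residuals into mode probabilities. The mode set, Gaussian likelihood, and numbers are assumptions for illustration, not the dissertation's models.

        # Bayesian mode-probability update from per-model innovation residuals.
        import numpy as np

        modes = ["nominal", "fault_A", "fault_B", "fault_C"]   # assumed mode set
        prob = np.full(len(modes), 1 / len(modes))             # uniform prior

        def update(prob, residuals, sigma=1.0):
            likelihood = np.exp(-0.5 * (np.asarray(residuals) / sigma) ** 2)
            post = prob * likelihood
            return post / post.sum()

        # A small residual means that model explains the measurements well.
        prob = update(prob, residuals=[2.1, 0.2, 1.8, 2.4])
        print(dict(zip(modes, prob.round(3))))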

  18. 4 Gbps Scalable Low-Voltage Signaling (SLVS) transceiver for pixel radiation detectors

    NASA Astrophysics Data System (ADS)

    Kadlubowski, Lukasz A.; Kmon, Piotr

    2017-08-01

    We report on the design of a 4 Gbps Scalable Low-Voltage Signaling (SLVS) transceiver in 40 nm CMOS technology for application-specific integrated circuits (ASICs) dedicated to pixel radiation detectors. Serial data are transmitted with a ±200 mV differential swing around a 200 mV nominal common-mode level. Minimizing common-mode interference is crucial in such a design due to EMC requirements. For multi-gigabit-per-second speeds, the influence of the power supply path becomes one of the most challenging design issues, and accurate modeling of the supply pads at each step of the design is necessary. Our analysis shows that the use of multiple bond wires, as well as separate power supply pads for the bulk-terminal connections of the transistors, is essential to ensure proper operation of the transceiver. The design is the result of various trade-offs between speed, required operating conditions, common-mode interference, and power and area consumption.

  19. Confessions of a robot lobotomist

    NASA Technical Reports Server (NTRS)

    Gottshall, R. Marc

    1994-01-01

    Since their inception, numerically controlled (NC) machining methods have been used throughout the aerospace industry to mill, drill, and turn complex shapes by sequentially stepping through motion programs. However, the recent demand for more precision, faster feeds, exotic sensors, and branching execution has existing computer numerical control (CNC) and distributed numerical control (DNC) systems running at maximum controller capacity. Typical disadvantages of current CNCs include fixed memory capacities, limited communication ports, and the use of multiple control languages. The need to tailor CNCs to meet specific applications, whether expanded memory, additional communications, or integrated vision, often requires replacing the original controller supplied with the commercial machine tool with a more powerful and capable system. This paper briefly describes the process and equipment requirements for new controllers and their evolutionary implementation in an aerospace environment. The process of controller retrofit with currently available machines is examined, along with several case studies and their computational and architectural implications.

  20. Developing a tool to support diagnostic delivery of dementia.

    PubMed

    Bennett, Claire E; De Boos, Danielle; Moghaddam, Nima G

    2018-01-01

    It is increasingly recognised that there are challenges affecting the current delivery of dementia diagnoses. Steps are required to address this. Current good practice guidelines provide insufficient direction and interventions from other healthcare settings do not appear to fully translate to dementia care settings. This project has taken a sequential two-phase design to developing a tool specific to dementia diagnostic delivery. Interviews with 14 participants explored good diagnostic delivery. Thematic analysis produced key themes (overcoming barriers, navigation of multiple journeys and completing overt and covert tasks) that were used to inform the design of a tool for use by clinicians, patients and companions. The tool was evaluated for acceptability in focused group discussions with 13 participants, which indicated a desire to use the tool and that it could encourage good practice. Adaptations were highlighted and incorporated to improve acceptability. Future research is now required to further evaluate the tool.

  1. What's Measured Is Not Necessarily What Matters: A Cautionary Story from Public Health

    PubMed Central

    Schwartz, Robert

    2016-01-01

    A systematic review of the introduction and use of outcome-based performance management systems for public health organizations found differences between their use as a management system (which requires rigorous definition and measurement to allow comparison across organizational units) versus for improvement (which may require more flexibility). What is included in performance measurement/management systems is influenced by ease of measurement, data quality, ability of organization to control outcomes, ability to measure success in terms of doing things (rather than preventing things) and what is already happening. To the extent that most providers wish to do a good job, the availability of good data to enable benchmarking and improvement is an important step forward. However, to the extent that the health of a population is dependent on multiple factors, many beyond the mandate of the health system, too extensive a reliance on performance measurement may risk unintended consequences of marginalizing critical activities. PMID:28032824

  2. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  3. In vitro culture of lavenders (Lavandula spp.) and the production of secondary metabolites.

    PubMed

    Gonçalves, Sandra; Romano, Anabela

    2013-01-01

    Lavenders (Lavandula spp., Lamiaceae) are aromatic ornamental plants that are used widely in the food, perfume and pharmaceutical industries. The large-scale production of lavenders requires efficient in vitro propagation techniques to avoid the overexploitation of natural populations and to allow the application of biotechnology-based approaches for plant improvement and the production of valuable secondary metabolites. In this review we discuss micropropagation methods that have been developed in several lavender species, mainly based on meristem proliferation and organogenesis. Specific requirements during the stages of micropropagation (establishment, shoot multiplication, root induction and acclimatization) and requisites for plant regeneration through organogenesis, an important step for the implementation of plant improvement programs, are also reviewed. We also discuss different methods for the in vitro production of valuable secondary metabolites, focusing on the prospects for highly scalable cultures to meet the market demand for lavender-derived products. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. A yeast mutant defective at an early stage in import of secretory protein precursors into the endoplasmic reticulum

    PubMed Central

    1987-01-01

    We have devised a genetic selection for mutant yeast cells that fail to translocate secretory protein precursors into the lumen of the endoplasmic reticulum (ER). Mutant cells are selected by a procedure that requires a signal peptide-containing cytoplasmic enzyme chimera to remain in contact with the cytosol. This approach has uncovered a new secretory mutant, sec61, that is thermosensitive for growth and that accumulates multiple secretory and vacuolar precursor proteins that have not acquired any detectable posttranslational modifications associated with translocation into the ER. Preproteins that accumulate at the sec61 block sediment with the particulate fraction, but are exposed to the cytosol as judged by sensitivity to proteinase K. Thus, the sec61 mutation defines a gene that is required for an early cytoplasmic or ER membrane-associated step in protein translocation. PMID:3305520

  5. Fumed silica nanoparticle mediated biomimicry for optimal cell-material interactions for artificial organ development.

    PubMed

    de Mel, Achala; Ramesh, Bala; Scurr, David J; Alexander, Morgan R; Hamilton, George; Birchall, Martin; Seifalian, Alexander M

    2014-03-01

    Replacement of irreversibly damaged organs due to chronic disease with suitable tissue-engineered implants is now a familiar area of interest to clinicians and multidisciplinary scientists. Ideal tissue engineering approaches require scaffolds to be tailor-made to mimic physiological environments of interest, with specific surface topographical and biological properties for optimal cell-material interactions. This study demonstrates a single-step procedure for inducing biomimicry in a novel nanocomposite base material scaffold, to re-create the extracellular matrix, which is required for stem cell integration and differentiation to mature cells. This fumed silica nanoparticle-mediated procedure of scaffold functionalization can potentially be adapted with multiple bioactive molecules to induce cellular biomimicry in the development of human organs. The proposed nanocomposite materials are already in patients in a number of implants, including the world's first synthetic trachea, tear ducts and a vascular bypass graft. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
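
    As a minimal illustration of the fitting idea (not the paper's actual parameterization): for an ideal sine drive, first-order wavelength is nearly linear in motor step count, so a low-order polynomial fit over several reference lamp lines yields a counts-to-wavelength map; all line and count values below are hypothetical.

    import numpy as np

    # Hypothetical reference lamp lines (Angstroms) and the motor step
    # counts at which each line was observed during a calibration scan.
    ref_wavelengths = np.array([4046.56, 4358.33, 5460.74, 5790.66])
    motor_counts = np.array([12040.0, 15162.0, 26201.0, 29503.0])

    # With a sine drive, sin(grating angle) -- hence first-order wavelength --
    # is nearly linear in screw position; a quadratic absorbs small residual
    # mechanical errors without recalibrating after each grating movement.
    coeffs = np.polyfit(motor_counts, ref_wavelengths, deg=2)

    def counts_to_wavelength(n):
        """Predicted wavelength (Angstroms) at motor count n."""
        return np.polyval(coeffs, n)

    residuals = ref_wavelengths - counts_to_wavelength(motor_counts)
    print("rms calibration residual: %.3f Angstrom" % np.sqrt(np.mean(residuals ** 2)))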

  7. Fair Play Game: A Group Contingency Strategy to Increase Students' Active Behaviours in Physical Education

    ERIC Educational Resources Information Center

    Vidoni, Carla; Lee, Chang-Hung; Azevedo, L. B.

    2014-01-01

    A dependent group contingency strategy called Fair Play Game was applied to promote increase in number of steps during physical education classes for sixth-grade students. Results from a multiple baseline design across three classes showed that the mean number of steps for baseline vs. intervention were: Class 1: 43 vs. 64 steps/minute; Class 2:…

  8. Reliability and Concurrent Validity of the Narrow Path Walking Test in Persons With Multiple Sclerosis.

    PubMed

    Rosenblum, Uri; Melzer, Itshak

    2017-01-01

    About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC(2,1)) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from the standard error of measurement (SEM) and the smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
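
    For readers unfamiliar with the absolute-reliability indices above, a brief sketch of how SEM and SRD follow from an ICC; this is a generic illustration (the ICC itself would normally come from a two-way random-effects ANOVA), and the scores and ICC value below are hypothetical.

    import numpy as np

    def sem_srd(scores_day1, scores_day2, icc):
        """Absolute-reliability indices for a test-retest study:
        SEM = SD * sqrt(1 - ICC); SRD = 1.96 * sqrt(2) * SEM (95% level)."""
        scores = np.concatenate([scores_day1, scores_day2])
        sem = np.std(scores, ddof=1) * np.sqrt(1.0 - icc)
        srd = 1.96 * np.sqrt(2.0) * sem
        sem_pct = 100.0 * sem / np.mean(scores)
        return sem, srd, sem_pct

    # Hypothetical Number of Step Errors for 5 participants on two occasions.
    day1 = np.array([4.0, 7.0, 2.0, 9.0, 5.0])
    day2 = np.array([5.0, 6.0, 3.0, 8.0, 5.0])
    print("SEM, SRD, SEM%%: %.2f, %.2f, %.1f%%" % sem_srd(day1, day2, icc=0.94))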

  9. Metabolic pathways as possible therapeutic targets for progressive multiple sclerosis.

    PubMed

    Heidker, Rebecca M; Emerson, Mitchell R; LeVine, Steven M

    2017-08-01

    Unlike relapsing remitting multiple sclerosis, there are very few therapeutic options for patients with progressive forms of multiple sclerosis. While immune mechanisms are key participants in the pathogenesis of relapsing remitting multiple sclerosis, the mechanisms underlying the development of progressive multiple sclerosis are less well understood. Putative mechanisms behind progressive multiple sclerosis have been put forth: insufficient energy production via mitochondrial dysfunction, activated microglia, iron accumulation, oxidative stress, activated astrocytes, Wallerian degeneration, apoptosis, etc. Furthermore, repair processes such as remyelination are incomplete. Experimental therapies that strive to improve metabolism within neurons and glia, e.g., oligodendrocytes, could act to counter inadequate energy supplies and/or support remyelination. Most experimental approaches have been examined as standalone interventions; however, it is apparent that the biochemical steps being targeted are part of larger pathways, which are further intertwined with other metabolic pathways. Thus, the potential benefits of a tested intervention, or of an established therapy, e.g., ocrelizumab, could be undermined by constraints on upstream and/or downstream steps. If correct, then this argues for a more comprehensive, multifaceted approach to therapy. Here we review experimental approaches to support neuronal and glial metabolism, and/or promote remyelination, which may have potential to lessen or delay progressive multiple sclerosis.

  10. Distinct Amino Acid Compositional Requirements for Formation and Maintenance of the [PSI+] Prion in Yeast

    PubMed Central

    MacLea, Kyle S.; Paul, Kacy R.; Ben-Musa, Zobaida; Waechter, Aubrey; Shattuck, Jenifer E.; Gruca, Margaret

    2014-01-01

    Multiple yeast prions have been identified that result from the structural conversion of proteins into a self-propagating amyloid form. Amyloid-based prion activity in yeast requires a series of discrete steps. First, the prion protein must form an amyloid nucleus that can recruit and structurally convert additional soluble proteins. Subsequently, maintenance of the prion during cell division requires fragmentation of these aggregates to create new heritable propagons. For the Saccharomyces cerevisiae prion protein Sup35, these different activities are encoded by different regions of the Sup35 prion domain. An N-terminal glutamine/asparagine-rich nucleation domain is required for nucleation and fiber growth, while an adjacent oligopeptide repeat domain is largely dispensable for prion nucleation and fiber growth but is required for chaperone-dependent prion maintenance. Although prion activity of glutamine/asparagine-rich proteins is predominantly determined by amino acid composition, the nucleation and oligopeptide repeat domains of Sup35 have distinct compositional requirements. Here, we quantitatively define these compositional requirements in vivo. We show that aromatic residues strongly promote both prion formation and chaperone-dependent prion maintenance. In contrast, nonaromatic hydrophobic residues strongly promote prion formation but inhibit prion propagation. These results provide insight into why some aggregation-prone proteins are unable to propagate as prions. PMID:25547291

  11. Structural-change localization and monitoring through a perturbation-based inverse problem.

    PubMed

    Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa

    2014-11-01

    Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis, for detection, localization, and quantification of changes in structure. Classical methods combine both variations in frequencies and mode shapes, which require accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) that are combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam with the change considered as a local perturbation of the Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique that is applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.
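
    The random decrement technique cited above averages segments of the response that follow a common trigger condition, so that the random forcing cancels and an estimate of the free-decay signature remains; a minimal sketch on synthetic data, with the trigger level, segment length and 12 Hz toy mode all chosen for illustration.

    import numpy as np

    def random_decrement(x, trigger, seg_len):
        """Average all segments of x starting at an upward crossing of the
        trigger level; the random excitation averages out, leaving an
        estimate of the structure's free-decay signature."""
        starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
        starts = starts[starts + seg_len <= x.size]
        if starts.size == 0:
            raise ValueError("no trigger crossings found")
        return np.stack([x[i:i + seg_len] for i in starts]).mean(axis=0)

    fs = 500.0                                    # sampling rate, Hz
    t = np.arange(0, 60.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 12.0 * t) + 0.8 * np.random.randn(t.size)  # 12 Hz mode + noise

    sig = random_decrement(x, trigger=x.std(), seg_len=256)
    spectrum = np.abs(np.fft.rfft(sig))
    spectrum[0] = 0.0                             # ignore the DC bin
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    print("dominant frequency ~ %.1f Hz" % freqs[spectrum.argmax()])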

  12. Multi-Mission Automated Task Invocation Subsystem

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.

    2009-01-01

    Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,

  13. Direct Functionalization of an Acid-Terminated Nanodiamond with Azide: Enabling Access to 4-Substituted-1,2,3-Triazole-Functionalized Particles

    DOE PAGES

    Kennedy, Zachary C.; Barrett, Christopher A.; Warner, Marvin G.

    2017-03-01

    Azides on the periphery of nanodiamond materials (ND) are of great utility because they have been shown to undergo Cu-catalyzed and Cu-free cycloaddition reactions with structurally diverse alkynes, affording particles tailored for applications in biology and materials science. However, current methods employed to access ND featuring azide groups typically require either harsh pretreatment procedures or multiple synthesis steps and use surface linking groups that may be susceptible to undesirable cleavage. Here we demonstrate an alternative single-step approach to producing linker-free, azide-functionalized ND. Our method was applied to low-cost, detonation-derived ND powders where surface carbonyl groups undergo silver-mediated decarboxylation and radical substitution with azide. ND with directly grafted azide groups were then treated with a variety of aliphatic, aromatic, and fluorescent alkynes to afford 1-(ND)-4-substituted-1,2,3-triazole materials under standard copper-catalyzed cycloaddition conditions. Surface modification steps were verified by characteristic infrared absorptions and elemental analyses. High loadings of triazole surface groups (up to 0.85 mmol g⁻¹) were obtained as determined from thermogravimetric analysis. The azidation procedure disclosed is envisioned to become a valuable initial transformation in numerous future applications of ND.

  14. MAAMD: a workflow to standardize meta-analyses and comparison of affymetrix microarray data

    PubMed Central

    2014-01-01

    Background Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix CEL files) can be time consuming, complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. Results We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data analysis (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable csv files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and drosophila. The entire set of analyses for 4 datasets (34 total microarrays) finished in ~1 hour. Conclusions MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons. PMID:24621103

  15. Proteomic analysis of formalin-fixed paraffin embedded tissue by MALDI imaging mass spectrometry

    PubMed Central

    Casadonte, Rita; Caprioli, Richard M

    2012-01-01

    Archived formalin-fixed paraffin-embedded (FFPE) tissue collections represent a valuable informational resource for proteomic studies. Multiple FFPE core biopsies can be assembled in a single block to form tissue microarrays (TMAs). We describe a protocol for analyzing protein in FFPE-TMAs using matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS). The workflow incorporates an antigen retrieval step following deparaffinization, in situ trypsin digestion, matrix application and then mass spectrometry signal acquisition. The direct analysis of FFPE-TMA tissue using IMS allows direct analysis of multiple tissue samples in a single experiment without extraction and purification of proteins. The advantages of high speed and throughput, easy sample handling and excellent reproducibility make this technology a favorable approach for the proteomic analysis of clinical research cohorts with large sample numbers. For example, TMA analysis of 300 FFPE cores would typically require 6 h of total time through data acquisition, not including data analysis. PMID:22011652

  16. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks

    DOE PAGES

    Shen, Yiwen; Hattink, Maarten; Samadi, Payman; ...

    2018-04-13

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. Here, we present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total control plane latency of 344 microseconds for data-center and high performance computing platforms.

  18. Assembling programmable FRET-based photonic networks using designer DNA scaffolds

    PubMed Central

    Buckhout-White, Susan; Spillmann, Christopher M; Algar, W. Russ; Khachatrian, Ani; Melinger, Joseph S.; Goldman, Ellen R.; Ancona, Mario G.; Medintz, Igor L.

    2014-01-01

    DNA demonstrates a remarkable capacity for creating designer nanostructures and devices. A growing number of these structures utilize Förster resonance energy transfer (FRET) as part of the device's functionality, readout or characterization, and, as device sophistication increases, so do the concomitant FRET requirements. Here we create multi-dye FRET cascades and assess how well DNA can marshal organic dyes into nanoantennae that focus excitonic energy. We evaluate 36 increasingly complex designs including linear, bifurcated, Holliday junction, 8-arm star and dendrimers involving up to five different dyes engaging in four consecutive FRET steps, while systematically varying fluorophore spacing by Förster distance (R0). Decreasing R0 while augmenting cross-sectional collection area with multiple donors significantly increases terminal exciton delivery efficiency within dendrimers compared with the first linear constructs. Förster modelling confirms that best results are obtained when there are multiple interacting FRET pathways rather than independent channels by which excitons travel from initial donor(s) to final acceptor. PMID:25504073
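
    As a back-of-envelope companion to the R0 discussion, the single-step FRET efficiency is E = 1/(1 + (r/R0)^6); below is a short sketch estimating end-to-end delivery for a four-step cascade under the simplifying assumption of independent sequential hops (the paper shows interacting pathways do better), with hypothetical spacings.

    def fret_efficiency(r, r0):
        """Single donor-acceptor FRET efficiency at separation r (units of r0)."""
        return 1.0 / (1.0 + (r / r0) ** 6)

    # Hypothetical spacings for a four-step cascade, in units of R0. If each
    # hop is treated as an independent sequential transfer (a simplification),
    # end-to-end delivery is the product of the per-step efficiencies.
    spacings = [0.7, 0.8, 1.0, 1.2]
    end_to_end = 1.0
    for s in spacings:
        end_to_end *= fret_efficiency(s, 1.0)
    print("end-to-end delivery efficiency: %.3f" % end_to_end)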

  19. The Cryptococcus neoformans Capsule: a Sword and a Shield

    PubMed Central

    O'Meara, Teresa R.

    2012-01-01

    Summary: The human fungal pathogen Cryptococcus neoformans is characterized by its ability to induce a distinct polysaccharide capsule in response to a number of host-specific environmental stimuli. The induction of capsule is a complex biological process encompassing regulation at multiple steps, including the biosynthesis, transport, and maintenance of the polysaccharide at the cell surface. By precisely regulating the composition of its cell surface and secreted polysaccharides, C. neoformans has developed intricate ways to establish chronic infection and dormancy in the human host. The plasticity of the capsule structure in response to various host conditions also underscores the complex relationship between host and parasite. Much of this precise regulation of capsule is achieved through the transcriptional responses of multiple conserved signaling pathways that have been coopted to regulate this C. neoformans-specific virulence-associated phenotype. This review focuses on specific host stimuli that trigger the activation of the signal transduction cascades and on the downstream transcriptional responses that are required for robust encapsulation around the cell. PMID:22763631

  20. Trends in computer applications in science assessment

    NASA Astrophysics Data System (ADS)

    Kumar, David D.; Helgeson, Stanley L.

    1995-03-01

    Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than selecting a response from a multiple-choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.

  1. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    PubMed

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will soon be supported by MS Excel 2016, would eliminate some limitations of using statistical formulas within MS Excel.
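
    To make the comparison concrete, the survival-analysis computation that takes multiple manual transformation steps in a spreadsheet is a few lines in a scripting language; a minimal Kaplan-Meier sketch with hypothetical follow-up data (illustrative only, not taken from the article).

    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier estimate. times: follow-up per subject;
        events: 1 for an observed event, 0 for censoring.
        Returns event times and the survival probability after each."""
        order = np.argsort(times, kind="stable")
        times = np.asarray(times, dtype=float)[order]
        events = np.asarray(events)[order]
        n = times.size
        s, event_times, survival = 1.0, [], []
        for i, (t, d) in enumerate(zip(times, events)):
            at_risk = n - i                  # subjects still under observation
            if d == 1:
                s *= (at_risk - 1) / at_risk
                event_times.append(t)
                survival.append(round(s, 4))
        return event_times, survival

    # Five subjects: event at 5, censored at 8, two events at 12, censored at 15.
    print(kaplan_meier([5, 8, 12, 12, 15], [1, 0, 1, 1, 0]))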

  2. Web-based segmentation and display of three-dimensional radiologic image data.

    PubMed

    Silverstein, J; Rubenstein, J; Millman, A; Panko, W

    1998-01-01

    In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.

  3. Measuring Patient Preferences: An Overview of Methods with a Focus on Discrete Choice Experiments.

    PubMed

    Hazlewood, Glen S

    2018-05-01

    There is increasing recognition of the importance of patient preferences and methodologies to measure them. In this article, methods to quantify patient preferences are reviewed, with a focus on discrete choice experiments. In a discrete choice experiment, patients are asked to choose between 2 or more treatments. The results can be used to quantify the relative importance of treatment outcomes and/or other considerations relevant to medical decision making. Conducting and interpreting a discrete choice experiment requires multiple steps and an understanding of the potential biases that can arise, which we review in this article with examples in rheumatic diseases. Copyright © 2018 Elsevier Inc. All rights reserved.
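
    A minimal sketch of the core estimation step behind a discrete choice experiment, assuming a conditional (multinomial) logit model fitted to simulated choices; the attribute count, sample size and use of scipy are illustrative assumptions, not prescriptions from the article.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T, J, K = 200, 2, 3              # choice tasks, alternatives per task, attributes

    # X[t, j, k]: level of attribute k (e.g. efficacy, side-effect risk, dosing
    # burden) of alternative j in task t; choices simulated from known weights.
    X = rng.normal(size=(T, J, K))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = (X @ beta_true + rng.gumbel(size=(T, J))).argmax(axis=1)

    def neg_log_lik(beta):
        v = X @ beta                                  # systematic utilities
        v = v - v.max(axis=1, keepdims=True)          # numerical stability
        p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
        return -np.log(p[np.arange(T), y]).sum()

    beta_hat = minimize(neg_log_lik, np.zeros(K)).x   # recovers ~beta_true
    print("estimated attribute weights:", beta_hat.round(2))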

  4. Multi Bus DC-DC Converter in Electric Hybrid Vehicles

    NASA Astrophysics Data System (ADS)

    Krithika, V.; Subramaniam, C.; Sridharan, R.; Geetha, A.

    2018-04-01

    This paper is concerned with the design, simulation and fabrication of the prototype of a multi-bus DC-DC converter operating from 42V DC and delivering 14V DC and 260V DC. As a result, three DC buses are interconnected through a single power electronic circuit. Such a requirement arises in the development of a hybrid electric automobile which uses fuel cell technology. This is implemented by using a bidirectional DC-DC converter configuration which is ideally suited for multiple outputs with mutual electrical isolation. To reduce the size and cost of the step-up transformer, a high switching frequency of 10 kHz was selected.
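
    A back-of-envelope sketch of the bus relationships under ideal-converter assumptions (illustrative only; the actual bidirectional, isolated topology involves far more than these two ratios).

    V_IN, V_LOW, V_HIGH = 42.0, 14.0, 260.0
    F_SW = 10e3                            # chosen switching frequency, Hz

    duty_buck = V_LOW / V_IN               # ideal buck: Vout = D * Vin -> D = 1/3
    turns_ratio = V_HIGH / V_IN            # ideal isolation transformer -> n ~ 6.2

    print(f"buck duty cycle ~ {duty_buck:.2f}, "
          f"transformer turns ratio ~ {turns_ratio:.1f}, "
          f"switching at {F_SW / 1e3:.0f} kHz")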

  5. Generalizing genetical genomics: getting added value from environmental perturbation.

    PubMed

    Li, Yang; Breitling, Rainer; Jansen, Ritsert C

    2008-10-01

    Genetical genomics is a useful approach for studying the effect of genetic perturbations on biological systems at the molecular level. However, molecular networks depend on the environmental conditions and, thus, a comprehensive understanding of biological systems requires studying them across multiple environments. We propose a generalization of genetical genomics, which combines genetic and sensibly chosen environmental perturbations, to study the plasticity of molecular networks. This strategy forms a crucial step toward understanding why individuals respond differently to drugs, toxins, pathogens, nutrients and other environmental influences. Here we outline a strategy for selecting and allocating individuals to particular treatments, and we discuss the promises and pitfalls of the generalized genetical genomics approach.

  6. Tachycardia detection in ICDs by Boston Scientific : Algorithms, pearls, and pitfalls.

    PubMed

    Zanker, Norbert; Schuster, Diane; Gilkerson, James; Stein, Kenneth

    2016-09-01

    The aim of this study was to summarize how implantable cardioverter defibrillators (ICDs) by Boston Scientific sense, detect, discriminate rhythms, and classify episodes. Modern devices include multiple programming selections, diagnostic features, therapy options, memory functions, and device-related history features. Device operation includes logical steps from sensing, detection, discrimination, therapy delivery to history recording. The program is designed to facilitate the application of the device algorithms to the individual patient's clinical needs. Features and functions described in this article represent a selective excerpt by the authors from Boston Scientific publicly available product resources. Programming of ICDs may affect patient outcomes. Patient-adapted and optimized programming requires understanding of device operation and concepts.

  7. Metabolomics: A potential way to know the role of vitamin D on multiple sclerosis.

    PubMed

    Luque-Córdoba, Diego; Luque de Castro, María D

    2017-03-20

    The literature about the influence of vitamin D on multiple sclerosis (MS) is very controversial, possibly as a result of the way through which the research on the subject has been conducted. The studies developed so far have been focused exclusively on gene expression: the effect of a given vitamin D metabolite on target receptors. The influence of the vitamin D status (either natural or after supplementation) on MS has been studied by measurement of the 25-monohydroxylated metabolite (also known as the circulating form), even though the 1,25-dihydroxylated metabolite is considered the active form. In the light of the multiple metabolic pathways in which both forms of vitamin D (D2 and D3) are involved, monitoring of the metabolites is crucial to know the activity of the target enzymes as a function of both the state of the MS patient and the clinical treatment applied. The study of metabolomics aspects is here proposed to clarify the present controversy. In "omics" terms, our proposal is to profit from upstream information (that is, from metabolomics to genomics), with a potential subsequent step to systems biology, if required. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Evaluation of Direct Infusion-Multiple Reaction Monitoring Mass Spectrometry for Quantification of Heat Shock Proteins

    PubMed Central

    Xiang, Yun; Koomen, John M.

    2012-01-01

    Protein quantification with liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM) has emerged as a powerful platform for assessing panels of biomarkers. In this study, direct infusion, using automated, chip-based nanoelectrospray ionization, coupled with MRM (DI-MRM) is used for protein quantification. Removal of the LC separation step increases the importance of evaluating the ratios between the transitions. Therefore, the effects of solvent composition, analyte concentration, spray voltage, and quadrupole resolution settings on fragmentation patterns have been studied using peptide and protein standards. After DI-MRM quantification was evaluated for standards, quantitative assays for the expression of heat shock proteins (HSPs) were translated from LC-MRM to DI-MRM for implementation in cell line models of multiple myeloma. Requirements for DI-MRM assay development are described. Then, the two methods are compared; criteria for effective DI-MRM analysis are reported based on the analysis of HSP expression in digests of whole cell lysates. The increased throughput of DI-MRM analysis is useful for rapid analysis of large batches of similar samples, such as time course measurements of cellular responses to therapy. PMID:22293045

  9. Multiple-robot drug delivery strategy through coordinated teams of microswimmers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kei Cheang, U; Kim, Min Jun, E-mail: mkim@coe.drexel.edu; Lee, Kyoungwoo

    2014-08-25

    Untethered robotic microswimmers are very promising to significantly improve various types of minimally invasive surgeries by offering high accuracy at extremely small scales. A prime example is drug delivery, for which a large number of microswimmers is required to deliver sufficient dosages to target sites. For this reason, the controllability of groups of microswimmers is essential. In this paper, we demonstrate simultaneous control of multiple geometrically similar but magnetically different microswimmers using a single global rotating magnetic field. By exploiting the differences in their magnetic properties, we triggered different swimming behaviors from the microswimmers by controlling the frequency and the strength of the global field; for example, one swims and the other does not while exposed to the same control input. Our results show that the balance between the applied magnetic torque and the hydrodynamic torque can be exploited for simultaneous control of two microswimmers to swim in opposite directions, with different velocities, and with similar velocities. This work will serve to establish important concepts for future developments of control systems to manipulate multiple magnetically actuated microswimmers and is a step towards using swarms of microswimmers as viable workforces for complex operations.
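
    One way to see how a single global field can address swimmers selectively is the step-out frequency, above which the available magnetic torque can no longer keep a swimmer rotating synchronously with the field; a minimal sketch with entirely hypothetical parameter values.

    import numpy as np

    def step_out_frequency(m, B, gamma):
        """Frequency (Hz) above which a swimmer cannot follow the rotating
        field: magnetic torque m*B equals viscous torque gamma*omega."""
        return m * B / (2.0 * np.pi * gamma)

    # Two geometrically similar swimmers with different magnetic moments
    # (all values hypothetical, SI units).
    gamma = 1e-20                     # rotational drag coefficient, N*m*s
    B = 5e-3                          # field strength, T
    m_strong, m_weak = 2e-16, 5e-17   # magnetic moments, A*m^2

    f_strong = step_out_frequency(m_strong, B, gamma)
    f_weak = step_out_frequency(m_weak, B, gamma)
    # Driving between the two step-out frequencies rotates, and hence
    # propels, only the strongly magnetized swimmer.
    print(f"step-out: strong ~{f_strong:.0f} Hz, weak ~{f_weak:.0f} Hz")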

  10. Sensitivity enhancement by multiple-contact cross-polarization under magic-angle spinning.

    PubMed

    Raya, J; Hirschinger, J

    2017-08-01

    Multiple-contact cross-polarization (MC-CP) is applied to powder samples of ferrocene and l-alanine under magic-angle spinning (MAS) conditions. The method is described analytically through the density matrix formalism. The combination of a two-step memory function approach and the Anderson-Weiss approximation is found to be particularly useful to derive approximate analytical solutions for single-contact Hartmann-Hahn CP (HHCP) and MC-CP dynamics under MAS. We show that the MC-CP sequence requiring no pulse-shape optimization yields higher polarizations at short contact times than optimized adiabatic passage through the HH condition CP (APHH-CP) when the MAS frequency is comparable to the heteronuclear dipolar coupling, i.e., when APHH-CP through a single sideband matching condition is impossible or difficult to perform. It is also shown that the MC-CP sideband HH conditions are generally much broader than for single-contact HHCP and that efficient polarization transfer at the centerband HH condition can be reintroduced by rotor-asynchronous multiple equilibrations-re-equilibrations with the proton spin bath. Boundary conditions for the successful use of the MC-CP experiment when relying on spin-lattice relaxation for repolarization are also examined. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Simultaneous entanglement swapping of multiple orbital angular momentum states of light.

    PubMed

    Zhang, Yingwen; Agnew, Megan; Roger, Thomas; Roux, Filippus S; Konrad, Thomas; Faccio, Daniele; Leach, Jonathan; Forbes, Andrew

    2017-09-21

    High-bit-rate long-distance quantum communication is a proposed technology for future communication networks and relies on high-dimensional quantum entanglement as a core resource. While it is known that spatial modes of light provide an avenue for high-dimensional entanglement, the ability to transport such quantum states robustly over long distances remains challenging. To overcome this, entanglement swapping may be used to generate remote quantum correlations between particles that have not interacted; this is the core ingredient of a quantum repeater, akin to repeaters in optical fibre networks. Here we demonstrate entanglement swapping of multiple orbital angular momentum states of light. Our approach does not distinguish between different anti-symmetric states, and thus entanglement swapping occurs for several thousand pairs of spatial light modes simultaneously. This work represents the first step towards a quantum network for high-dimensional entangled states and provides a test bed for fundamental tests of quantum science. Entanglement swapping in high dimensions requires large numbers of entangled photons and consequently suffers from low photon flux. Here the authors demonstrate entanglement swapping of multiple spatial modes of light simultaneously, without the need for increasing the photon numbers with dimension.

  13. Developing a national dental education research strategy: priorities, barriers and enablers

    PubMed Central

    Barton, Karen L; Dennis, Ashley A; Rees, Charlotte E

    2017-01-01

    Objectives This study aimed to identify national dental education research (DER) priorities for the next 3–5 years and to identify barriers and enablers to DER. Setting Scotland. Participants In this two-stage online questionnaire study, we collected data with multiple dental professions (eg, dentistry, dental nursing and dental hygiene) and stakeholder groups (eg, learners, clinicians, educators, managers, researchers and academics). Eighty-five participants completed the Stage 1 qualitative questionnaire and 649 participants the Stage 2 quantitative questionnaire. Results Eight themes were identified at Stage 1. Of the 24 DER priorities identified, the top three were: role of assessments in identifying competence; undergraduate curriculum prepares for practice and promoting teamwork. Following exploratory factor analysis, the 24 items loaded onto four factors: teamwork and professionalism, measuring and enhancing performance, dental workforce issues and curriculum integration and innovation. Barriers and enablers existed at multiple levels: individual, interpersonal, institutional structures and cultures and technology. Conclusions This priority setting exercise provides a necessary first step to developing a national DER strategy capturing multiple perspectives. Promoting DER requires improved resourcing alongside efforts to overcome peer stigma and lack of valuing and motivation. PMID:28360237

  14. Risk of falls in older people during fast-walking: the TASCOG study.

    PubMed

    Callisaya, M L; Blizzard, L; McGinley, J L; Srikanth, V K

    2012-07-01

    To investigate the relationship between fast-walking and falls in older people. Individuals aged 60-86 years were randomly selected from the electoral roll (n=176). Gait speed, step length, cadence and a walk ratio were recorded during preferred- and fast-walking using an instrumented walkway. Falls were recorded prospectively over 12 months. Log multinomial regression was used to estimate the relative risk of single and multiple falls associated with gait variables during fast-walking and change between preferred- and fast-walking. Covariates included age, sex, mood, physical activity, sensorimotor and cognitive measures. The risk of multiple falls was increased for those with a smaller walk ratio (shorter steps, faster cadence) during fast-walking (RR 0.92, CI 0.87, 0.97) and greater reduction in the walk ratio (smaller increase in step length, larger increase in cadence) when changing to fast-walking (RR 0.73, CI 0.63, 0.85). These gait patterns were associated with poorer physiological and cognitive function (p<0.05). A higher risk of multiple falls was also seen for those in the fastest quarter of gait speed (p=0.01) at fast-walking. A trend for better reaction time, balance, memory and physical activity for higher categories of gait speed was stronger for fallers than non-fallers (p<0.05). Tests of fast-walking may be useful in identifying older individuals at risk of multiple falls. There may be two distinct groups at risk: the frail person with short shuffling steps, and the healthy person exposed to greater risk. Copyright © 2012 Elsevier B.V. All rights reserved.
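
    For clarity, the walk ratio referred to above is step length divided by cadence, so shorter, faster steps give a smaller ratio; a one-function sketch with hypothetical numbers.

    def walk_ratio(step_length_cm, cadence_steps_per_min):
        """Walk ratio: step length (cm) / cadence (steps/min)."""
        return step_length_cm / cadence_steps_per_min

    # Hypothetical values for one participant, preferred vs fast walking.
    preferred = walk_ratio(65.0, 110.0)   # ~0.59
    fast = walk_ratio(68.0, 135.0)        # ~0.50: shorter relative steps, faster cadence
    print("reduction in walk ratio: %.2f" % (preferred - fast))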

  15. Quartz/fused silica chip carriers

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The primary objective of this research and development effort was to develop monolithic microwave integrated circuit (MMIC) packaging which will operate efficiently at millimeter-wave frequencies. The packages incorporated fused silica as the substrate material which was selected due to its favorable electrical properties and potential performance improvement over more conventional materials for Ka-band operation. The first step towards meeting this objective is to develop a package that meets standard mechanical and thermal requirements using fused silica and to be compatible with semiconductor devices operating up to at least 44 GHz. The second step is to modify the package design and add multilayer and multicavity capacity to allow for application specific integrated circuits (ASIC's) to control multiple phase shifters. The final step is to adapt the package design to a phased array module with integral radiating elements. The first task was a continuation of the SBIR Phase 1 work. Phase 1 identified fused silica as a viable substrate material by demonstrating various plating, machining, and adhesion properties. In Phase 2 Task 1, a package was designed and fabricated to validate these findings. Task 2 was to take the next step in packaging and fabricate a multilayer, multichip module (MCM). This package is the predecessor to the phased array module and demonstrates the ability to via fill, circuit print, laminate, and to form vertical interconnects. The final task was to build a phased array module. The radiating elements were to be incorporated into the package instead of connecting to it with wire or ribbon bonds.

  16. Qualis-SIS: automated standard curve generation and quality assessment for multiplexed targeted quantitative proteomic experiments with labeled standards.

    PubMed

    Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H

    2015-02-06

    Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
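
    A simplified sketch of the kind of standard-curve quality control such a tool automates; the linear response model, the 20% tolerances and the flagging logic are illustrative assumptions, not Qualis-SIS's actual algorithms.

    import numpy as np

    def qc_standard_curve(levels, replicates, acc_tol=0.20, cv_tol=0.20):
        """Fit response = a*conc + b to level means, then flag levels whose
        back-calculated accuracy error or replicate CV exceeds tolerance.
        A full tool would iterate, removing failing levels and refitting."""
        means = np.array([r.mean() for r in replicates])
        a, b = np.polyfit(levels, means, 1)
        back_calc = (means - b) / a                    # back-calculated conc
        acc_err = np.abs(back_calc - levels) / levels
        cv = np.array([r.std(ddof=1) / r.mean() for r in replicates])
        passed = (acc_err <= acc_tol) & (cv <= cv_tol)
        return (a, b), passed

    levels = np.array([1.0, 5.0, 25.0, 125.0])         # hypothetical concentrations
    replicates = [np.array([2.1, 1.7]), np.array([9.8, 10.4]),
                  np.array([51.0, 49.2]), np.array([251.0, 246.0])]
    (slope, intercept), passed = qc_standard_curve(levels, replicates)
    print("slope %.2f, intercept %.2f, levels passing QC: %s" % (slope, intercept, passed))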

  17. Functional analysis of aromatic biosynthetic pathways in Pseudomonas putida KT2440

    PubMed Central

    Molina‐Henares, M. Antonia; García‐Salamanca, Adela; Molina‐Henares, A. Jesús; De La Torre, Jesús; Herrera, M. Carmen; Ramos, Juan L.; Duque, Estrella

    2009-01-01

    Summary Pseudomonas putida KT2440 is a non‐pathogenic prototrophic bacterium with high potential for biotechnological applications. Despite all that is known about this strain, the biosynthesis of essential chemicals has not been fully analysed and auxotroph mutants are scarce. We carried out massive mini‐Tn5 random mutagenesis and screened for auxotrophs that require aromatic amino acids. The biosynthesis of aromatic amino acids was analysed in detail including physical and transcriptional organization of genes, complementation assays and feeding experiments to establish pathway intermediates. There is a single pathway from chorismate leading to the biosynthesis of tryptophan, whereas the biosynthesis of phenylalanine and tyrosine is achieved through multiple convergent pathways. Genes for tryptophan biosynthesis are grouped in unlinked regions with the trpBA and trpGDE genes organized as operons and the trpI, trpE and trpF genes organized as single transcriptional units. The pheA and tyrA gene‐encoding multifunctional enzymes for phenylalanine and tyrosine biosynthesis are linked in the chromosome and form an operon with the serC gene involved in serine biosynthesis. The last step in the biosynthesis of these two amino acids requires an amino transferase activity for which multiple tyrB‐like genes are present in the host chromosome. PMID:21261884

  18. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    [Garbled extraction fragments from the report body. Recoverable content: the solver internally controls the integration step size, independently of the output step; cited works include "Coupling of Substructures for Dynamic Analyses," AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319, and "Using the State-Dependent Modal Force (MFORCE)"; the morphing concept uses an actuation system of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration.]

  19. A comparison of the effects of visual deprivation and regular body weight support treadmill training on improving over-ground walking of stroke patients: a multiple baseline single subject design.

    PubMed

    Kim, Jeong-Soo; Kang, Sun-Young; Jeon, Hye-Seon

    2015-01-01

    The body-weight-support treadmill (BWST) is commonly used for gait rehabilitation, but other forms of BWST are in development, such as visual-deprivation BWST (VDBWST). In this study, we compare the effect of VDBWST training and conventional BWST training on spatiotemporal gait parameters for three individuals who had hemiparetic strokes. We used a single-subject experimental design, alternating multiple baselines across the individuals. We recruited three individuals with hemiparesis from stroke: two on the left side and one on the right. For the main outcome measures we assessed spatiotemporal gait parameters using GAITRite, including: gait velocity; cadence; step time of the affected side (STA); step time of the non-affected side (STN); step length of the affected side (SLA); step length of the non-affected side (SLN); step-time asymmetry (ST-asymmetry); and step-length asymmetry (SL-asymmetry). Gait velocity, cadence, SLA, and SLN increased from baseline after both interventions, but STA, ST-asymmetry, and SL-asymmetry decreased from the baseline after the interventions. The VDBWST was significantly more effective than the BWST for increasing gait velocity and cadence and for decreasing ST-asymmetry. VDBWST is more effective than BWST for improving gait performance during rehabilitation for over-ground walking.

  20. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    ERIC Educational Resources Information Center

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  1. On-chip human microvasculature assay for visualization and quantitation of tumor cell extravasation dynamics

    PubMed Central

    Chen, Michelle B.; Whisler, Jordan A.; Fröse, Julia; Yu, Cathy; Shin, Yoojin

    2017-01-01

    Distant metastasis, which results in >90% of cancer-related deaths, is enabled by hematogenous dissemination of tumor cells via the circulation. This requires the completion of a sequence of complex steps including transit, initial arrest, extravasation, survival and proliferation. Increased understanding of the cellular and molecular players enabling each of these steps is key in uncovering new opportunities for therapeutic intervention during early metastatic dissemination. Here, we describe an in vitro model of the human microcirculation with the potential to recapitulate discrete steps of early metastatic seeding, including arrest, transendothelial migration and early micrometastases formation. The microdevice features self-organized human microvascular networks formed over 4–5 days, after which tumor cells can be perfused and extravasation events easily tracked over 72 hours via standard confocal microscopy. Contrary to most in vivo and in vitro extravasation assays, robust and rapid scoring of extravascular cells combined with high-resolution imaging can be easily achieved due to the confinement of the vascular network to one plane close to the surface of the device. This renders extravascular cells clearly distinct and allows tumor cells of interest to be identified quickly compared to those in thick tissues. The ability to generate large numbers of devices (~36) per experiment coupled with fast quantitation further allows for highly parametric studies, which is required when testing multiple genetic or pharmacological perturbations. This is coupled with the capability for live tracking of single-cell extravasation events, allowing both tumor and endothelial morphological dynamics to be observed in high detail with a moderate number of data points. This Protocol Extension describes an adaptation of an existing Protocol describing a microfluidic platform that offers additional applications. PMID:28358393

  2. A methodology for evaluating organizational change in community-based chronic disease interventions.

    PubMed

    Hanni, Krista D; Mendoza, Elsa; Snider, John; Winkleby, Marilyn A

    2007-10-01

    In 2003, the Monterey County Health Department, serving Salinas, California, was awarded one of 12 grants from the Steps to a HealthierUS Program to implement a 5-year, multiple-intervention community approach to reduce diabetes, asthma, and obesity. National adult and youth surveys to assess long-term outcomes are required by all Steps sites; however, site-specific surveys to assess intermediate outcomes are not required. Salinas is a medically underserved community of primarily Mexican American residents with high obesity rates and other poor health outcomes. The health department's Steps program has partnered with traditional organizations such as schools, senior centers, clinics, and faith-based organizations as well as novel organizations such as employers of agricultural workers and owners of taquerias. The health department and the Stanford Prevention Research Center developed new site-specific, community-focused partner surveys to assess intermediate outcomes to augment the nationally mandated surveys. These site-specific surveys will evaluate changes in organizational practices, policies, or both following the socioecological model, specifically the Spectrum of Prevention. Our site-specific partner surveys helped to 1) identify promising new partners, select initial partners from neighborhoods with the greatest financial need, and identify potentially successful community approaches; and 2) provide data for evaluating intermediate outcomes matched to national long-term outcomes so that policy and organizational level changes could be assessed. These quantitative surveys also provide important context-specific qualitative data, identifying opportunities for strengthening community partnerships. Developing site-specific partner surveys in multisite intervention studies can provide important data to guide local program efforts and assess progress toward intermediate outcomes matched to long-term outcomes from nationally mandated surveys.

  3. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching the optimal step-size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence-based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
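
    As a point of reference for the optimization being accelerated here, the plain MUR iteration for the squared-Euclidean GNMF objective ||X - WH||_F^2 + lambda*tr(H L H^T) is compact enough to sketch. The NumPy version below is an illustrative baseline under that standard objective, not the authors' L-FGD code; it shows the fixed rescaled-gradient step whose non-optimal size causes the slow convergence discussed above.

        import numpy as np

        def gnmf_mur(X, A, r, lam=0.1, iters=200, eps=1e-9):
            # Multiplicative updates for GNMF with graph adjacency A (n x n);
            # L = D - A is the graph Laplacian. This is the slow baseline
            # that MFGD/L-FGD accelerate with an optimized step size.
            m, n = X.shape
            rng = np.random.default_rng(0)
            W = rng.random((m, r))
            H = rng.random((r, n))
            D = np.diag(A.sum(axis=1))
            for _ in range(iters):
                W *= (X @ H.T) / (W @ (H @ H.T) + eps)
                H *= (W.T @ X + lam * (H @ A)) / (W.T @ W @ H + lam * (H @ D) + eps)
            return W, H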

  4. Treating electron transport in MCNP{sup trademark}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum, slowing down from 0.5 MeV to 0.0625 MeV, will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport infeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
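
    The stepping logic described above reduces to a short loop. The sketch below is a toy condensed-history random walk: the Gaussian distributions are placeholders standing in for the Goudsmit-Saunderson angular distribution and the Landau/Blunck-Leisegang straggling distributions that MCNP actually samples, so only the structure of the method is illustrated.

        import numpy as np

        def condensed_history(e0=0.5, e_cut=0.0625, frac=0.05, seed=1):
            # Each pass through the loop is one "step" subsuming many
            # collisions: sample an energy loss and an angular deflection,
            # with the step sized so the mean fractional loss stays small.
            rng = np.random.default_rng(seed)
            e, theta = e0, 0.0
            while e > e_cut:
                mean_loss = frac * e
                loss = max(0.0, rng.normal(mean_loss, 0.2 * mean_loss))  # placeholder straggling
                e -= loss
                theta += rng.normal(0.0, 0.05)  # placeholder multiple-scattering deflection
            return e, theta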

  5. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
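
    The single-step merge with correct statistical weighting can be illustrated for one reciprocal-space voxel visited by several sample orientations. The sketch below assumes Poisson counting statistics and a precomputed Lorentz/spectrum correction factor per visit; it illustrates the weighting principle only, not the TOPAZ reduction code.

        import numpy as np

        def merge_voxel(counts, corrections):
            # Sum raw counts and correction factors before dividing, so the
            # Poisson weight of each visit is preserved; correcting each
            # visit first and then averaging would degrade the statistics.
            counts = np.asarray(counts, dtype=float)
            corrections = np.asarray(corrections, dtype=float)
            intensity = counts.sum() / corrections.sum()
            sigma = np.sqrt(counts.sum()) / corrections.sum()
            return intensity, sigma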

  6. Minimal requirement for induction of natural cytotoxicity and intersection of activation signals by inhibitory receptors.

    PubMed

    Bryceson, Yenan T; Ljunggren, Hans-Gustaf; Long, Eric O

    2009-09-24

    Natural killer (NK) cells provide innate control of infected and neoplastic cells. Multiple receptors have been implicated in natural cytotoxicity, but their individual contribution remains unclear. Here, we studied the activation of primary, resting human NK cells by Drosophila cells expressing ligands for receptors NKG2D, DNAM-1, 2B4, CD2, and LFA-1. Each receptor was capable of inducing inside-out signals for LFA-1, promoting adhesion, but none induced degranulation. Rather, release of cytolytic granules required synergistic activation through coengagement of receptors, shown here for NKG2D and 2B4. Although engagement of NKG2D and 2B4 was not sufficient for strong target cell lysis, collective engagement of LFA-1, NKG2D, and 2B4 defined a minimal requirement for natural cytotoxicity. Remarkably, inside-out signaling induced by each one of these receptors, including LFA-1, was inhibited by receptor CD94/NKG2A binding to HLA-E. Strong inside-out signals induced by the combination of NKG2D and 2B4 or by CD16 could overcome CD94/NKG2A inhibition. In contrast, degranulation induced by these receptors was still subject to inhibition by CD94/NKG2A. These results reveal multiple layers in the activation pathway for natural cytotoxicity and that steps as distinct as inside-out signaling to LFA-1 and signals for granule release are sensitive to inhibition by CD94/NKG2A.

  7. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE PAGES

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...

    2016-03-01

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  8. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  9. Self-Assembly through Noncovalent Preorganization of Reactants: Explaining the Formation of a Polyfluoroxometalate.

    PubMed

    Schreiber, Roy E; Avram, Liat; Neumann, Ronny

    2018-01-09

    High-order elementary reactions in homogeneous solutions involving more than two molecules are statistically improbable and very slow to proceed. They are not generally considered in classical transition-state or collision theories. Yet, rather selective, high-yield product formation is common in self-assembly processes that require many reaction steps. On the basis of recent observations of crystallization as well as reactions in dense phases, it is shown that self-assembly can occur by preorganization of reactants in a noncovalent supramolecular assembly, whereby directing forces can lead to an apparent one-step transformation of multiple reactants. A simple and general kinetic model for multiple reactant transformation in a dense phase that can account for many-bodied transformations was developed. Furthermore, the self-assembly of the polyfluoroxometalate anion [H2F6NaW18O56]7- from simple tungstate Na2WO2F4 was demonstrated by using 2D 19F-19F NOESY, 2D 19F-19F COSY NMR spectroscopy, a new 2D 19F{183W} NMR technique, as well as ESI-MS and diffusion NMR spectroscopy, and the crucial involvement of a supramolecular assembly was found. The deterministic kinetic reaction model explains the reaction in a dense phase and supports the suggested self-assembly mechanism. Reactions in dense phases may be of general importance in understanding other self-assembly reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
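
    The opening claim, that elementary steps of high molecular order are vanishingly slow in dilute solution, can be checked with generic mass-action kinetics. The sketch below (ordinary mass action with an assumed rate constant, not the authors' dense-phase model) integrates n A -> P for increasing order n and shows conversion collapsing as n grows.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rate(t, y, k, n):
            # d[A]/dt for the elementary step n A -> P with rate k*[A]**n
            return [-n * k * y[0] ** n]

        for n in (2, 4, 8):
            sol = solve_ivp(rate, (0.0, 1e3), [0.01], args=(1.0, n))
            print(n, sol.y[0][-1])  # remaining [A]: nearly untouched for large n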

  10. Multi-profile Bayesian alignment model for LC-MS data analysis with integration of internal standards

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Di Poto, Cristina; Pannell, Lewis K.; Mechref, Yehia; Wang, Yue; Ressom, Habtom W.

    2013-01-01

    Motivation: Liquid chromatography-mass spectrometry (LC-MS) has been widely used for profiling expression levels of biomolecules in various ‘-omic’ studies including proteomics, metabolomics and glycomics. Appropriate LC-MS data preprocessing steps are needed to detect true differences between biological groups. Retention time (RT) alignment, which is required to ensure that ion intensity measurements among multiple LC-MS runs are comparable, is one of the most important yet challenging preprocessing steps. Current alignment approaches estimate RT variability using either single chromatograms or detected peaks, but do not simultaneously take into account the complementary information embedded in the entire LC-MS data. Results: We propose a Bayesian alignment model for LC-MS data analysis. The alignment model provides estimates of the RT variability along with uncertainty measures. The model enables integration of multiple sources of information including internal standards and clustered chromatograms in a mathematically rigorous framework. We apply the model to LC-MS metabolomic, proteomic and glycomic data. The performance of the model is evaluated based on ground-truth data, by measuring correlation of variation, RT difference across runs and peak-matching performance. We demonstrate that the Bayesian alignment model significantly improves RT alignment performance through appropriate integration of relevant information. Availability and implementation: MATLAB code, raw and preprocessed LC-MS data are available at http://omics.georgetown.edu/alignLCMS.html Contact: hwr@georgetown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24013927
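
    A minimal deterministic stand-in for the alignment task shows how internal standards anchor the warp: observed retention times of spiked standards are mapped onto their reference times, and the warp is interpolated for all other features. Unlike the Bayesian model above, this sketch (hypothetical inputs, piecewise-linear warp) returns no uncertainty estimates.

        import numpy as np

        def warp_rt(rt, std_observed, std_reference):
            # Piecewise-linear RT correction anchored on internal standards.
            order = np.argsort(std_observed)
            return np.interp(rt, np.asarray(std_observed, float)[order],
                             np.asarray(std_reference, float)[order])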

  11. [Local ice application in therapy of kinetic limb ataxia. Clinical assessment of positive treatment effects in patients with multiple sclerosis].

    PubMed

    Albrecht, H; Schwecht, M; Pöllmann, W; Parag, D; Erasmus, L P; König, N

    1998-12-01

    Upper limb ataxia is one of the most disabling symptoms of patients with multiple sclerosis (MS). There are some clinically tested therapeutic strategies, especially with regard to cerebellar tremor. But most of the methods used for treatment of limb ataxia in physiotherapy and occupational therapy are not systematically evaluated, e.g. the effect of local ice applications as reported by MS patients and therapists. We investigated 21 MS patients before and, in several steps, from 1 up to 45 min after cooling the most affected forearm. We used a series of 6 tests, including parts of the neurological status as well as activities of daily living. At each step, skin temperature and nerve conduction velocity were recorded. All tests were documented on video for later offline analysis. Standardized evaluation was done by the investigators and separately by an independent second team, both using numeric scales for quality of performance. After local cooling all patients showed a positive effect, especially a reduction of intention tremor. In most cases this effect lasted 45 min, in some patients even longer. We presume that a decrease in the proprioceptive afferent inflow, induced by cooling, may be the probable cause of this reduction of cerebellar tremor. Patients can use ice applications as a method of treating themselves when a short-time reduction of intention tremor is required, e.g. for typing, signing or self-catheterization.

  12. Multiple-step preparation and physicochemical characterization of crystalline α-germanium hydrogenphosphate

    NASA Astrophysics Data System (ADS)

    Romano, Ricardo; Ruiz, Ana I.; Alves, Oswaldo L.

    2004-04-01

    The reaction between germanium oxide and phosphoric acid has previously been described and led to impure germanium hydrogenphosphate samples with low crystallinity. A new multiple-step route involving the same reaction under refluxing and soft hydrothermal conditions is described for the preparation of pure and crystalline α-GeP. The physicochemical characterization of the samples allows the reaction evolution to be followed as well as the short- and long-range structural organization to be determined. The phase purity of the α-GeP sample was confirmed by applying Rietveld's profile analysis, which also determined the cell parameters of its crystals.

  13. A facile single-step synthesis of ternary multicore magneto-plasmonic nanoparticles.

    PubMed

    Benelmekki, Maria; Bohra, Murtaza; Kim, Jeong-Hwan; Diaz, Rosa E; Vernieres, Jerome; Grammatikopoulos, Panagiotis; Sowwan, Mukhles

    2014-04-07

    We report a facile single-step synthesis of ternary hybrid nanoparticles (NPs) composed of multiple dumbbell-like iron-silver (FeAg) cores encapsulated by a silicon (Si) shell using a versatile co-sputter gas-condensation technique. In comparison to previously reported binary magneto-plasmonic NPs, the advantage conferred by a Si shell is that it binds the multiple magneto-plasmonic (FeAg) cores together while preventing their aggregation. Further, we demonstrate that the size of the NPs and the number of cores in each NP can be modulated over a wide range by tuning the experimental parameters.

  14. An easy-to-use calculating machine to simulate steady state and non-steady-state preparative separations by multiple dual mode counter-current chromatography with semi-continuous loading of feed mixtures.

    PubMed

    Kostanyan, Artak E; Shishilov, Oleg N

    2018-06-01

    Multiple dual mode counter-current chromatography (MDM CCC) separation processes with semi-continuous large sample loading consist of a succession of two counter-current steps: with "x" phase (first step) and "y" phase (second step) flow periods. A feed mixture dissolved in the "x" phase is continuously loaded into a CCC machine at the beginning of the first step of each cycle over a constant time with the volumetric rate equal to the flow rate of the pure "x" phase. An easy-to-use calculating machine is developed to simulate the chromatograms and the amounts of solutes eluted with the phases at each cycle for steady-state (the duration of the flow periods of the phases is kept constant for all the cycles) and non-steady-state (with variable duration of alternating phase elution steps) separations. Using the calculating machine, the separation of mixtures containing up to five components can be simulated and designed. Examples of the application of the calculating machine for the simulation of MDM CCC processes are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
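
    The cycle bookkeeping behind such a calculating machine can be imitated with a Craig-type cell model. The sketch below is a crude toy, assuming a single solute, a constant partition ratio and discrete transfer steps, none of which is taken from the paper; it only illustrates the alternation of "x"-phase and "y"-phase flow periods within each cycle.

        import numpy as np

        def mdm_ccc(n_cells=50, k=1.5, steps_x=30, steps_y=20, cycles=5):
            # Craig-type cell model: fraction p of the solute in each cell
            # travels with the "x" phase; the remainder stays with the "y" phase.
            p = k / (1.0 + k)
            cells = np.zeros(n_cells)
            cells[0] = 1.0                      # sample loaded at the inlet
            eluted = []
            for _ in range(cycles):
                out_x = 0.0
                for _ in range(steps_x):        # first step: "x" phase flows forward
                    mobile = p * cells
                    cells -= mobile
                    out_x += mobile[-1]
                    cells[1:] += mobile[:-1]
                out_y = 0.0
                for _ in range(steps_y):        # second step: "y" phase flows backward
                    mobile = (1.0 - p) * cells
                    cells -= mobile
                    out_y += mobile[0]
                    cells[:-1] += mobile[1:]
                eluted.append((out_x, out_y))   # amounts leaving with each phase per cycle
            return eluted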

  15. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
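
    The stochastic-sampling half of the two-step method has a simple generic shape: perturb the cross sections according to their covariance, rerun the simulator, and take statistics of the output. The sketch below assumes Gaussian cross-section uncertainties and an arbitrary callable model; it illustrates the sampling loop, not XSUSA or the benchmark codes.

        import numpy as np

        def sampled_uncertainty(model, mean_xs, cov_xs, n_samples=300, seed=2):
            # Draw correlated cross-section sets, evaluate the core model on
            # each, and report the mean and standard deviation of the output
            # (e.g. the multiplication factor).
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(mean_xs, cov_xs, size=n_samples)
            k = np.array([model(xs) for xs in samples])
            return k.mean(), k.std(ddof=1)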

  16. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple-alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable since the homologies derived from the implied alignment depend on its natal cladogram and any variance between DO and IA + Search due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  17. Stability of the neurotensin receptor NTS1 free in detergent solution and immobilized to affinity resin.

    PubMed

    White, Jim F; Grisshammer, Reinhard

    2010-09-07

    Purification of recombinant membrane receptors is commonly achieved by use of an affinity tag followed by an additional chromatography step if required. This second step may exploit specific receptor properties such as ligand binding. However, the effects of multiple purification steps on protein yield and integrity are often poorly documented. We have previously reported a robust two-step purification procedure for the recombinant rat neurotensin receptor NTS1 to give milligram quantities of functional receptor protein. First, histidine-tagged receptors are enriched by immobilized metal affinity chromatography using Ni-NTA resin. Second, remaining contaminants in the Ni-NTA column eluate are removed by use of a subsequent neurotensin column yielding pure NTS1. Whilst the neurotensin column eluate contained functional receptor protein, we observed in the neurotensin column flow-through misfolded NTS1. To investigate the origin of the misfolded receptors, we estimated the amount of functional and misfolded NTS1 at each purification step by radio-ligand binding, densitometry of Coomassie stained SDS-gels, and protein content determination. First, we observed that correctly folded NTS1 suffers damage by exposure to detergent and various buffer compositions as seen by the loss of [(3)H]neurotensin binding over time. Second, exposure to the neurotensin affinity resin generated additional misfolded receptor protein. Our data point towards two ways by which misfolded NTS1 may be generated: Damage by exposure to buffer components and by close contact of the receptor to the neurotensin affinity resin. Because NTS1 in detergent solution is stabilized by neurotensin, we speculate that the occurrence of aggregated receptor after contact with the neurotensin resin is the consequence of perturbations in the detergent belt surrounding the NTS1 transmembrane core. Both effects reduce the yield of functional receptor protein.

  18. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…
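
    The modeling question at issue, whether one lognormal suffices for the gaze-step histogram, reduces to a distribution fit. The snippet below fits a single lognormal to synthetic placeholder data with SciPy; comparing its likelihood against an additive fixation-plus-saccade mixture is the crux of the exchange.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        gaze_steps = rng.lognormal(mean=0.0, sigma=0.8, size=2000)  # placeholder data
        shape, loc, scale = stats.lognorm.fit(gaze_steps, floc=0)   # single-lognormal fit
        loglik = stats.lognorm.logpdf(gaze_steps, shape, loc, scale).sum()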

  19. The use of poly-cation oxides to lower the temperature of two-step thermochemical water splitting

    DOE PAGES

    Zhai, Shang; Rojas, Jimmy; Ahlborg, Nadia; ...

    2018-01-01

    We report the discovery of a new class of oxides – poly-cation oxides (PCOs) – that consist of multiple cations and can thermochemically split water in a two-step cycle to produce hydrogen (H 2 ) and oxygen (O 2 ).

  20. Applied Missing Data Analysis. Methodology in the Social Sciences Series

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2010-01-01

    Walking readers step by step through complex concepts, this book translates missing data techniques into something that applied researchers and graduate students can understand and utilize in their own research. Enders explains the rationale and procedural details for maximum likelihood estimation, Bayesian estimation, multiple imputation, and…

  1. Programming 2D/3D shape-shifting with hobbyist 3D printers

    PubMed Central

    van Manen, Teunis; Janbaz, Shahram

    2017-01-01

    Materials and devices with advanced functionalities often need to combine complex 3D shapes with functionality-inducing surface features. Precisely controlled bio-nanopatterns, printed electronic components, and sensors/actuators are all examples of such surface features. However, the vast majority of the refined technologies that are currently available for creating functional surface features work only on flat surfaces. Here we present initially flat constructs that upon triggering by high temperatures change their shape to a pre-programmed 3D shape, thereby enabling the combination of surface-related functionalities with complex 3D shapes. A number of shape-shifting materials have been proposed during the last few years based on various types of advanced technologies. The proposed techniques often require multiple fabrication steps and special materials, while being limited in terms of the 3D shapes they could achieve. The approach presented here is a single-step printing process that requires only a hobbyist 3D printer and inexpensive off-the-shelf materials. It also lends itself to a host of design strategies based on self-folding origami, instability-driven pop-up, and ‘sequential’ shape-shifting to unprecedentedly expand the space of achievable 3D shapes. This combination of simplicity and versatility is a key to widespread applications. PMID:29308207

  2. The South African Tuberculosis Care Cascade: Estimated Losses and Methodological Challenges

    PubMed Central

    Naidoo, Pren; Theron, Grant; Rangaka, Molebogeng X; Chihota, Violet N; Vaughan, Louise; Brey, Zameer O; Pillay, Yogan

    2017-01-01

    Background: While tuberculosis incidence and mortality are declining in South Africa, meeting the goals of the End TB Strategy requires an invigorated programmatic response informed by accurate data. Enumerating the losses at each step in the care cascade enables appropriate targeting of interventions and resources. Methods: We estimated the tuberculosis burden; the number and proportion of individuals with tuberculosis who accessed tests, had tuberculosis diagnosed, initiated treatment, and successfully completed treatment for all tuberculosis cases, for those with drug-susceptible tuberculosis (including human immunodeficiency virus (HIV)–coinfected cases) and rifampicin-resistant tuberculosis. Estimates were derived from national electronic tuberculosis register data, laboratory data, and published studies. Results: The overall tuberculosis burden was estimated to be 532,005 cases (range, 333,760–764,480 cases), with successful completion of treatment in 53% of cases. Losses occurred at multiple steps: 5% at test access, 13% at diagnosis, 12% at treatment initiation, and 17% at successful treatment completion. Overall losses were similar among all drug-susceptible cases and those with HIV coinfection (54% and 52%, respectively, successfully completed treatment). Losses were substantially higher among rifampicin-resistant cases, with only 22% successfully completing treatment. Conclusion: Although the vast majority of individuals with tuberculosis engaged the public health system, just over half were successfully treated. Urgent efforts are required to improve implementation of existing policies and protocols to close gaps in tuberculosis diagnosis, treatment initiation, and successful treatment completion. PMID:29117342
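
    The cascade arithmetic reported above treats each loss as a share of the estimated total burden, so the step losses and the 53% completion figure are consistent. A short script reproduces the bookkeeping from the numbers in the abstract.

        burden = 532_005
        losses = {"test access": 0.05, "diagnosis": 0.13,
                  "treatment initiation": 0.12, "treatment completion": 0.17}
        remaining = burden
        for step, frac in losses.items():
            lost = burden * frac            # losses are fractions of the initial burden
            remaining -= lost
            print(f"{step:22s} -{lost:9.0f} -> {remaining:9.0f}")
        print(f"successfully treated: {remaining / burden:.0%}")  # 53%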

  3. Discovery through maps: Exploring real-world applications of ...

    EPA Pesticide Factsheets

    Background/Question/Methods U.S. EPA’s EnviroAtlas provides a collection of interactive tools and resources for exploring ecosystem goods and services. The purpose of EnviroAtlas is to provide better access to consistently derived ecosystem and socio-economic data to facilitate decision-making while also providing data for research and education. EnviroAtlas tools and resources are well-suited for educational use, as they encourage systems thinking, cover a broad range of topics, are freely available, and do not require specialized software to use. Using EnviroAtlas requires only a computer and an internet connection, making it a useful tool for community planning, education, and decision-making at multiple scales. To help users understand how EnviroAtlas resources may be used in different contexts, we provide example use cases. These use cases highlight a real-world issue which EnviroAtlas data, in conjunction with other available data or resources, may be used to address. Here we present three use cases that approach incorporating ecosystem services in decision-making in different decision contexts: 1) to minimize the negative impacts of excessive summer heat due to urbanization in Portland, Oregon; 2) to explore selecting a pilot route for a community greenway; and 3) to reduce nutrient loading through a regional manure transport program. Results/Conclusions EnviroAtlas use cases provide step-by-step approaches for using maps and data to address real-world issues.

  4. Simple image-based no-wash method for quantitative detection of surface expressed CFTR

    PubMed Central

    Larsen, Mads Breum; Hu, Jennifer; Frizzell, Raymond A.; Watkins, Simon C.

    2016-01-01

    Cystic fibrosis (CF) is the most common lethal genetic disease among Caucasians. It is caused by mutations in the CF Transmembrane Conductance Regulator (CFTR) gene, which encodes an apical membrane anion channel that is required for regulating the volume and composition of epithelial secretions. The most common CFTR mutation, present on at least one allele in >90% of CF patients, deletes phenylalanine at position 508 (F508del), which causes the protein to misfold. Endoplasmic reticulum (ER) quality control elicits the degradation of mutant CFTR, compromising its trafficking to the epithelial cell apical membrane. The absence of functional CFTR leads to depletion of airway surface liquid, impaired clearance of mucus and bacteria from the lung, and predisposes to recurrent infections. Ultimately, respiratory failure results from inflammation and bronchiectasis. Although high throughput screening has identified small molecules that can restore the anion transport function of F508del CFTR, they correct less than 15% of WT CFTR activity, yielding insufficient clinical benefit. To date, most primary CF drug discovery assays have employed measurements of CFTR’s anion transport function, a method that depends on the recruitment of a functional CFTR to the cell surface, involves multiple wash steps, and relies on a signal that saturates rapidly. Screening efforts have also included assays for detection of extracellularly HA-tagged or HRP-tagged CFTR, which require multiple washing steps. We have recently developed tools and cell lines that report the correction of mutant CFTR trafficking by currently available small molecules, and have extended this assay to the 96-well format. This new and simple no-wash assay of F508del CFTR at the cell surface may permit the discovery of more efficacious drugs, and hopefully thereby prevent the catastrophic effects of this disease. In addition, the modular design of this platform should make it useful for other diseases where loss-of-function results from folding and/or trafficking defects in membrane proteins. PMID:26361332

  5. Reinventing User Applications for Mission Control

    NASA Technical Reports Server (NTRS)

    Trimble, Jay Phillip; Crocker, Alan R.

    2010-01-01

    In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications being used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming, or direct support and approval from a platform services organization. Developers must be able to build MCT components using industry standard languages and tools. Each component of MCT must be interoperable with other components, regardless of which organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a meta-data view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks. Users therefore do not have to work across multiple applications; the traditional boundaries between heterogeneous applications disappear, leaving open the possibility of new operations concepts that are not constrained by the traditional applications paradigm.
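
    The composition model described above can be sketched in a few lines. The names and methods below are hypothetical, not the MCT API; they only illustrate the idea that one domain object (a telemetry point) is reused inside different containers and rendered through interchangeable views.

        class UserObject:
            # Hypothetical composable user object with swappable views.
            def __init__(self, name, views):
                self.name, self.views, self.children = name, list(views), []

            def compose(self, child):
                self.children.append(child)  # the same object may live in many containers
                return self

        telemetry = UserObject("cabin_pressure", views=["plot", "alphanumeric"])
        display = UserObject("monitoring_display", views=["panel"]).compose(telemetry)
        procedure_step = UserObject("step_3_verify", views=["procedure"]).compose(telemetry)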

  6. Evaluation of a shared-work program for reducing assistance provided to supported workers with severe multiple disabilities.

    PubMed

    Parsons, Marsha B; Reid, Dennis H; Green, Carolyn W; Browning, Leah B; Hensley, Mary B

    2002-01-01

    Concern has been expressed recently regarding the need to enhance the performance of individuals with highly significant disabilities in community-based, supported jobs. We evaluated a shared-work program for reducing job coach assistance provided to three workers with severe multiple disabilities in a publishing company. Following systematic observations of the assistance provided as each worker worked on entire job tasks, steps comprising the tasks were then re-assigned across workers. The re-assignment involved assigning each worker only those task steps for which the respective worker received the least amount of assistance (e.g., re-assigning steps that a worker could not complete due to physical disabilities), and ensuring the entire tasks were still completed by combining steps performed by all three workers. The shared-work program was accompanied by reductions in job coach assistance provided to each worker. Work productivity of the supported workers initially decreased but then increased to a level equivalent to the higher ranges of baseline productivity. These results suggested that the shared-work program appears to represent a viable means of enhancing supported work performance of people with severe multiple disabilities in some types of community jobs. Future research needs discussed focus on evaluating shared-work approaches with other jobs, and developing additional community work models specifically for people with highly significant disabilities.

  7. Step-Growth Polymerization.

    ERIC Educational Resources Information Center

    Stille, J. K.

    1981-01-01

    Following a comparison of chain-growth and step-growth polymerization, this article focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
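
    The "requirements for high molecular weight" mentioned here are usually made quantitative with the Carothers relation, a standard result not specific to this article: the number-average degree of polymerization depends only on the extent of reaction p.

        % Carothers relation for linear step-growth polymerization
        \[
          \bar{X}_n \;=\; \frac{1}{1-p},
          \qquad \text{e.g. } p = 0.99 \;\Rightarrow\; \bar{X}_n = 100 .
        \]

    This is why step-growth systems demand very high conversion (and near-exact stoichiometric balance) before useful molecular weights are reached.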

  8. Investigation of the Evolution of Crystal Size and Shape during Temperature Cycling and in the Presence of a Polymeric Additive Using Combined Process Analytical Technologies

    PubMed Central

    2017-01-01

    Crystal size and shape can be manipulated to enhance the qualities of the final product. In this work the steady-state shape and size of succinic acid crystals, with and without a polymeric additive (Pluronic P123), at 350 mL scale is reported. The effect of the amplitude of cycles as well as the heating/cooling rates is described, and convergent cycling (direct nucleation control) is compared to static cycling. The results show that the shape of succinic acid crystals changes from plate- to diamond-like after multiple cycling steps, and that the time required for this morphology change to occur is strongly related to the type of cycling. Addition of the polymer is shown to affect both the final shape of the crystals and the time needed to reach size and shape steady-state conditions. It is shown how this phenomenon can be used to improve the design of the crystallization step in order to achieve more efficient downstream operations and, in general, to help optimize the whole manufacturing process. PMID:28867966

  9. Harmonic source wavefront aberration correction for ultrasound imaging

    PubMed Central

    Dianis, Scott W.; von Ramm, Olaf T.

    2011-01-01

    A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions of the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step that leads to improved clinical images. PMID:21303031

  10. Plasma Jet Printing and in Situ Reduction of Highly Acidic Graphene Oxide.

    PubMed

    Dey, Avishek; Krishnamurthy, Satheesh; Bowen, James; Nordlund, Dennis; Meyyappan, M; Gandhiraman, Ram P

    2018-05-23

    Miniaturization of electronic devices and the advancement of the Internet of Things pose exciting challenges to develop technologies for patterned deposition of functional nanomaterials. Printed and flexible electronic devices and energy storage devices can be embedded onto clothing or other flexible surfaces. Graphene oxide (GO) has gained much attention in printed electronics due to its solution processability, robustness, and high electrical conductivity in the reduced state. Here, we introduce an approach to print GO films from highly acidic suspensions with in situ reduction using an atmospheric pressure plasma jet. Low-temperature plasma of a He and H2 mixture was used successfully to reduce a highly acidic GO suspension (pH < 2) in situ during deposition. This technique overcomes the multiple intermediate steps required to increase the conductivity of deposited GO. X-ray spectroscopic studies confirmed that the reaction intermediates and the concentration of oxygen functionalities bonded to GO have been reduced significantly by this approach without any additional steps. Moreover, the reduced GO films showed enhanced conductivity. Hence, this technique has a strong potential for printing conducting patterns of GO for a range of large-scale applications.

  11. Automation in biological crystallization.

    PubMed

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  12. Automation in biological crystallization

    PubMed Central

    Shaw Stewart, Patrick; Mueller-Dieckmann, Jochen

    2014-01-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given. PMID:24915074

  13. Ergonomic analyses within the French transport and logistics sector: first steps towards a new "act elsewhere" prevention approach.

    PubMed

    Wioland, Liên

    2013-10-01

    Statistics from the French Employee National Health Insurance Fund indicate high accident levels in the transport sector. This study represents initial thinking on a new approach to transport sector prevention based on the assumption that a work situation could be improved by acting on another interconnected work situation. Ergonomic analysis of two connected work situations, involving the road haulage drivers and cross-docking platform employees, was performed to test this assumption. Our results show that drivers are exposed to a number of identified risks, but their multiple tasks raise the question of activity intensification. The conditions, under which the drivers will perform their work and take to the road, are partly determined by the quality and organisation of the platform with which they interact. We make a number of recommendations (e.g. changing handling equipment, re-appraising certain jobs) to improve platform organisation and employee working conditions with the aim of also improving driver conditions. These initial steps in this prevention approach appear promising, but more detailed investigation is required. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Interactions of double patterning technology with wafer processing, OPC and design flows

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Cork, Chris; Miloslavsky, Alex; Luk-Pat, Gerry; Barnes, Levi; Hapli, John; Lewellen, John; Rollins, Greg; Wiaux, Vincent; Verhaegen, Staf

    2008-03-01

    Double patterning technology (DPT) is one of the main options for printing logic devices with half-pitch less than 45 nm, and flash and DRAM memory devices with half-pitch less than 40 nm. DPT methods decompose the original design intent into two individual masking layers, which are each patterned using single exposures and existing 193 nm lithography tools. The results of the individual patterning layers combine to re-create the design intent pattern on the wafer. In this paper we study interactions of DPT with lithography, mask synthesis and physical design flows. Double exposure and etch patterning steps create complexity for both process and design flows. DPT decomposition is a critical software step which will be performed in physical design and also in mask synthesis. Decomposition includes cutting (splitting) of original design intent polygons into multiple polygons where required, and coloring of the resulting polygons. We evaluate the ability to meet key physical design goals, such as reducing circuit area, minimizing rework, ensuring DPT compliance, guaranteeing patterning robustness on individual layer targets, ensuring symmetric wafer results, and creating uniform wafer density for the individual patterning layers.
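
    The coloring half of decomposition is, at its core, a two-coloring of the conflict graph whose edges join features printed closer than the single-exposure pitch; an odd cycle means no legal assignment exists and a polygon must be cut. The sketch below is a generic breadth-first two-coloring (a simplified illustration, not a production decomposition engine), assuming a symmetric adjacency mapping with an entry for every polygon.

        from collections import deque

        def color_dpt(conflicts):
            # Assign each polygon to mask 0 or mask 1; return None when an
            # odd cycle makes the layout DPT-incompliant as drawn.
            color = {}
            for start in conflicts:
                if start in color:
                    continue
                color[start] = 0
                queue = deque([start])
                while queue:
                    u = queue.popleft()
                    for v in conflicts[u]:
                        if v not in color:
                            color[v] = 1 - color[u]
                            queue.append(v)
                        elif color[v] == color[u]:
                            return None  # odd cycle: requires cutting
            return color

    For example, color_dpt({"a": ["b"], "b": ["a", "c"], "c": ["b"]}) puts a and c on one mask and b on the other, while adding a conflict between a and c would close an odd triangle and force a cut.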

  15. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1994-01-01

    The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
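
    The best-known unsplit, explicit, two-step finite-difference schemes are of the MacCormack predictor-corrector family; the sketch below applies one such update to 1D linear advection on a periodic grid as an illustration of the two-step structure, and is not the NASCRIN solver itself.

        import numpy as np

        def two_step_update(u, c, dx, dt):
            # Predictor: forward difference; corrector: backward difference
            # on the predicted field, averaged with the old field. Stable for
            # u_t + c u_x = 0 when the Courant number r = c*dt/dx satisfies r <= 1.
            r = c * dt / dx
            pred = u - r * (np.roll(u, -1) - u)
            return 0.5 * (u + pred - r * (pred - np.roll(pred, 1)))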

  16. Antigen Masking During Fixation and Embedding, Dissected

    PubMed Central

    Scalia, Carla Rossana; Boi, Giovanna; Bolognesi, Maddalena Maria; Riva, Lorella; Manzoni, Marco; DeSmedt, Linde; Bosisio, Francesca Maria; Ronchi, Susanna; Leone, Biagio Eugenio; Cattoretti, Giorgio

    2016-01-01

    Antigen masking in routinely processed tissue is a poorly understood process caused by multiple factors. We sought to dissect the effect on antigenicity of each step of processing by using frozen sections as proxies of the whole tissue. An equivalent extent of antigen masking occurs across variable fixation times at room temperature. Most antigens benefit from longer fixation times (>24 hr) for optimal detection after antigen retrieval (AR; for example, Ki-67, bcl-2, ER). The transfer to a graded alcohol series results in an enhanced staining effect, reproduced by treating the sections with detergents, possibly because of better access of the polymeric immunohistochemical detection system to tissue structures. A second round of masking occurs upon entering the clearing agent, mostly at the paraffin embedding step. This may depend on the removal of non-freezable water. AR fully reverses the masking due both to the fixation time and the paraffin embedding. AR itself destroys some epitopes which do not survive routine processing. Processed frozen sections are a tool to investigate fixation and processing requirements for antigens in routine specimens. PMID:27798289

  17. One Giant Leap for Categorizers: One Small Step for Categorization Theory

    PubMed Central

    Smith, J. David; Ell, Shawn W.

    2015-01-01

    We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587

  18. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
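
    The direct-method SSA realization that each GPU thread would execute can be written compactly. The sketch below is a plain CPU version with per-step time-course recording (the function the authors accelerate); the model and its propensities are supplied by the caller.

        import numpy as np

        def ssa(x0, stoich, propensities, t_end, seed=0):
            # Gillespie direct method: draw the waiting time from the total
            # propensity, pick a reaction in proportion to its propensity,
            # apply its stoichiometry, and record the state at every step.
            rng = np.random.default_rng(seed)
            t = 0.0
            x = np.array(x0, dtype=float)
            times, states = [t], [x.copy()]
            while t < t_end:
                a = propensities(x)
                a0 = a.sum()
                if a0 <= 0.0:
                    break                      # no reaction can fire
                t += rng.exponential(1.0 / a0)
                j = rng.choice(len(a), p=a / a0)
                x += stoich[j]
                times.append(t)
                states.append(x.copy())        # per-step time-course recording
            return np.array(times), np.array(states)

    For a toy decay A -> 0 at rate 0.1*A, ssa([100], np.array([[-1]]), lambda x: np.array([0.1 * x[0]]), 50.0) returns one realization; statistics then require many such runs, which is exactly the parallelism the GPU exploits.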

  19. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  20. A Next-Generation Sequencing Strategy for Evaluating the Most Common Genetic Abnormalities in Multiple Myeloma.

    PubMed

    Jiménez, Cristina; Jara-Acevedo, María; Corchete, Luis A; Castillo, David; Ordóñez, Gonzalo R; Sarasquete, María E; Puig, Noemí; Martínez-López, Joaquín; Prieto-Conde, María I; García-Álvarez, María; Chillón, María C; Balanzategui, Ana; Alcoceba, Miguel; Oriol, Albert; Rosiñol, Laura; Palomera, Luis; Teruel, Ana I; Lahuerta, Juan J; Bladé, Joan; Mateos, María V; Orfão, Alberto; San Miguel, Jesús F; González, Marcos; Gutiérrez, Norma C; García-Sanz, Ramón

    2017-01-01

    Identification and characterization of genetic alterations are essential for diagnosis of multiple myeloma and may guide therapeutic decisions. Currently, genomic analysis of myeloma to cover the diverse range of alterations with prognostic impact requires fluorescence in situ hybridization (FISH), single nucleotide polymorphism arrays, and sequencing techniques, which are costly and labor intensive and require large numbers of plasma cells. To overcome these limitations, we designed a targeted-capture next-generation sequencing approach for one-step identification of IGH translocations, V(D)J clonal rearrangements, the IgH isotype, and somatic mutations to rapidly identify risk groups and specific targetable molecular lesions. Forty-eight newly diagnosed myeloma patients were tested with the panel, which included IGH and six genes that are recurrently mutated in myeloma: NRAS, KRAS, HRAS, TP53, MYC, and BRAF. We identified 14 of 17 IGH translocations previously detected by FISH and three confirmed translocations not detected by FISH, with the additional advantage of breakpoint identification, which can be used as a target for evaluating minimal residual disease. IgH subclass and V(D)J rearrangements were identified in 77% and 65% of patients, respectively. Mutation analysis revealed the presence of missense protein-coding alterations in at least one of the evaluated genes in 16 of 48 patients (33%). This method may represent a time- and cost-effective diagnostic method for the molecular characterization of multiple myeloma. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  1. Study Behaviors and USMLE Step 1 Performance: Implications of a Student Self-Directed Parallel Curriculum.

    PubMed

    Burk-Rafel, Jesse; Santen, Sally A; Purkiss, Joel

    2017-11-01

    To determine medical students' study behaviors when preparing for the United States Medical Licensing Examination (USMLE) Step 1, and how these behaviors are associated with Step 1 scores when controlling for likely covariates. The authors distributed a study-behaviors survey in 2014 and 2015 at their institution to two cohorts of medical students who had recently taken Step 1. Demographic and academic data were linked to responses. Descriptive statistics, bivariate correlations, and multiple linear regression analyses were performed. Of 332 medical students, 274 (82.5%) participated. Most students (n = 211; 77.0%) began studying for Step 1 during their preclinical curriculum, increasing their intensity during a protected study period during which they averaged 11.0 hours studying per day (standard deviation [SD] 2.1) over a period of 35.3 days (SD 6.2). Students used numerous third-party resources, including reading an exam-specific 700-page review book on average 2.1 times (SD 0.8) and completing an average of 3,597 practice multiple-choice questions (SD 1,611). Initiating study prior to the designated study period, increased review book usage, and attempting more practice questions were all associated with higher Step 1 scores, even when controlling for Medical College Admission Test scores, preclinical exam performance, and self-identified score goal (adjusted R² = 0.56, P < .001). Medical students at one public institution engaged in a self-directed, "parallel" Step 1 curriculum using third-party study resources. Several study behaviors were associated with improved USMLE Step 1 performance, informing both institutional- and student-directed preparation for this high-stakes exam.
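
    The regression reported above has a simple shape that can be sketched as follows; the data here are synthetic stand-ins generated from the summary statistics in the abstract (means and SDs), the coefficients are invented, and statsmodels is assumed to be available.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 274                                  # number of respondents
      mcat = rng.normal(31, 3, n)              # covariate (synthetic)
      questions = rng.normal(3597, 1611, n)    # practice questions attempted
      book_reads = rng.normal(2.1, 0.8, n)     # review-book read-throughs
      step1 = (200 + 1.2 * mcat + 0.004 * questions
               + 3.0 * book_reads + rng.normal(0, 8, n))

      X = sm.add_constant(np.column_stack([mcat, questions, book_reads]))
      fit = sm.OLS(step1, X).fit()
      print(fit.rsquared_adj)                  # analogue of the adjusted R²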

  2. Sterol homeostasis requires regulated degradation of squalene monooxygenase by the ubiquitin ligase Doa10/Teb4

    PubMed Central

    Foresti, Ombretta; Ruggiano, Annamaria; Hannibal-Bach, Hans K; Ejsing, Christer S; Carvalho, Pedro

    2013-01-01

    Sterol homeostasis is essential for the function of cellular membranes and requires feedback inhibition of HMGR, a rate-limiting enzyme of the mevalonate pathway. As HMGR acts at the beginning of the pathway, its regulation affects the synthesis of sterols and of other essential mevalonate-derived metabolites, such as ubiquinone or dolichol. Here, we describe a novel, evolutionarily conserved feedback system operating at a sterol-specific step of the mevalonate pathway. This involves the sterol-dependent degradation of squalene monooxygenase mediated by the yeast Doa10 or mammalian Teb4, a ubiquitin ligase implicated in a branch of the endoplasmic reticulum (ER)-associated protein degradation (ERAD) pathway. Since the other branch of ERAD is required for HMGR regulation, our results reveal a fundamental role for ERAD in sterol homeostasis, with the two branches of this pathway acting together to control sterol biosynthesis at different levels and thereby allowing independent regulation of multiple products of the mevalonate pathway. DOI: http://dx.doi.org/10.7554/eLife.00953.001 PMID:23898401

  3. Development of practical high temperature superconducting wire for electric power application

    NASA Technical Reports Server (NTRS)

    Hawsey, Robert A.; Sokolowski, Robert S.; Haldar, Pradeep; Motowidlo, Leszek R.

    1995-01-01

    The technology of high temperature superconductivity has moved beyond mere scientific curiosity into the manufacturing environment. Single lengths of multifilamentary wire are now produced that are over 200 meters long and that carry over 13 amperes at 77 K. Short-sample critical current densities approach 5 x 10^4 A/sq cm at 77 K. Conductor requirements such as high critical current density in a magnetic field, strain-tolerant sheathing materials, and other engineering properties are addressed. A new process for fabricating round BSCCO-2212 wire has produced wires with critical current densities as high as 165,000 A/sq cm at 4.2 K and 53,000 A/sq cm at 40 K. This process eliminates the costly, multiple pressing and rolling steps that are commonly used to develop texture in the wires. New multifilamentary wires with strengthened sheathing materials have shown improved yield strengths up to a factor of five better than those made with pure silver. Many electric power devices require the wire to be formed into coils for production of strong magnetic fields. Requirements for coils and magnets for electric power applications are described.

  4. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    NASA Astrophysics Data System (ADS)

    Jones, Charles Rees

    2011-12-01

    In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real-time---the complete three dimensional dynamics of the "beam-like" structure which includes the three "stiff" degrees of freedom of transverse and dilational shear. Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step method with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.
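
    The numerical core described above, an explicit single-step update whose right-hand sides cost only multiplications and additions, can be sketched in a few lines of Python; the linear decay used as the right-hand side is a placeholder, not Antman's rod equations.

      import numpy as np

      def explicit_step(y, t, dt, rhs):
          """Forward Euler, the simplest explicit single-step method."""
          return y + dt * rhs(y, t)

      def rhs(y, t):
          # Stand-in for the five uncoupled rod equations; note that
          # evaluating it needs only multiplication and addition.
          return -0.5 * y

      y, t, dt = np.ones(5), 0.0, 1e-4         # five state variables
      for _ in range(1000):
          y = explicit_step(y, t, dt, rhs)
          t += dt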

  5. Global analysis of gene expression reveals mRNA superinduction is required for the inducible immune response to a bacterial pathogen

    PubMed Central

    Barry, Kevin C; Ingolia, Nicholas T; Vance, Russell E

    2017-01-01

    The inducible innate immune response to infection requires a concerted process of gene expression that is regulated at multiple levels. Most global analyses of the innate immune response have focused on transcription induced by defined immunostimulatory ligands, such as lipopolysaccharide. However, the response to pathogens involves additional complexity, as pathogens interfere with virtually every step of gene expression. How cells respond to pathogen-mediated disruption of gene expression to nevertheless initiate protective responses remains unclear. We previously discovered that a pathogen-mediated blockade of host protein synthesis provokes the production of specific pro-inflammatory cytokines. It remains unclear how these cytokines are produced despite the global pathogen-induced block of translation. We addressed this question by using parallel RNAseq and ribosome profiling to characterize the response of macrophages to infection with the intracellular bacterial pathogen Legionella pneumophila. Our results reveal that mRNA superinduction is required for the inducible immune response to a bacterial pathogen. DOI: http://dx.doi.org/10.7554/eLife.22707.001 PMID:28383283

  6. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, the improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: (1) one-step solute elution, in which the separation is conducted so that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with solvent systems hexane/ethyl acetate/methanol/water. The experimental results are compared to the predictions of the theory. A good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
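
    A crude equilibrium-stage analogue of the cell model (an editor's sketch; the partition coefficient, cell count and step durations are arbitrary) shows the shuttle idea: the mobile phase alternates direction between steps of different duration, and mass leaving either column end represents eluted solute.

      import numpy as np

      def transfer(u, l, K, direction):
          """Re-equilibrate each cell (K = upper/lower amount ratio), then
          move the mobile phase one cell in the current direction."""
          total = u + l
          u, l = total * K / (1 + K), total / (1 + K)
          if direction > 0:
              u = np.concatenate(([0.0], u[:-1]))   # upper phase flows forward
          else:
              l = np.concatenate((l[1:], [0.0]))    # lower phase flows back
          return u, l

      n = 50
      u, l = np.zeros(n), np.zeros(n)
      l[0] = 1.0                                    # impulse sample loading
      # Alternating steps of variable duration (number of transfers):
      for direction, m in [(+1, 30), (-1, 20), (+1, 30), (-1, 20)]:
          for _ in range(m):
              u, l = transfer(u, l, K=2.0, direction=direction)
      print(1.0 - (u.sum() + l.sum()))              # fraction eluted so far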

  7. A new method for registration of heterogeneous sensors in a dimensional measurement system

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Zhong; Fu, Luhua; Qu, Xinghua; Zhang, Heng; Liu, Changjie

    2017-10-01

    Registration of multiple sensors is a basic step in multi-sensor dimensional or coordinate measuring systems before any measurement. In most cases, a common standard is measured by all sensors, and this may work well for general registration of multiple homogeneous sensors. However, when inhomogeneous sensors detect a common standard, it is usually very difficult to obtain the same information, because of the different working principles of the sensors. In this paper, a new method called multiple steps registration is proposed to register two sensors: a video camera sensor (VCS) and a tactile probe sensor (TPS). In this method, the two sensors measure two separate standards: a chrome circle on a reticle and a reference sphere with a constant distance between them, fixed on a steel plate. The VCS captures only the circle and the TPS touches only the sphere. Both simulations and real experiments demonstrate that the proposed method is robust and accurate in the registration of multiple inhomogeneous sensors in a dimensional measurement system.

  8. Performance evaluation of the multiple root node approach to the Rete pattern matcher for production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Gaudiot, J.-L.

    1991-12-31

    Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate the possible improvement on the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple root node-based production system interpreter is presented to investigate its relative algorithmic behavior over the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential machine Sun 4/490 by using both interpreters and various experimental results are presented. Their investigation indicates that the multiple root node-based production system interpreter would give a maximum of up to 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.
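
    The gain from multiple root nodes can be caricatured in a toy matcher (illustrative Python only; it omits the alpha/beta networks of a real Rete implementation): with one root per working-memory-element class, an element is tested only against the conditions filed under its own class.

      from collections import defaultdict

      class MultiRootMatcher:
          def __init__(self):
              self.roots = defaultdict(list)        # class -> [(test, rule)]

          def add_condition(self, wme_class, test, rule):
              self.roots[wme_class].append((test, rule))

          def insert(self, wme_class, attrs):
              # Only conditions under this class's root node are evaluated.
              return [rule for test, rule in self.roots[wme_class]
                      if test(attrs)]

      m = MultiRootMatcher()
      m.add_condition("goal", lambda a: a["status"] == "active", "P1")
      m.add_condition("block", lambda a: a["color"] == "red", "P2")
      print(m.insert("block", {"color": "red"}))    # -> ['P2']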

  9. Airspace Operations Demo Functional Requirements Matrix

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Flight IPT assessed the reasonableness of demonstrating each of the Access 5 Step 1 functional requirements. The functional requirements listed in this matrix are from the September 2005 release of the Access 5 Functional Requirements Document. The demonstration mission considered was a notional Western US mission (WUS). The conclusion of the assessment is that 90% of the Access 5 Step 1 functional requirements can be demonstrated using the notional Western US mission.

  10. Determination of enzyme thermal parameters for rational enzyme engineering and environmental/evolutionary studies.

    PubMed

    Lee, Charles K; Monk, Colin R; Daniel, Roy M

    2013-01-01

    Of the two independent processes by which enzymes lose activity with increasing temperature, irreversible thermal inactivation and rapid reversible equilibration with an inactive form, the latter is only describable by the Equilibrium Model. Any investigation of the effect of temperature upon enzymes, a mandatory step in rational enzyme engineering and study of enzyme temperature adaptation, thus requires determining the enzymes' thermodynamic parameters as defined by the Equilibrium Model. The necessary data for this procedure can be collected by carrying out multiple isothermal enzyme assays at 3-5°C intervals over a suitable temperature range. If the collected data meet requirements for V_max determination (i.e., if the enzyme kinetics are "ideal"), then the enzyme's Equilibrium Model parameters (ΔH_eq, T_eq, ΔG‡_cat, and ΔG‡_inact) can be determined using a freely available iterative model-fitting software package designed for this purpose. Although "ideal" enzyme reactions are required for determination of all four Equilibrium Model parameters, ΔH_eq, T_eq, and ΔG‡_cat can be determined from initial (zero-time) rates for most nonideal enzyme reactions, with substrate saturation being the only requirement.
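
    The fitting step can be sketched with scipy; the functional form below (transition-state-theory catalysis moderated by an active/inactive equilibrium) is the editor's paraphrase of the Equilibrium Model and should be checked against the original publications, and the rate data are simulated.

      import numpy as np
      from scipy.optimize import curve_fit

      R, kB, h = 8.314, 1.381e-23, 6.626e-34

      def v0(T, dG_cat, dH_eq, T_eq, E0=1e-9):
          """Assumed zero-time rate: V_max = k_cat*[E0]/(1 + K_eq)."""
          k_cat = (kB * T / h) * np.exp(-dG_cat / (R * T))
          K_eq = np.exp((dH_eq / R) * (1.0 / T_eq - 1.0 / T))
          return k_cat * E0 / (1.0 + K_eq)

      T = np.arange(283.0, 343.0, 4.0)         # assays at 4°C intervals
      rng = np.random.default_rng(2)
      rates = v0(T, 80e3, 150e3, 320.0) * (1 + 0.02 * rng.standard_normal(T.size))
      (dG_cat, dH_eq, T_eq), _ = curve_fit(v0, T, rates, p0=(75e3, 120e3, 315.0))
      print(dG_cat, dH_eq, T_eq)               # fitted ΔG‡_cat, ΔH_eq, T_eq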

  11. Implementation of Competency-Based Pharmacy Education (CBPE)

    PubMed Central

    Koster, Andries; Schalekamp, Tom; Meijerman, Irma

    2017-01-01

    Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422

  12. Performance monitoring and response conflict resolution associated with choice stepping reaction tasks.

    PubMed

    Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei

    2016-11-01

    Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.

  13. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2016-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953

  14. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2014-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25183162

  15. Simultaneous determination of creatinine and creatine in human serum by double-spike isotope dilution liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS).

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; Añón Álvarez, M Elena; Rodríguez, Felix; Menéndez, Francisco V Álvarez; García Alonso, J Ignacio

    2015-04-07

    This work describes the first multiple spiking isotope dilution procedure for organic compounds using (13)C labeling. A double-spiking isotope dilution method capable of correcting and quantifying the creatine-creatinine interconversion occurring during the analytical determination of both compounds in human serum is presented. The determination of serum creatinine may be affected by the interconversion between creatine and creatinine during sample preparation or by inefficient chemical separation of those compounds by solid phase extraction (SPE). The methodology is based on the use of differently labeled (13)C analogues ((13)C1-creatinine and (13)C2-creatine), the measurement of the isotopic distribution of creatine and creatinine by liquid chromatography-tandem mass spectrometry (LC-MS/MS) and the application of multiple linear regression. Five different lyophilized serum-based controls and two certified human serum reference materials (ERM-DA252a and ERM-DA253a) were analyzed to evaluate the accuracy and precision of the proposed double-spike LC-MS/MS method. The methodology was applied to study the creatine-creatinine interconversion during LC-MS/MS and gas chromatography-mass spectrometry (GC-MS) analyses and the separation efficiency of the SPE step required in the traditional gas chromatography-isotope dilution mass spectrometry (GC-IDMS) reference methods employed for the determination of serum creatinine. The analysis of real serum samples by GC-MS showed that creatine-creatinine separation by SPE can be a nonquantitative step that may induce creatinine overestimations up to 28% in samples containing high amounts of creatine. Also, a detectable conversion of creatine into creatinine was observed during sample preparation for LC-MS/MS. The developed double-spike LC-MS/MS improves the current state of the art for the determination of creatinine in human serum by isotope dilution mass spectrometry (IDMS), because corrections are made for all the possible errors derived from the sample preparation step.
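
    The multiple-linear-regression step amounts to an isotope-pattern deconvolution, which can be illustrated with ordinary least squares; all patterns and abundances below are invented numbers, not values from the paper.

      import numpy as np

      # Columns: isotope patterns of natural creatinine, the 13C1-creatinine
      # spike, and creatinine formed from the 13C2-creatine spike (invented).
      A = np.array([[0.95, 0.03, 0.01],
                    [0.04, 0.95, 0.04],
                    [0.01, 0.02, 0.95]])
      measured = np.array([0.60, 0.30, 0.10])   # observed M, M+1, M+2

      x, *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(x / x.sum())  # molar fractions; the creatine-spike-derived
                          # fraction quantifies creatine -> creatinine conversion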

  16. One-pot growth of two-dimensional lateral heterostructures via sequential edge-epitaxy

    NASA Astrophysics Data System (ADS)

    Sahoo, Prasana K.; Memaran, Shahriar; Xin, Yan; Balicas, Luis; Gutiérrez, Humberto R.

    2018-01-01

    Two-dimensional heterojunctions of transition-metal dichalcogenides have great potential for application in low-power, high-performance and flexible electro-optical devices, such as tunnelling transistors, light-emitting diodes, photodetectors and photovoltaic cells. Although complex heterostructures have been fabricated via the van der Waals stacking of different two-dimensional materials, the in situ fabrication of high-quality lateral heterostructures with multiple junctions remains a challenge. Transition-metal-dichalcogenide lateral heterostructures have been synthesized via single-step, two-step or multi-step growth processes. However, these methods lack the flexibility to control, in situ, the growth of individual domains. In situ synthesis of multi-junction lateral heterostructures avoids the multiple exchanges of sources or reactors that limited previous approaches, in which each exchange exposes the edges to ambient contamination, compromises the homogeneity of domain size in periodic structures, and results in long processing times. Here we report a one-pot synthetic approach, using a single heterogeneous solid source, for the continuous fabrication of lateral multi-junction heterostructures consisting of monolayers of transition-metal dichalcogenides. The sequential formation of heterojunctions is achieved solely by changing the composition of the reactive gas environment in the presence of water vapour. This enables selective control of the water-induced oxidation and volatilization of each transition-metal precursor, as well as its nucleation on the substrate, leading to sequential edge-epitaxy of distinct transition-metal dichalcogenides. Photoluminescence maps confirm the sequential spatial modulation of the bandgap, and atomic-resolution images reveal defect-free lateral connectivity between the different transition-metal-dichalcogenide domains within a single crystal structure. Electrical transport measurements revealed diode-like responses across the junctions. Our new approach offers greater flexibility and control than previous methods for continuous growth of transition-metal-dichalcogenide-based multi-junction lateral heterostructures. These findings could be extended to other families of two-dimensional materials, and establish a foundation for the development of complex and atomically thin in-plane superlattices, devices and integrated circuits.

  17. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS Dose Calculation Algorithm for VMAT and Sliding Window Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lu, B; Yan, G

    Purpose: To identify the weakness of dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc has the same segments and the original monitor units. Arc angles were less than ± 30°. Multiple arcs were delivered through consecutive and repetitive gantry operation, clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To identify dose errors caused in delivery of the VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the static and step-and-shoot delivery techniques. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of multi-leaf collimator leaves. Calculated doses using the adaptive convolution algorithm were compared with measured ones using distance-to-agreement and dose-difference criteria of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial arc delivery of the VMAT fields yielded a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The calculated doses using the SW technique showed a dose difference of over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished by using the suggested QA method. This partial arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.
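
    The 3 mm/3% comparison can be mimicked in one dimension with a composite test (a deliberately simplified stand-in for clinical gamma analysis, with an invented dose profile): a point passes if the calculated dose agrees within 3% at the point itself or at any point within the 3 mm window.

      import numpy as np

      def pass_rate(calc, meas, dx, dd=0.03, dta=3.0):
          win = int(round(dta / dx))            # DTA window in samples
          n_pass = 0
          for i, m in enumerate(meas):
              lo, hi = max(0, i - win), min(len(calc), i + win + 1)
              # The window includes i itself, so the plain dose-difference
              # test is subsumed by the distance-to-agreement search.
              if np.any(np.abs(calc[lo:hi] - m) <= dd * m):
                  n_pass += 1
          return n_pass / len(meas)

      x = np.linspace(0.0, 100.0, 101)          # 1 mm grid
      meas = np.exp(-((x - 50.0) / 20.0) ** 2)  # invented profile
      calc = 1.01 * meas                        # 1% systematic offset
      print(pass_rate(calc, meas, dx=1.0))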

  18. Multiscale Simulation and Modeling of Multilayer Heteroepitactic Growth of C60 on Pentacene.

    PubMed

    Acevedo, Yaset M; Cantrell, Rebecca A; Berard, Philip G; Koch, Donald L; Clancy, Paulette

    2016-03-29

    We apply multiscale methods to describe the strained growth of multiple layers of C60 on a thin film of pentacene. We study this growth in the presence of a monolayer pentacene step to compare our simulations to recent experimental studies by Breuer and Witte of submonolayer growth in the presence of monolayer steps. The molecular-level details of this organic semiconductor interface have ramifications on the macroscale structural and electronic behavior of this system and allow us to describe several unexplained experimental observations for this system. The growth of a C60 thin film on a pentacene surface is complicated by the differing crystal habits of the two component species, leading to heteroepitactical growth. In order to probe this growth, we use three computational methods that offer different approaches to coarse-graining the system and differing degrees of computational efficiency. We present a new, efficient reaction-diffusion continuum model for 2D systems whose results compare well with mesoscale kinetic Monte Carlo (KMC) results for submonolayer growth. KMC extends our ability to simulate multiple layers but requires a library of predefined rates for event transitions. Coarse-grained molecular dynamics (CGMD) circumvents KMC's need for predefined lattices, allowing defects and grain boundaries to provide a more realistic thin film morphology. For multilayer growth, in this particularly suitable candidate for coarse-graining, CGMD is a preferable approach to KMC. Combining the results from these three methods, we show that the lattice strain induced by heteroepitactical growth promotes 3D growth and the creation of defects in the first monolayer. The CGMD results are consistent with experimental results on the same system by Conrad et al. and by Breuer and Witte in which C60 aggregates change from a 2D structure at low temperature to 3D clusters along the pentacene step edges at higher temperatures.

  19. Mortality and morbidity in necrotizing pancreatitis managed on principles of step-up approach: 7 years experience from a single surgical unit.

    PubMed

    Aparna, Deshpande; Kumar, Sunil; Kamalkumar, Shukla

    2017-10-27

    To determine the percentage of patients with necrotizing pancreatitis (NP) requiring intervention and the types of interventions performed. Outcomes of patients managed with step-up necrosectomy were compared to those managed with direct necrosectomy. Operative mortality, overall mortality, morbidity and overall length of stay were determined. After institutional ethics committee clearance and waiver of consent, records of patients with pancreatitis were reviewed. After excluding patients as per criteria, epidemiologic and clinical data of patients with NP were noted. The treatment protocol was reviewed. Data of patients in whom the step-up approach was used were compared to those in whom it was not used. A total of 41 interventions were required in 39% of patients. About 60% of interventions targeted the pancreatic necrosis while the rest were required to deal with the complications of the necrosis. Image-guided percutaneous catheter drainage was done in 9 patients for infected necrosis, all of whom required further necrosectomy, and in 3 patients with sterile necrosis. Direct retroperitoneal or anterior necrosectomy was performed in 15 patients. The average time to first intervention was 19.6 d in the non-step-up group (range 11-36) vs 18.22 d in the step-up group (range 13-25). The average hospital stay was 33.3 d in the non-step-up group vs 38 d in the step-up group. The mortality in the step-up group was 0% (0/9) vs 13% (2/15) in the non-step-up group. Overall mortality was 10.3% while post-operative mortality was 8.3%. Average hospital stay was 22.25 d. Early conservative management plays an important role in the management of NP. In patients who require intervention, the approach used and the timing of intervention should be based upon the clinical condition and the local expertise available. Delaying intervention and using minimally invasive means when intervention is necessary are desirable. The step-up approach should be used whenever possible. Even when classical retroperitoneal catheter drainage is not feasible, there should be an attempt to follow the principles of the step-up technique to buy time. The outcomes of patients in the step-up group and the non-step-up group are comparable in our series. Interventions for bowel diversion, bypass and hemorrhage control should be done at the appropriate times.

  20. Quick and Easy Adaptations and Accommodations for Early Childhood Students

    ERIC Educational Resources Information Center

    Breitfelder, Leisa M.

    2008-01-01

    Research-based information is used to support the idea of the use of adaptations and accommodations for early childhood students who have varying disabilities. Multiple adaptations and accommodations are outlined. A step-by-step plan is provided on how to make specific adaptations and accommodations to fit the specific needs of early childhood…

  1. Stutter-Step Models of Performance in School

    ERIC Educational Resources Information Center

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  2. Discussion of "Simple design criterion for residual energy on embankment dam stepped spillways" by Stefan Felder and Hubert Chanson

    USDA-ARS?s Scientific Manuscript database

    Researchers from the University of Queensland and the University of New South Wales provided guidance to designers regarding the hydraulic performance of embankment dam stepped spillways. Their research compares a number of high-quality physical model data sets from multiple laboratories, emphasizing the variability ...

  3. Evidence-Based Assessment of Attention-Deficit/Hyperactivity Disorder: Using Multiple Sources of Information

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.

    2006-01-01

    In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…

  4. SRSF3 represses the expression of PDCD4 protein by coordinated regulation of alternative splicing, export and translation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seung Kuk; Jeong, Sunjoo, E-mail: sjsj@dankook.ac.kr

    2016-02-05

    Gene expression is regulated at multiple steps, such as transcription, splicing, export, degradation and translation. Considering diverse roles of SR proteins, we determined whether the tumor-related splicing factor SRSF3 regulates the expression of the tumor-suppressor protein, PDCD4, at multiple steps. As we have reported previously, knockdown of SRSF3 increased the PDCD4 protein level in SW480 colon cancer cells. More interestingly, here we showed that the alternative splicing and the nuclear export of minor isoforms of pdcd4 mRNA were repressed by SRSF3, but the translation step was unaffected. In contrast, only the translation step of the major isoform of pdcd4 mRNA was repressed by SRSF3. Therefore, overexpression of SRSF3 might be relevant to the repression of all isoforms of PDCD4 protein levels in most types of cancer cell. We propose that SRSF3 could act as a coordinator of the expression of PDCD4 protein via two mechanisms on two alternatively spliced mRNA isoforms.

  5. Orthogonal tandem catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lohr, Tracy L.; Marks, Tobin J.

    2015-05-20

    Tandem catalysis is a growing field that is beginning to yield important scientific and technological advances toward new and more efficient catalytic processes. 'One-pot' tandem reactions, where multiple catalysts and reagents combined in a single reaction vessel undergo a sequence of precisely staged catalytic steps, are highly attractive from the standpoint of reducing both waste and time. Orthogonal tandem catalysis is a subset of one-pot reactions in which more than one catalyst is used to promote two or more mechanistically distinct reaction steps. This Perspective summarizes and analyses some of the recent developments and successes in orthogonal tandem catalysis, with particular focus on recent strategies to address catalyst incompatibility. We also highlight the concept of thermodynamic leveraging by coupling multiple catalyst cycles to effect challenging transformations not observed in single-step processes, and to encourage application of this technique to energetically unfavourable or demanding reactions.

  6. Managing quality and compliance.

    PubMed

    McNeil, Alice; Koppel, Carl

    2015-01-01

    Critical care nurses assume vital roles in maintaining patient care quality. There are distinct facets to the process including standard setting, regulatory compliance, and completion of reports associated with these endeavors. Typically, multiple niche software applications are required and user interfaces are varied and complex. Although there are distinct quality indicators that must be tracked as well as a list of serious or sentinel events that must be documented and reported, nurses may not know the precise steps to ensure that information is properly documented and actually reaches the proper authorities for further investigation and follow-up actions. Technology advances have permitted the evolution of a singular software platform, capable of monitoring quality indicators and managing all facets of reporting associated with regulatory compliance.

  7. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
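
    The idea of regional parallelization can be sketched with fixed-size genomic chunks fanned out to worker processes; the chromosome sizes are GRCh38 values, and call_variants is a placeholder, not Churchill's deterministic pipeline.

      from multiprocessing import Pool

      CHROM_SIZES = {"chr1": 248_956_422, "chr2": 242_193_529}  # GRCh38

      def regions(chunk=60_000_000):
          """Yield similarly sized regions so worker load stays balanced."""
          for chrom, size in CHROM_SIZES.items():
              for start in range(0, size, chunk):
                  yield chrom, start, min(start + chunk, size)

      def call_variants(region):                # placeholder worker
          chrom, start, end = region
          return f"{chrom}:{start}-{end} processed"

      if __name__ == "__main__":
          with Pool(processes=4) as pool:
              for line in pool.imap_unordered(call_variants, regions()):
                  print(line)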

  8. Plant Endoplasmic Reticulum-Plasma Membrane Contact Sites.

    PubMed

    Wang, Pengwei; Hawes, Chris; Hussey, Patrick J

    2017-04-01

    The endoplasmic reticulum (ER) acts as a superhighway with multiple sideroads that connects the different membrane compartments including the ER to the plasma membrane (PM). ER-PM contact sites (EPCSs) are a common feature in eukaryotic organisms, but have not been studied well in plants owing to the lack of molecular markers and to the difficulty in resolving the EPCS structure using conventional microscopy. Recently, however, plant protein complexes required for linking the ER and PM have been identified. This is a further step towards understanding the structure and function of plant EPCSs. We highlight some recent studies in this field and suggest several hypotheses that relate to the possible function of EPCSs in plants. Copyright © 2016. Published by Elsevier Ltd.

  9. An Advice Mechanism for Heterogeneous Robot Teams

    NASA Astrophysics Data System (ADS)

    Daniluk, Steven

    The use of reinforcement learning for robot teams has enabled complex tasks to be performed, but at the cost of requiring a large amount of exploration. Exchanging information between robots in the form of advice is one method to accelerate performance improvements. This thesis presents an advice mechanism for robot teams that utilizes advice from heterogeneous advisers via a method guaranteeing convergence to an optimal policy. The presented mechanism has the capability to use multiple advisers at each time step, and decide when advice should be requested and accepted, such that the use of advice decreases over time. Additionally, collective, collaborative, and cooperative behavioural algorithms are integrated into a robot team architecture, to create a new framework that provides fault tolerance and modularity for robot teams.
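
    One plausible reading of such a mechanism in code (a generic sketch, not the thesis's algorithm): tabular Q-learning in which the probability of querying an adviser decays with the time step, so reliance on advice falls off as the learner's own policy improves.

      import random

      actions = [0, 1, 2]
      alpha, gamma = 0.1, 0.95
      Q = {}                                    # (state, action) -> value

      def q(s, a):
          return Q.get((s, a), 0.0)

      def choose_action(s, advisers, t):
          # Probability of taking advice decays with time step t.
          if advisers and random.random() < 1.0 / (1.0 + 0.01 * t):
              adviser = random.choice(advisers) # heterogeneous advisers
              return adviser(s)
          return max(actions, key=lambda a: q(s, a))

      def update(s, a, r, s_next):
          target = r + gamma * max(q(s_next, b) for b in actions)
          Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))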

  10. Simplified filtered Smith predictor for MIMO processes with multiple time delays.

    PubMed

    Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E

    2016-11-01

    This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved with step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies are used to illustrate the advantages of the proposed strategy if compared with the standard approach, which is based on the explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Robust one-Tube Ω-PCR Strategy Accelerates Precise Sequence Modification of Plasmids for Functional Genomics

    PubMed Central

    Chen, Letian; Wang, Fengpin; Wang, Xiaoyu; Liu, Yao-Guang

    2013-01-01

    Functional genomics requires vector construction for protein expression and functional characterization of target genes; therefore, a simple, flexible and low-cost molecular manipulation strategy will be highly advantageous for genomics approaches. Here, we describe an Ω-PCR strategy that enables multiple types of sequence modification, including precise insertion, deletion and substitution, in any position of a circular plasmid. Ω-PCR is based on an overlap extension site-directed mutagenesis technique, and is named for its characteristic Ω-shaped secondary structure during PCR. Ω-PCR can be performed either in two steps, or in one tube in combination with exonuclease I treatment. These strategies have wide applications for protein engineering, gene function analysis and in vitro gene splicing. PMID:23335613

  12. GRAPEVINE: Grids about anything by Poisson's equation in a visually interactive networking environment

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.; Mccann, Karen

    1992-01-01

    A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.
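
    The elliptic core of such a generator reduces, in the simplest (zero source term) case, to relaxing interior grid nodes toward the solution of Laplace's equations for the coordinates; the distorted test domain below is invented, and a production generator like the one described would add Poisson control terms and better boundary handling.

      import numpy as np

      def smooth_grid(x, y, iters=500):
          """Jacobi relaxation of Laplace's equations for the coordinates;
          boundary nodes are never written, so boundaries stay fixed."""
          for _ in range(iters):
              x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                      + x[1:-1, 2:] + x[1:-1, :-2])
              y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                      + y[1:-1, 2:] + y[1:-1, :-2])
          return x, y

      n = 21
      u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
      x = u.copy()
      y = v + 0.1 * np.sin(np.pi * u) * (1 - v)   # curved lower boundary
      x, y = smooth_grid(x, y)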

  13. Epoxy-based broadband antireflection coating for millimeter-wave optics.

    PubMed

    Rosen, Darin; Suzuki, Aritoki; Keating, Brian; Krantz, William; Lee, Adrian T; Quealy, Erin; Richards, Paul L; Siritanasak, Praween; Walker, William

    2013-11-20

    We have developed epoxy-based, broadband antireflection coatings for millimeter-wave astrophysics experiments with cryogenic optics. By using multiple-layer coatings where each layer steps in dielectric constant, we achieved low reflection over a wide bandwidth. We suppressed the reflection from an alumina disk to 10% over fractional bandwidths of 92% and 104% using two-layer and three-layer coatings, respectively. The dielectric constants of epoxies were tuned between 2.06 and 7.44 by mixing three types of epoxy and doping with strontium titanate powder required for the high dielectric mixtures. At 140 K, the band-integrated absorption loss in the coatings was suppressed to less than 1% for the two-layer coating, and below 10% for the three-layer coating.
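
    The effect of stepping the dielectric constant can be checked with the standard characteristic-matrix method at normal incidence. The layer indices follow from the permittivities quoted above (n = sqrt(epsilon)); the substrate permittivity, design wavelength and quarter-wave thicknesses are the editor's illustrative assumptions.

      import numpy as np

      def reflectance(ns, ds, n_in, n_sub, wl):
          """Normal-incidence reflectance of a dielectric stack
          via the characteristic-matrix method."""
          M = np.eye(2, dtype=complex)
          for n, d in zip(ns, ds):
              phi = 2.0 * np.pi * n * d / wl
              M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                                [1j * n * np.sin(phi), np.cos(phi)]])
          B, C = M @ np.array([1.0, n_sub])
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r) ** 2

      n1, n2 = np.sqrt(2.06), np.sqrt(7.44)     # epsilons from the abstract
      n_sub = np.sqrt(9.8)                      # assumed alumina substrate
      wl0 = 2.0                                 # assumed design wavelength (mm)
      d1, d2 = wl0 / (4 * n1), wl0 / (4 * n2)   # quarter-wave thicknesses
      for wl in (1.4, 2.0, 2.6):
          print(wl, reflectance([n1, n2], [d1, d2], 1.0, n_sub, wl))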

  14. Relationships between trunk performance, gait and postural control in persons with multiple sclerosis.

    PubMed

    Freund, Jane E; Stetts, Deborah M; Vallabhajosula, Srikant

    2016-06-30

    Multiple sclerosis (MS) is a chronic progressive disease of the central nervous system. Compared to healthy individuals, persons with multiple sclerosis (PwMS) have increased postural sway in quiet stance, decreased gait speed and increased fall incidence. Trunk performance has been implicated in postural control, gait dysfunction, and fall prevention in older adults. However, the relationship of trunk performance to postural control and gait has not been adequately studied in PwMS. To compare trunk muscle structure and performance in PwMS to healthy age- and gender-matched controls (HC); to determine the effects of isometric trunk endurance testing on postural control in both populations; and to determine the relationship of trunk performance with postural control, gait and step activity in PwMS. Fifteen PwMS and HC completed ultrasound imaging of trunk muscles, 10 m walk test, isometric trunk endurance tests, and postural sway test. Participants wore a step activity monitor for 7 days. PwMS had worse isometric trunk endurance compared to HC. PwMS trunk flexion endurance negatively correlated to several postural control measures and positively correlated to gait speed and step activity. Clinicians should consider evaluation and interventions directed at impaired trunk endurance in PwMS.

  15. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
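
    The splitting itself is easy to state in code: a reference-style r-RESPA step evaluates the slow force once per outer step and integrates the fast force with velocity-Verlet substeps. The toy fast/slow force pair below is invented; the fragment- and range-separation decompositions described above are what make such a split possible for ab initio forces.

      import numpy as np

      def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
          """One outer step: slow-force half-kicks at dt, fast force
          integrated with n_inner velocity-Verlet substeps of dt/n_inner."""
          v = v + 0.5 * dt * f_slow(x) / m
          h = dt / n_inner
          for _ in range(n_inner):
              v = v + 0.5 * h * f_fast(x) / m
              x = x + h * v
              v = v + 0.5 * h * f_fast(x) / m
          v = v + 0.5 * dt * f_slow(x) / m
          return x, v

      f_fast = lambda x: -100.0 * x             # stiff (fast) force
      f_slow = lambda x: -1.0 * x               # soft (slow) force
      x, v = np.array([1.0]), np.array([0.0])
      for _ in range(1000):
          x, v = respa_step(x, v, 1.0, f_fast, f_slow, dt=0.05, n_inner=5)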

  16. Habitat continuity and stepping-stone oceanographic distances explain population genetic connectivity of the brown alga Cystoseira amentacea.

    PubMed

    Buonomo, Roberto; Assis, Jorge; Fernandes, Francisco; Engelen, Aschwin H; Airoldi, Laura; Serrão, Ester A

    2017-02-01

    Effective predictive and management approaches for species occurring in a metapopulation structure require good understanding of interpopulation connectivity. In this study, we ask whether population genetic structure of marine species with fragmented distributions can be predicted by stepping-stone oceanographic transport and habitat continuity, using as model an ecosystem-structuring brown alga, Cystoseira amentacea var. stricta. To answer this question, we analysed the genetic structure and estimated the connectivity of populations along discontinuous rocky habitat patches in southern Italy, using microsatellite markers at multiple scales. In addition, we modelled the effect of rocky habitat continuity and ocean circulation on gene flow by simulating Lagrangian particle dispersal based on ocean surface currents allowing multigenerational stepping-stone dynamics. Populations were highly differentiated, at scales from few metres up to thousands of kilometres. The best possible model fit to explain the genetic results combined current direction, rocky habitat extension and distance along the coast among rocky sites. We conclude that a combination of variable suitable habitat and oceanographic transport is a useful predictor of genetic structure. This relationship provides insight into the mechanisms of dispersal and the role of life-history traits. Our results highlight the importance of spatially explicit modelling of stepping-stone dynamics and oceanographic directional transport coupled with habitat suitability, to better describe and predict marine population structure and differentiation. This study also suggests the appropriate spatial scales for the conservation, restoration and management of species that are increasingly affected by habitat modifications. © 2016 John Wiley & Sons Ltd.

  17. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective in which there is a systematic and orderly progression from problem formulation to solution or a network perspective in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple with clear causal pathways and responsibilities or complex with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model that is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions taking these steps in no particular order. The final model is the 'garbage can' model fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  18. Characteristics of U.S. Veteran Patients with Major Depressive Disorder who require "next-step" treatments: A VAST-D report.

    PubMed

    Zisook, Sidney; Tal, Ilanit; Weingart, Kimberly; Hicks, Paul; Davis, Lori L; Chen, Peijun; Yoon, Jean; Johnson, Gary R; Vertrees, Julia E; Rao, Sanjai; Pilkinton, Patricia D; Wilcox, James A; Sapra, Mamta; Iranmanesh, Ali; Huang, Grant D; Mohamed, Somaia

    2016-12-01

    Finding effective and lasting treatments for patients with Major Depressive Disorder (MDD) that fail to respond optimally to initial standard treatment is a critical public health imperative. Understanding the nature and characteristics of patients prior to initiating "next-step" treatment is an important component of identifying which specific treatments are best suited for individual patients. We describe clinical features and demographic characteristics of a sample of Veterans who enrolled in a "next-step" clinical trial after failing to achieve an optimal outcome from at least one well-delivered antidepressant trial. 1522 Veteran outpatients with nonpsychotic MDD completed assessments prior to being randomized to study treatment. Data is summarized and presented in terms of demographic, social, historical and clinical features and compared to a similar, non-Veteran sample. Participants were largely male and white, with about half unmarried and half unemployed. They were moderately severely depressed, with about one-third reporting recent suicidal ideation. More than half had chronic and/or recurrent depression. General medical and psychiatric comorbidities were highly prevalent, particularly PTSD. Many had histories of childhood adversity and bereavement. Participants were impaired in multiple domains of their lives and had negative self-worth. These results may not be generalizable to females, and some characteristics may be specific to Veterans of US military service. There was insufficient data on age of clinical onset and depression subtypes, and three novel measures were not psychometrically validated. Characterizing VAST-D participants provides important information to help clinicians understand features that may optimize "next-step" MDD treatments. Published by Elsevier B.V.

  19. Stepped care versus face-to-face cognitive behavior therapy for panic disorder and social anxiety disorder: Predictors and moderators of outcome.

    PubMed

    Haug, Thomas; Nordgreen, Tine; Öst, Lars-Göran; Kvale, Gerd; Tangen, Tone; Andersson, Gerhard; Carlbring, Per; Heiervang, Einar R; Havik, Odd E

    2015-08-01

    To investigate predictors and moderators of treatment outcome by comparing immediate face-to-face cognitive behavioral therapy (FtF-CBT) to a Stepped Care treatment model comprising three steps: Psychoeducation, Internet-delivered CBT, and FtF-CBT for panic disorder (PD) and social anxiety disorder (SAD). Patients (N = 173) were recruited from nine public mental health out-patient clinics and randomized to immediate FtF-CBT or Stepped Care treatment. Characteristics related to social functioning, impairment from the anxiety disorder, and comorbidity was investigated as predictors and moderators by treatment format and diagnosis in multiple regression analyses. Lower social functioning, higher impairment from the anxiety disorder, and a comorbid cluster C personality disorder were associated with significantly less improvement, particularly among patients with PD. Furthermore, having a comorbid anxiety disorder was associated with a better treatment outcome among patients with PD but not patients with SAD. Patients with a comorbid depression had similar outcomes from the different treatments, but patients without comorbid depression had better outcomes from immediate FtF-CBT compared to guided self-help. In general, the same patient characteristics appear to be associated with the treatment outcome for CBT provided in low- and high-intensity formats when treated in public mental health care clinics. The findings suggest that patients with lower social functioning and higher impairment from their anxiety disorder benefit less from these treatments and may require more adapted and extensive treatment. CLINICALTRIALS.GOV: Identifier: NCT00619138. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Atomic Resolution in Situ Imaging of a Double-Bilayer Multistep Growth Mode in Gallium Nitride Nanowires

    DOE PAGES

    Gamalski, A. D.; Tersoff, J.; Stach, E. A.

    2016-04-13

    We study the growth of GaN nanowires from liquid Au–Ga catalysts using environmental transmission electron microscopy. GaN wires grow in either (112̄0) or (11̄00) directions, by the addition of {11̄00} double bilayers via step flow with multiple steps. Step-train growth is not typically seen with liquid catalysts, and we suggest that it results from low step mobility related to the unusual double-height step structure. Finally, the results here illustrate the surprising dynamics of catalytic GaN wire growth at the nanoscale and highlight striking differences between the growth of GaN and other III–V semiconductor nanowires.
