Delay Tolerant Networking - Bundle Protocol Simulation
NASA Technical Reports Server (NTRS)
Segui, John; Jennings, Esther
2006-01-01
In this paper, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
A Generalized Model of E-trading for GSR Fair Exchange Protocol
NASA Astrophysics Data System (ADS)
Konar, Debajyoti; Mazumdar, Chandan
In this paper we propose a generalized model of E-trading for the development of GSR Fair Exchange Protocols. Based on the model, a method is described for implementing E-trading protocols that ensure fairness in the true sense without using an additional trusted third party for which either party would have to pay. The model provides the scope to include the correctness of the product, money atomicity and customer's anonymity properties within an E-trading protocol. We conclude this paper by indicating the area of applicability of our model.
Investigation of Interference Models for RFID Systems.
Zhang, Linchao; Ferrero, Renato; Gandino, Filippo; Rebaudengo, Maurizio
2016-02-04
The reader-to-reader collision in an RFID system is a challenging problem for communications technology. In order to model the interference between RFID readers, different interference models have been proposed, mainly based on two approaches: single and additive interference. The former only considers the interference from one reader within a certain range, whereas the latter takes into account the sum of all of the simultaneous interferences in order to emulate a more realistic behavior. Although the difference between the two approaches has been theoretically analyzed in previous research, their effects on the estimated performance of the reader-to-reader anti-collision protocols have not yet been investigated. In this paper, the influence of the interference model on the anti-collision protocols is studied by simulating a representative state-of-the-art protocol. The results presented in this paper highlight that the use of additive models, although more computationally intensive, is mandatory to improve the performance of anti-collision protocols.
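The distinction between the two interference approaches described above can be sketched in a few lines. This is a minimal illustration, not the paper's simulation: reader positions, transmit powers, and the path-loss exponent are invented for the example.

```python
# Sketch of the two reader-to-reader interference models: "single" keeps
# only the strongest interferer, "additive" sums all simultaneous ones.
# All numbers below are illustrative assumptions.

def path_gain(d, exponent=2.0):
    """Free-space-like path gain at distance d (metres)."""
    return 1.0 / (d ** exponent)

def sir_single(p_signal, d_signal, interferers):
    """Single-interference model: only the strongest interferer counts."""
    worst = max(p * path_gain(d) for p, d in interferers)
    return p_signal * path_gain(d_signal) / worst

def sir_additive(p_signal, d_signal, interferers):
    """Additive model: sum the power of all simultaneous interferers."""
    total = sum(p * path_gain(d) for p, d in interferers)
    return p_signal * path_gain(d_signal) / total

# Three unit-power interfering readers at 30, 35 and 40 m.
interf = [(1.0, 30.0), (1.0, 35.0), (1.0, 40.0)]
s_single = sir_single(1.0, 5.0, interf)
s_additive = sir_additive(1.0, 5.0, interf)
# The additive model always yields a lower (more pessimistic) SIR, which is
# why it predicts collisions that the single model misses.
assert s_additive < s_single
```

The gap between the two estimates grows with reader density, which is consistent with the abstract's point that the choice of model matters when evaluating anti-collision protocols.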
Small-molecule ligand docking into comparative models with Rosetta
Combs, Steven A; DeLuca, Samuel L; DeLuca, Stephanie H; Lemmon, Gordon H; Nannemann, David P; Nguyen, Elizabeth D; Willis, Jordan R; Sheehan, Jonathan H; Meiler, Jens
2017-01-01
Structure-based drug design is frequently used to accelerate the development of small-molecule therapeutics. Although substantial progress has been made in X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, the availability of high-resolution structures is limited owing to the frequent inability to crystallize or obtain sufficient NMR restraints for large or flexible proteins. Computational methods can be used to both predict unknown protein structures and model ligand interactions when experimental data are unavailable. This paper describes a comprehensive and detailed protocol using the Rosetta modeling suite to dock small-molecule ligands into comparative models. In the protocol presented here, we review the comparative modeling process, including sequence alignment, threading and loop building. Next, we cover docking a small-molecule ligand into the protein comparative model. In addition, we discuss criteria that can improve ligand docking into comparative models. Finally, and importantly, we present a strategy for assessing model quality. The entire protocol is presented on a single example selected solely for didactic purposes. The results are therefore not representative and do not replace benchmarks published elsewhere. We also provide an additional tutorial so that the user can gain hands-on experience in using Rosetta. The protocol should take 5–7 h, with additional time allocated for computer generation of models. PMID:23744289
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, M.D.; Saum-Manning, L.; Houck, F.
Volume I of this Review traces the origins of the Model Additional Protocol. It covers the period from 1991, when events in Iraq triggered an intensive review of the safeguards system, until 1996, when the IAEA Board of Governors established Committee 24 to negotiate a new protocol to the safeguards agreement. The period from 1991 to 1996 set the stage for this negotiation and shaped its outcome in important ways. During this five-year period, many proposals for strengthening safeguards were suggested and reviewed. Some proposals were dropped, for example, the suggestion by the IAEA Secretariat to verify certain imports, and others were refined. A rough consensus was established about the directions in which the international community wanted to go, and this was reflected in the draft of an additional protocol that was submitted to the IAEA Board of Governors on May 6, 1996 in document GOV/2863, Strengthening the Effectiveness and Improving the Efficiency of the Safeguards System - Proposals For Implementation Under Complementary Legal Authority, A Report by the Director General. This document ended with a recommendation that 'the Board, through an appropriate mechanism, finalize the required legal instrument taking as a basis the draft protocol proposed by the Secretariat and the explanation of the measures contained in this document.'
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2007-01-01
This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
A standard protocol for describing individual-based and agent-based models
Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.
2006-01-01
Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.
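The three-block, seven-element structure of ODD summarized above lends itself to a machine-readable skeleton. The sketch below is an illustrative aid for checking the completeness of a model description; only the block and element names come from the text, the helper function is an invention.

```python
# Skeleton of the ODD protocol: three blocks (Overview, Design concepts,
# Details) subdivided into seven elements, as described in the abstract.
ODD_TEMPLATE = {
    "Overview": {
        "Purpose": "",
        "State variables and scales": "",
        "Process overview and scheduling": "",
    },
    "Design concepts": {
        "Design concepts": "",
    },
    "Details": {
        "Initialization": "",
        "Input": "",
        "Submodels": "",
    },
}

def missing_elements(description):
    """Return the ODD elements a model description has left empty.

    `description` maps element names to their text; a hypothetical helper,
    not part of the published protocol.
    """
    return [
        element
        for block in ODD_TEMPLATE.values()
        for element in block
        if not description.get(element)
    ]
```

For example, `missing_elements({"Purpose": "simulate shorebird foraging"})` would flag the six remaining elements still to be written.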
NASA Astrophysics Data System (ADS)
Hartley, Christopher Ahlvin
Current building energy auditing techniques are outdated and lack targeted, actionable information. These analyses use only one year's worth of monthly electricity and gas bills to define energy conservation and efficiency measures. Such limited data sets cannot provide robust, directed energy reduction recommendations. The need is apparent for an overhaul of existing energy audit protocols to utilize all data available from the building's utility provider, installed energy management system (EMS), and sub-metering devices. This thesis analyzed the current state of the art in energy audits, generated a next generation energy audit protocol, and conducted both audit types on four case study buildings to find out what additional information can be obtained from additional data sources and increased data-gathering resolutions. Energy data from each case study building were collected using a variety of means including utility meters, whole building energy meters, EMS systems, and sub-metering devices. In addition to conducting an energy analysis for each case study building using the current and next generation energy audit protocols, two building energy models were created using the programs eQuest and EnergyPlus. The current and next generation energy audit protocol results were compared to one another upon completion. The results show that using the current audit protocols, only variations in season are apparent. Results from the developed next generation energy audit protocols show that in addition to seasonal variations, building heating, ventilation and air conditioning (HVAC) schedules, occupancy schedules, baseline and peak energy demand levels, and malfunctioning equipment can be found. This new protocol may also be used to quickly generate accurate building models because of the increased resolution that yields scheduling information.
The developed next generation energy auditing protocol is scalable and can work for many building types across the United States, and perhaps the world.
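One of the gains the abstract attributes to higher-resolution data, recovering baseline and peak demand and a rough occupancy schedule, can be sketched simply. This is an illustrative toy, not the thesis protocol; the interval length, threshold rule, and synthetic load profile are all assumptions.

```python
# Sketch: extract baseline, peak, and "occupied" hours from interval demand
# data, information that monthly bills cannot reveal. Threshold choice is
# an illustrative assumption.

def demand_profile(readings_kw):
    """Baseline, peak, and occupied hours from hourly demand readings."""
    baseline = min(readings_kw)
    peak = max(readings_kw)
    # Call an hour "occupied" if demand is above the midpoint of the range.
    threshold = baseline + 0.5 * (peak - baseline)
    occupied_hours = [h for h, kw in enumerate(readings_kw) if kw >= threshold]
    return baseline, peak, occupied_hours

# One synthetic day: 20 kW overnight load, 60 kW plateau during work hours.
day = [20] * 7 + [60] * 10 + [20] * 7
base, peak, occupied = demand_profile(day)
assert (base, peak) == (20, 60)
assert occupied == list(range(7, 17))   # roughly 07:00 to 17:00
```

The same idea scales down to 15-minute EMS or sub-meter intervals, which is where equipment schedules and malfunctions become visible.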
Simulation Modeling and Performance Evaluation of Space Networks
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John
2006-01-01
In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance.
We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
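The store-and-forward behaviour with custody transfer described above can be illustrated with a toy in-memory model. This is a teaching sketch, not the MACHETE BP model: node names, contact timing, and the bundle format are invented.

```python
# Toy DTN store-and-forward: a node holds custody of bundles while the link
# is down and forwards them only when a contact window opens.
from collections import deque

class DtnNode:
    """Minimal DTN node sketch with custody-based storage."""
    def __init__(self, name):
        self.name = name
        self.custody = deque()   # bundles held awaiting a forwarding contact

    def accept(self, bundle):
        # Custody transfer: this node is now responsible for delivery.
        self.custody.append(bundle)

    def contact(self, neighbour):
        """Forward all stored bundles while a link to `neighbour` is up."""
        while self.custody:
            neighbour.accept(self.custody.popleft())

relay, lander = DtnNode("relay"), DtnNode("lander")
for i in range(3):
    relay.accept(f"bundle-{i}")   # link to lander is down: store, don't drop
relay.contact(lander)             # contact window opens: forward everything
assert len(lander.custody) == 3 and not relay.custody
```

Intermittent scheduled contacts, exactly what SOAP-derived contact opportunities feed into the real simulation, are what make this buffering essential.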
The Role of Additional Pulses in Electropermeabilization Protocols
Suárez, Cecilia; Soba, Alejandro; Maglietti, Felipe; Olaiz, Nahuel; Marshall, Guillermo
2014-01-01
Electropermeabilization (EP) based protocols such as those applied in medicine, food processing or environmental management are well established and widely used. The applied voltage, as well as tissue electric conductivity, is of utmost importance for assessing the final electropermeabilized area and thus EP effectiveness. Experimental results from the literature report that, under certain EP protocols, consecutive pulses increase tissue electric conductivity and even the amount of permeabilization. Here we introduce a theoretical model that takes this effect into account in the application of an EP-based protocol, and its validation with experimental measurements. The theoretical model describes the electric field distribution by a nonlinear Laplace equation with a variable conductivity coefficient depending on the electric field, the temperature and the number of pulses, and the Pennes bioheat equation for temperature variations. In the experiments, a vegetable tissue model (potato slice) is used for measuring electric currents and tissue electropermeabilized area in different EP protocols. Experimental measurements show that, during sequential pulses at constant applied voltage, the electric current density and the blackened (electropermeabilized) area increase. This behavior can only be attributed to a rise in the electric conductivity due to a higher number of pulses. Accordingly, we present a theoretical model of an EP protocol that correctly predicts the increase in electric current density observed experimentally as pulses are added. The model also demonstrates that the electric current increase is due to a rise in the electric conductivity, in turn induced by temperature and pulse number, with no significant changes in the electric field distribution.
The EP model introduced, based on a novel formulation of the electric conductivity, leads to a more realistic description of the EP phenomenon, hopefully providing more accurate predictions of treatment outcomes. PMID:25437512
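The dependence of conductivity on field, temperature, and pulse count described above can be sketched as a simple function. The sigmoidal form and every constant below are illustrative assumptions, not the authors' calibrated formulation; the point is only the qualitative trend the experiments report.

```python
# Hedged sketch: conductivity sigma rising with electric field E, temperature
# T, and pulse count, so current density J = sigma * E grows with added
# pulses at constant voltage. All parameter values are invented.
import math

def sigma(E, T, n_pulses, sigma0=0.03, k_E=1e-4, k_T=0.015, k_n=0.05,
          T_ref=20.0):
    """Electric conductivity (S/m), increasing in E (V/m), T (deg C),
    and number of pulses. Functional form is an illustrative assumption."""
    field_term = 1.0 / (1.0 + math.exp(-k_E * (E - 40000.0)))  # sigmoid in E
    return (sigma0 * (1.0 + field_term)
            * (1.0 + k_T * (T - T_ref))
            * (1.0 + k_n * n_pulses))

def current_density(E, T, n_pulses):
    """Local Ohm's law: J = sigma * E."""
    return sigma(E, T, n_pulses) * E

# More pulses at the same applied field -> higher current density, the trend
# measured on the potato-slice model.
assert current_density(50000.0, 25.0, 8) > current_density(50000.0, 25.0, 1)
```

In the paper's full model this conductivity coefficient sits inside a nonlinear Laplace equation coupled to the Pennes bioheat equation, rather than the pointwise evaluation shown here.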
A Secure Authenticated Key Exchange Protocol for Credential Services
NASA Astrophysics Data System (ADS)
Shin, Seonghan; Kobara, Kazukuni; Imai, Hideki
In this paper, we propose a leakage-resilient and proactive authenticated key exchange (called LRP-AKE) protocol for credential services which provides not only a higher level of security against leakage of stored secrets but also secrecy of the private key with respect to the server involved. We show that the LRP-AKE protocol is provably secure in the random oracle model with a reduction to the computational Diffie-Hellman problem. In addition, we discuss some possible applications of the LRP-AKE protocol.
Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.
2011-01-01
Background: The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high-resolution is assured by the data itself. However, at low-resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. Methodology: An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results: The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models with Cα positions that have RMSDs that are 0.18 Å more similar to the reference high-resolution structure, Ramachandran scores improved by 13%, and clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. PMID:21246043
Near-optimal protocols in complex nonequilibrium transformations
Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...
2016-08-29
The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.
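The idea of sampling a dissipation-biased ensemble of protocols can be illustrated with a toy one-dimensional analogue. This sketch is not the paper's Ising calculation: the quadratic "dissipation" proxy (which penalizes fast changes in the control parameter) and all parameters are invented for the example.

```python
# Toy Metropolis sampling in protocol space, biased toward low dissipation.
# A protocol drives a control parameter from 0 to 1 in n_steps increments;
# endpoints are fixed, interior points are perturbed and accepted with
# probability exp(-beta * increase_in_dissipation).
import math
import random

def dissipation(protocol):
    """Toy proxy: rapid changes in the control parameter dissipate more."""
    return sum((b - a) ** 2 for a, b in zip(protocol, protocol[1:]))

def sample_protocols(n_steps, n_samples, beta=50.0, seed=0):
    """Return the dissipation of each sampled protocol in the biased walk."""
    rng = random.Random(seed)
    proto = [i / n_steps for i in range(n_steps + 1)]   # start at linear ramp
    dissipations = []
    for _ in range(n_samples):
        i = rng.randrange(1, n_steps)                   # pick interior point
        trial = proto[:]
        trial[i] += rng.uniform(-0.1, 0.1)
        d_new, d_old = dissipation(trial), dissipation(proto)
        if d_new <= d_old or rng.random() < math.exp(-beta * (d_new - d_old)):
            proto = trial                               # Metropolis accept
        dissipations.append(dissipation(proto))
    return dissipations

samples = sample_protocols(n_steps=10, n_samples=2000)
# With fixed endpoints the linear ramp is optimal (dissipation 1/n_steps);
# the biased ensemble hovers near, but above, that optimum.
assert min(samples) >= 1.0 / 10 - 1e-9
```

The spread of `samples` above the optimum mirrors the paper's observation that many distinct protocols achieve near-optimal average dissipation.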
Nam, Junghyun; Choo, Kim-Kwang Raymond; Paik, Juryon; Won, Dongho
2014-01-01
While a number of protocols for password-only authenticated key exchange (PAKE) in the 3-party setting have been proposed, it still remains a challenging task to prove the security of a 3-party PAKE protocol against insider dictionary attacks. To the best of our knowledge, there is no 3-party PAKE protocol that carries a formal proof, or even definition, of security against insider dictionary attacks. In this paper, we present the first 3-party PAKE protocol proven secure against both online and offline dictionary attacks as well as insider and outsider dictionary attacks. Our construct can be viewed as a protocol compiler that transforms any 2-party PAKE protocol into a 3-party PAKE protocol with 2 additional rounds of communication. We also present a simple and intuitive approach of formally modelling dictionary attacks in the password-only 3-party setting, which significantly reduces the complexity of proving the security of 3-party PAKE protocols against dictionary attacks. In addition, we investigate the security of the well-known 3-party PAKE protocol, called GPAKE, due to Abdalla et al. (2005, 2006), and demonstrate that the security of GPAKE against online dictionary attacks depends heavily on the composition of its two building blocks, namely a 2-party PAKE protocol and a 3-party key distribution protocol.
CREATION OF THE MODEL ADDITIONAL PROTOCOL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houck, F.; Rosenthal, M.; Wulf, N.
In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.
Nam, Junghyun; Choo, Kim-Kwang Raymond
2014-01-01
While a number of protocols for password-only authenticated key exchange (PAKE) in the 3-party setting have been proposed, it still remains a challenging task to prove the security of a 3-party PAKE protocol against insider dictionary attacks. To the best of our knowledge, there is no 3-party PAKE protocol that carries a formal proof, or even definition, of security against insider dictionary attacks. In this paper, we present the first 3-party PAKE protocol proven secure against both online and offline dictionary attacks as well as insider and outsider dictionary attacks. Our construct can be viewed as a protocol compiler that transforms any 2-party PAKE protocol into a 3-party PAKE protocol with 2 additional rounds of communication. We also present a simple and intuitive approach of formally modelling dictionary attacks in the password-only 3-party setting, which significantly reduces the complexity of proving the security of 3-party PAKE protocols against dictionary attacks. In addition, we investigate the security of the well-known 3-party PAKE protocol, called GPAKE, due to Abdalla et al. (2005, 2006), and demonstrate that the security of GPAKE against online dictionary attacks depends heavily on the composition of its two building blocks, namely a 2-party PAKE protocol and a 3-party key distribution protocol. PMID:25309956
Yu, Wenbo; Lakkaraju, Sirish Kaushik; Raman, E. Prabhu; Fang, Lei; MacKerell, Alexander D.
2015-01-01
Receptor-based pharmacophore modeling is an efficient computer-aided drug design technique that uses the structure of the target protein to identify novel leads. However, most methods consider protein flexibility and desolvation effects in a very approximate way, which may limit their use in practice. The Site-Identification by Ligand Competitive Saturation (SILCS) assisted pharmacophore modeling protocol (SILCS-Pharm) was introduced recently to address these issues as SILCS naturally takes both protein flexibility and desolvation effects into account by using full MD simulations to determine 3D maps of the functional group-affinity patterns on a target receptor. In the present work, the SILCS-Pharm protocol is extended to use a wider range of probe molecules including benzene, propane, methanol, formamide, acetaldehyde, methylammonium, acetate and water. This approach removes the previous ambiguity brought by using water as both the hydrogen-bond donor and acceptor probe molecule. The new SILCS-Pharm protocol is shown to yield improved screening results as compared to the previous approach based on three target proteins. Further validation of the new protocol using five additional protein targets showed improved screening compared to those using common docking methods, further indicating improvements brought by the explicit inclusion of additional feature types associated with the wider collection of probe molecules in the SILCS simulations. The advantage of using complementary features and volume constraints, based on exclusion maps of the protein defined from the SILCS simulations, is presented. In addition, re-ranking using SILCS-based ligand grid free energies is shown to enhance the diversity of identified ligands for the majority of targets. These results suggest that the SILCS-Pharm protocol will be of utility in rational drug design. PMID:25622696
NASA Astrophysics Data System (ADS)
Jerome, N. P.; d'Arcy, J. A.; Feiweier, T.; Koh, D.-M.; Leach, M. O.; Collins, D. J.; Orton, M. R.
2016-12-01
The bi-exponential intravoxel-incoherent-motion (IVIM) model for diffusion-weighted MRI (DWI) fails to account for differential T2s in the model compartments, resulting in overestimation of the pseudodiffusion fraction f. An extended model, T2-IVIM, allows removal of the confounding echo-time (TE) dependence of f, and provides direct compartment T2 estimates. Two consented healthy volunteer cohorts (n = 5, 6) underwent DWI comprising multiple TE/b-value combinations (Protocol 1: TE = 62-102 ms, b = 0-250 s/mm², 30 combinations. Protocol 2: 8 b-values 0-800 s/mm² at TE = 62 ms, with 3 additional b-values 0-50 s/mm² at TE = 80, 100 ms, scanned twice). Data from liver ROIs were fitted with IVIM at individual TEs, and with the T2-IVIM model using all data. Repeat-measures coefficients of variation were assessed for Protocol 2. Conventional IVIM modelling at individual TEs (Protocol 1) demonstrated apparent f increasing with longer TE: 22.4 ± 7% (TE = 62 ms) to 30.7 ± 11% (TE = 102 ms). T2-IVIM model fitting accounted for all data variation. Fitting of Protocol 2 data using T2-IVIM yielded reduced f estimates (IVIM: 27.9 ± 6%, T2-IVIM: 18.3 ± 7%), as well as T2 = 42.1 ± 7 ms and 77.6 ± 30 ms for the true and pseudodiffusion compartments, respectively. A reduced Protocol 2 dataset yielded comparable results in a clinical time frame (11 min). The confounding dependence of IVIM f on TE can be accounted for using additional b/TE images and the extended T2-IVIM model.
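The two-compartment signal model behind the abstract, and the TE dependence of the apparent fraction it explains, can be written out directly. The T2 values below are taken loosely from the abstract (about 42 ms tissue, 78 ms pseudodiffusion); the diffusion coefficients and f are typical liver-like assumptions, not reported values.

```python
# T2-IVIM signal model sketch: each compartment carries its own T2 decay in
# addition to (pseudo)diffusion attenuation. Parameter defaults are
# illustrative assumptions.
import math

def t2_ivim_signal(b, te, s0=1.0, f=0.18, d_star=0.05, d=0.0012,
                   t2_pseudo=78.0, t2_tissue=42.0):
    """Signal at b-value b (s/mm^2) and echo time te (ms)."""
    perfusion = f * math.exp(-te / t2_pseudo) * math.exp(-b * d_star)
    tissue = (1.0 - f) * math.exp(-te / t2_tissue) * math.exp(-b * d)
    return s0 * (perfusion + tissue)

def apparent_f(te, f=0.18, t2_pseudo=78.0, t2_tissue=42.0):
    """Pseudodiffusion fraction a T2-blind IVIM fit would see at this TE."""
    p = f * math.exp(-te / t2_pseudo)
    t = (1.0 - f) * math.exp(-te / t2_tissue)
    return p / (p + t)

# Apparent f grows with TE because the pseudodiffusion compartment has the
# longer T2, which is exactly the confound the extended model removes.
assert apparent_f(102.0) > apparent_f(62.0)
```

Fitting `t2_ivim_signal` jointly across all b/TE combinations, rather than fitting conventional IVIM at each TE, is what disentangles f from the compartment T2s.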
A Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2011-01-01
This report presents a self-stabilizing distributed clock synchronization protocol in the absence of faults in the system. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. We present an outline of a deductive proof of the correctness of the protocol. A model of the protocol was mechanically verified using the Symbolic Model Verifier (SMV) for a variety of topologies. Results of the mechanical proof of the correctness of the protocol are provided. The model checking results have verified the correctness of the protocol as it applies to networks with unidirectional and bidirectional links. In addition, the results confirm the claims of determinism and linear convergence. As a result, we conjecture that the protocol solves the general case of this problem. We also present several variations of the protocol and argue that this synchronization protocol is an emergent system.
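A feel for convergence from arbitrary initial clocks on a digraph can be had from a toy round-based rule. This is a standard teaching simplification (each node adopts the largest clock it hears, then ticks), faithful only in spirit to the verified protocol; the topology and initial values are invented.

```python
# Toy clock-sync sketch on a 1-connected ring digraph: anonymous nodes
# repeatedly adopt the max clock heard from in-neighbours, with no central
# clock, converging from arbitrary initial states. Not the NASA protocol.

def sync_round(clocks, edges):
    """One exchange round over directed edges (src, dst), then one tick."""
    new = list(clocks)
    for src, dst in edges:
        new[dst] = max(new[dst], clocks[src])   # adopt larger clock heard
    return [c + 1 for c in new]                 # everyone then ticks once

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # unidirectional ring
clocks = [17, 3, 99, 42]                        # arbitrary initial state
for _ in range(len(clocks)):                    # network-diameter rounds
    clocks = sync_round(clocks, edges)
assert len(set(clocks)) == 1                    # all nodes now agree
```

The real protocol additionally handles bounded drift and proves deterministic, linear-time convergence, properties this max-adoption toy only gestures at.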
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
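The Lorentzian-type distance restraints mentioned above have a characteristic property worth showing: unlike harmonic restraints, their penalty saturates for badly violated distances, so a few wrong template-derived restraints cannot dominate the energy. The exact functional form, width, and depth below are illustrative assumptions, not the PMS energy function.

```python
# Hedged sketch of a Lorentzian-type distance restraint: a well centred on
# the template distance d0 whose influence decays slowly with |d - d0|.
# Width/depth values are invented for illustration.

def lorentzian_restraint(d, d0, width=1.0, depth=1.0):
    """Restraint energy at current distance d given target d0 (angstroms)."""
    return -depth * width ** 2 / (width ** 2 + (d - d0) ** 2)

def restraint_energy(distances, targets):
    """Sum of Lorentzian restraints over template-derived distance pairs."""
    return sum(lorentzian_restraint(d, d0)
               for d, d0 in zip(distances, targets))

# Minimum at d == d0; a grossly violated restraint contributes almost
# nothing rather than an unbounded penalty.
assert lorentzian_restraint(5.0, 5.0) == -1.0
assert abs(lorentzian_restraint(50.0, 5.0)) < 1e-3
```

This tolerance to outliers is one reason such restraints combine well with multiple, possibly conflicting, template structures.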
Seo, Eunhui; Kang, Hwansu; Lim, Oh-Kyung; Jun, Hee-Sook
2018-05-24
Mature skeletal muscle cells cannot be expanded in culture systems, making it difficult to construct an in vitro model for muscle diseases. To establish an efficient protocol for myogenic differentiation of human adipose tissue-derived stem cells (hADSCs), we investigated whether addition of IL-6 and/or myocyte-conditioned media (CM) to conventional differentiation media can shorten the differentiation period. hADSCs were differentiated to myocytes using the conventional protocol or a modified protocol with the addition of 25 pg/mL IL-6 and/or C2C12 CM (25% v/v). The expression of MyoD and myogenin mRNA was significantly higher at 5-6 days after differentiation using the modified protocol than with the conventional protocol. mRNA and protein expression of myosin heavy chain, a marker of myotubes, was significantly upregulated at 28 and 42 days of differentiation using the modified protocol, and the level achieved after a 4-week differentiation period was similar to that achieved at 6 weeks using the conventional protocol. The expression of p-STAT3 was significantly increased when the modified protocol was used. Similarly, addition of colivelin, a STAT3 activator, instead of IL-6 and C2C12 CM promoted the myogenic differentiation of hADSCs. The modified protocol improved differentiation efficiency and reduced the time required for differentiation of myocytes, which may save cost and time when preparing myocytes for cell therapies and drug discovery.
Operational Implementation of a 2-Hour Prebreathe Protocol for International Space Station
NASA Technical Reports Server (NTRS)
Waligora, James M.; Conkin, J.; Foster, P. P.; Schneider, S.; Loftin, Karin C.; Gernhardt, Michael L.; Vann, R.
2000-01-01
Procedures, equipment, and analytical techniques were developed to implement the ground-tested 2-hour protocol for in-flight operations. The methods are: 1) The flight protocol incorporates additional safety margin over the ground-tested protocol. This includes up to 20 min of additional time on enriched O2 during suit purge and pressure check, increased duration of extravehicular activity (EVA) preparation exercise during O2 prebreathing (up to 90 min vs. the tested 24 min), and reduced rates of depressurization. The ground test observations were combined with model projections of the conservative measures (using statistical models from Duke University and NASA JSC) to bound the risk of Type I and Type II decompression sickness (DCS). 2) An in-flight exercise device using the in-flight ergometer and elastic tubes for upper-body exercise was developed to replicate the dual-cycle exercise in the ground trials. 3) A new in-flight breathing system was developed and man-tested. 4) A process to monitor in-flight experience with the protocol, including the use of an in-suit Doppler bubble monitor when available, was developed. The results are: 1) The model projections of the conservative factors of the operational protocol were shown to reduce the risk of DCS to levels consistent with the observations of no DCS to date in the Shuttle program. 2) Crossover trials of the dual-cycle ergometer used in ground tests and the in-flight exercise system verified that O2 consumption and the percentage division of work between upper and lower body were not significantly different at the p = 0.05 level. 3) The in-flight breathing system was demonstrated to support work rates generating 75% of VO2max in 95th-percentile subjects. 4) An in-flight monitoring plan with acceptance criteria was put in place for the 2-hour prebreathe protocol.
The conclusions are: the 2-hour protocol has been approved for flight, and all implementation efforts are in place to allow use of the protocol as early as flight ISS 7A, now scheduled for November 2000.
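Prebreathe risk models generally start from exponential tissue nitrogen washout during oxygen breathing. A single-compartment sketch of that idea (with an illustrative half-time and initial tension, not the Duke/JSC statistical models the authors used) looks like:

```python
import math

def tissue_n2(t_min, p0=11.6, half_time=360.0):
    """Tissue N2 tension (psia) after t_min minutes of 100% O2
    prebreathe, for a single compartment with the given half-time.
    p0 ~ 11.6 psia approximates sea-level alveolar N2 tension; the
    360-min half-time is an illustrative slow compartment."""
    return p0 * math.exp(-math.log(2.0) * t_min / half_time)

p_after_2h = tissue_n2(120.0)   # tension remaining after a 2-h prebreathe
```

DCS risk models then relate the remaining tissue tension to the suit pressure to estimate bubble formation probability; exercise during prebreathe accelerates washout by raising perfusion, which is why the protocol's exercise component matters.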
Quantum processing by remote quantum control
NASA Astrophysics Data System (ADS)
Qiang, Xiaogang; Zhou, Xiaoqi; Aungskunsiri, Kanin; Cable, Hugo; O'Brien, Jeremy L.
2017-12-01
Client-server models enable computations to be hosted remotely on quantum servers. We present a novel protocol for realizing this task, with practical advantages when using technology feasible in the near term. Client tasks are realized as linear combinations of operations implemented by the server, where the linear coefficients are hidden from the server. We report on an experimental demonstration of our protocol using linear optics, which realizes linear combination of two single-qubit operations by a remote single-qubit control. In addition, we explain when our protocol can remain efficient for larger computations, as well as some ways in which privacy can be maintained using our protocol.
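The core idea, realizing a client operation as a linear combination of server operations with the coefficients hidden from the server, can be sketched numerically. This is a toy state-vector calculation of the effect of a successful run, not the heralded linear-optics implementation; the operator names are assumptions:

```python
import math

# Single-qubit operations as 2x2 complex matrices (lists of rows)
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]

def lin_comb_apply(alpha, beta, A, B, psi):
    """Apply the (generally non-unitary) operator alpha*A + beta*B
    to state psi and renormalize -- the net effect a successful run
    of a linear-combination protocol produces on the client's qubit."""
    M = [[alpha * A[r][c] + beta * B[r][c] for c in range(2)] for r in range(2)]
    out = [M[r][0] * psi[0] + M[r][1] * psi[1] for r in range(2)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in out))
    return [a / norm for a in out]

psi = [1, 0]                            # |0>
out = lin_comb_apply(1, 1, I, X, psi)   # (I + X)|0> renormalized -> |+>
```

Only the client knows alpha and beta; the server merely implements the fixed operations A and B, which is what keeps the client's computation private.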
NASA Technical Reports Server (NTRS)
Ruane, Alex; Rosenzweig, Cynthia; Elliott, Joshua; Antle, John
2015-01-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has been working since 2010 to construct a protocol-based framework enabling regional assessments (led by regional experts and modelers) that can provide consistent inputs to global economic and integrated assessment models. These global models can then relay important global-level information that drives regional decision-making and outcomes throughout an interconnected agricultural system. AgMIP's community of nearly 800 climate, crop, livestock, economics, and IT experts has improved the state-of-the-art through model intercomparisons, validation exercises, regional integrated assessments, and the launch of AgMIP programs on all six arable continents. AgMIP is now launching Coordinated Global and Regional Assessments (CGRA) of climate change impacts on agriculture and food security to link global and regional crop and economic models using a protocol-based framework. The CGRA protocols are being developed to utilize historical observations, climate projections, and RCPs/SSPs from CMIP5 (and potentially CMIP6), and will examine stakeholder-driven agricultural development and adaptation scenarios to provide cutting-edge assessments of climate change's impact on agriculture and food security. These protocols will build on the foundation of established protocols from AgMIP's 30+ activities, and will emphasize the use of multiple models, scenarios, and scales to enable an accurate assessment of related uncertainties. The CGRA is also designed to provide the outputs necessary to feed into integrated assessment models (IAMs), nutrition and food security assessments, nitrogen and carbon cycle models, and additional impact-sector assessments (e.g., water resources, land-use, biomes, urban areas). 
This presentation will describe the current status of CGRA planning and initial prototype experiments to demonstrate key aspects of the protocols before wider implementation ahead of the IPCC Sixth Assessment Report.
NASA Astrophysics Data System (ADS)
Ruane, A. C.; Rosenzweig, C.; Antle, J. M.; Elliott, J. W.
2015-12-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has been working since 2010 to construct a protocol-based framework enabling regional assessments (led by regional experts and modelers) that can provide consistent inputs to global economic and integrated assessment models. These global models can then relay important global-level information that drive regional decision-making and outcomes throughout an interconnected agricultural system. AgMIP's community of nearly 800 climate, crop, livestock, economics, and IT experts has improved the state-of-the-art through model intercomparisons, validation exercises, regional integrated assessments, and the launch of AgMIP programs on all six arable continents. AgMIP is now launching Coordinated Global and Regional Assessments (CGRA) of climate change impacts on agriculture and food security to link global and regional crop and economic models using a protocol-based framework. The CGRA protocols are being developed to utilize historical observations, climate projections, and RCPs/SSPs from CMIP5 (and potentially CMIP6), and will examine stakeholder-driven agricultural development and adaptation scenarios to provide cutting-edge assessments of climate change's impact on agriculture and food security. These protocols will build on the foundation of established protocols from AgMIP's 30+ activities, and will emphasize the use of multiple models, scenarios, and scales to enable an accurate assessment of related uncertainties. The CGRA is also designed to provide the outputs necessary to feed into integrated assessment models (IAMs), nutrition and food security assessments, nitrogen and carbon cycle models, and additional impact-sector assessments (e.g., water resources, land-use, biomes, urban areas). 
This presentation will describe the current status of CGRA planning and initial prototype experiments to demonstrate key aspects of the protocols before wider implementation ahead of the IPCC Sixth Assessment Report.
Improved prediction of antibody VL–VH orientation
Marze, Nicholas A.; Lyskov, Sergey; Gray, Jeffrey J.
2016-01-01
Antibodies are important immune molecules with high commercial value and therapeutic interest because of their ability to bind diverse antigens. Computational prediction of antibody structure can quickly reveal valuable information about the nature of these antigen-binding interactions, but only if the models are of sufficient quality. To achieve high model quality during complementarity-determining region (CDR) structural prediction, one must account for the VL–VH orientation. We developed a novel four-metric VL–VH orientation coordinate frame. Additionally, we extended the CDR grafting protocol in RosettaAntibody with a new method that diversifies VL–VH orientation by using 10 VL–VH orientation templates rather than a single one. We tested the multiple-template grafting protocol on two datasets of known antibody crystal structures. During the template-grafting phase, the new protocol improved the fraction of accurate VL–VH orientation predictions from only 26% (12/46) to 72% (33/46) of targets. After the full RosettaAntibody protocol, including CDR H3 remodeling and VL–VH re-orientation, the new protocol produced more candidate structures with accurate VL–VH orientation than the standard protocol in 43/46 targets (93%). The improved ability to predict VL–VH orientation will bolster predictions of other parts of the paratope, including the conformation of CDR H3, a grand challenge of antibody homology modeling. PMID:27276984
William H. McWilliams; Charles D. Canham; Randall S. Morin; Katherine Johnson; Paul Roth; James A. Westfall
2012-01-01
The Forest Inventory and Analysis Program of the Northern Research Station (NRS-FIA) has implemented new Advance Tree Seedling Regeneration (ATSR) protocols that include measurements of seedlings down to 2 inches in height. The addition of ATSR protocols is part of an evaluation of NRS-FIA Phase 3 indicator variables to increase sampling intensity from 1/96,000 acres...
2018-03-13
PROTOCOL #: FDG20160012A; DATE: 13 March 2018; PROTOCOL TITLE: Accelerating Coagulation...
2018-03-09
PROTOCOL #: FDG20170005A; DATE: 9 March 2018; PROTOCOL TITLE: Determining...
Scheid, Lisa-Mareike; Weber, Cornelia; Bopp, Nasrin; Mosqueira, Matias; Fink, Rainer H. A.
2017-01-01
The in vitro motility assay (IVMA) is a technique that enables the measurement of the interaction between actin and myosin, providing a relatively simple model of mechanical muscle function. For actin-myosin IVMA, myosin is immobilized in a measurement chamber, where it converts chemical energy provided by ATP hydrolysis into mechanical energy. The result is the movement of fluorescently labeled actin filaments, which can be recorded microscopically and analyzed quantitatively. The resulting sliding speeds and patterns help characterize the underlying actin-myosin interaction, which can be affected by factors such as mutations or active compounds. Additionally, the modulatory actions of the regulatory proteins tropomyosin and troponin on actin-myosin interaction in the presence of calcium can be studied with the IVMA. The zebrafish is considered a suitable model organism for cardiovascular and skeletal muscle research. In this context, straightforward protocols for the isolation and use of zebrafish muscle proteins in the IVMA would provide a useful tool in molecular studies. Currently, no protocols are available for this purpose. Therefore, we developed fast and easy protocols for the characterization of zebrafish proteins in the IVMA. Our protocols enable the interested researcher to (i) isolate actin from zebrafish skeletal muscle and (ii) extract functionally intact myosin from cardiac and skeletal muscle of individual adult zebrafish. Zebrafish tail muscle actin is isolated after acetone powder preparation, polymerized, and labeled with Rhodamine-Phalloidin. Myosin from ventricles of adult zebrafish is extracted directly into IVMA flow-cells. The same extraction protocol is applicable to comparably small tissue pieces, such as those from zebrafish tail, mouse, and frog muscle. 
After addition of the fluorescently labeled F-actin from zebrafish—or other origin—and ATP, sliding movement can be visualized using a fluorescence microscope and an intensified CCD camera. Taken together, we introduce a method for functional analysis in zebrafish cardiac and skeletal muscle research to study mutations at the molecular level of thick or thin filament proteins. Additionally, preliminary data indicate the usefulness of the presented method to perform the IVMA with myosin extracted from muscles of other animal models. PMID:28620318
40 CFR Appendix W to Part 51 - Guideline on Air Quality Models
Code of Federal Regulations, 2013 CFR
2013-07-01
... in the Guideline. The third activity is the extensive on-going research efforts by EPA and others in... addition, findings from ongoing research programs, new model development, or results from model evaluations... shown that the model is not biased toward underestimates; and v. A protocol on methods and procedures to...
40 CFR Appendix W to Part 51 - Guideline on Air Quality Models
Code of Federal Regulations, 2012 CFR
2012-07-01
... in the Guideline. The third activity is the extensive on-going research efforts by EPA and others in... addition, findings from ongoing research programs, new model development, or results from model evaluations... shown that the model is not biased toward underestimates; and v. A protocol on methods and procedures to...
40 CFR Appendix W to Part 51 - Guideline on Air Quality Models
Code of Federal Regulations, 2014 CFR
2014-07-01
... in the Guideline. The third activity is the extensive on-going research efforts by EPA and others in... addition, findings from ongoing research programs, new model development, or results from model evaluations... shown that the model is not biased toward underestimates; and v. A protocol on methods and procedures to...
Dendritic Immunotherapy Improvement for an Optimal Control Murine Model
Chimal-Eguía, J. C.; Castillo-Montiel, E.
2017-01-01
Therapeutic protocols in immunotherapy are usually proposed following the intuition and experience of the therapist. Here, mathematical modeling, optimal control, and simulation are used instead to deduce such protocols. The clinical efficacy of dendritic cell (DC) vaccines for cancer treatment is still unclear, since dendritic cells face several obstacles in the host environment, such as immunosuppression and poor transference to the lymph nodes, reducing the vaccine's effect. In view of that, we have created a mathematical murine model to measure the effects of dendritic cell injections while accounting for such obstacles. In addition, the model considers a therapy given by bolus injections of short duration, as opposed to a continuous dose. Dose timing defines the therapeutic protocols, which in turn are improved to minimize the tumor mass by an optimal control algorithm. We intend to supplement the therapist's experience and intuition in the protocol's implementation. Experimental results from melanoma-bearing mice, with and without therapy, agree with the model. It is shown that the percentage of dendritic cells that manages to reach the lymph nodes has a crucial impact on the therapy outcome, suggesting that efforts to find better methods of delivering DC vaccines should be pursued. PMID:28912828
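The distinction between bolus dosing and a continuous dose can be illustrated with a deliberately tiny two-variable tumor-effector model; the equations and parameters below are illustrative assumptions, not the paper's calibrated murine model:

```python
def simulate(bolus_times, dt=0.01, t_end=30.0):
    """Toy model: tumor T grows logistically and is killed by effector
    cells E, which decay exponentially and are boosted by short DC
    bolus doses (modeled as instantaneous jumps in E)."""
    r, K, k, d, dose = 0.3, 1000.0, 0.02, 0.4, 50.0  # illustrative
    T, E, t = 50.0, 0.0, 0.0
    boluses = sorted(bolus_times)
    i = 0
    while t < t_end:
        if i < len(boluses) and t >= boluses[i]:
            E += dose          # bolus: a jump, not a continuous infusion
            i += 1
        dT = r * T * (1.0 - T / K) - k * E * T
        T = max(T + dt * dT, 0.0)
        E -= dt * d * E
        t += dt
    return T

untreated = simulate([])
treated = simulate([2.0, 6.0, 10.0, 14.0])   # a candidate bolus schedule
```

An optimal control algorithm would search over schedules like the one above (times and doses) to minimize the final tumor mass subject to toxicity constraints.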
Shestov, Alexander A.; Valette, Julien; Deelchand, Dinesh K.; Uğurbil, Kâmil; Henry, Pierre-Gilles
2016-01-01
Metabolic modeling of dynamic 13C labeling curves during infusion of 13C-labeled substrates allows quantitative measurements of metabolic rates in vivo. However, metabolic modeling studies performed in the brain to date have only modeled time courses of total isotopic enrichment at individual carbon positions (positional enrichments), not taking advantage of the additional dynamic 13C isotopomer information available from fine-structure multiplets in 13C spectra. Here we introduce a new 13C metabolic modeling approach using the concept of bonded cumulative isotopomers, or bonded cumomers. The direct relationship between bonded cumomers and 13C multiplets enables fitting of the dynamic multiplet data. The potential of this new approach is demonstrated using Monte Carlo simulations with a brain two-compartment neuronal-glial model. The precisions of the positional and cumomer approaches are compared for two different metabolic models (with and without glutamine dilution) and for different infusion protocols ([1,6-13C2]glucose, [1,2-13C2]acetate, and double infusion [1,6-13C2]glucose + [1,2-13C2]acetate). In all cases, the bonded cumomer approach gives better precision than the positional approach. In addition, of the three different infusion protocols considered here, the double infusion protocol combined with dynamic bonded cumomer modeling appears the most robust for precise determination of all fluxes in the model. The concepts and simulations introduced in the present study set the foundation for taking full advantage of the available dynamic 13C multiplet data in metabolic modeling. PMID:22528840
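For orientation, the simplest positional-enrichment model is a single well-mixed pool fed at a constant metabolic flux by a fully labeled precursor; its labeling curve rises exponentially with a time constant set by pool size over flux. A minimal sketch (illustrative numbers, not the paper's two-compartment neuronal-glial model):

```python
def label_curve(v_flux, pool_size, t_end=60.0, dt=0.01):
    """Toy positional-enrichment time course: fractional enrichment E
    of a product pool fed at flux v_flux (umol/g/min) by a precursor
    fully 13C-labeled from t = 0:  dE/dt = (v_flux / pool_size)(1 - E).
    Forward-Euler integration over t_end minutes."""
    E, t, out = 0.0, 0.0, []
    while t <= t_end:
        out.append(E)
        E += dt * (v_flux / pool_size) * (1.0 - E)
        t += dt
    return out

curve = label_curve(v_flux=0.7, pool_size=10.0)   # turnover time ~14 min
```

Cumomer modeling generalizes this by tracking labeling of bonded carbon pairs rather than single positions, so the fine-structure multiplets carry extra constraints on the fluxes.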
Comparison of protocols measuring diffusion and partition coefficients in the stratum corneum
Rothe, H.; Obringer, C.; Manwaring, J.; Avci, C.; Wargniez, W.; Eilstein, J.; Hewitt, N.; Cubberley, R.; Duplan, H.; Lange, D.; Jacques‐Jamin, C.; Klaric, M.; Schepky, A.
2017-01-01
Partition (K) and diffusion (D) coefficients are important to measure for the modelling of skin penetration of chemicals through the stratum corneum (SC). We compared the feasibility of three protocols for the testing of 50 chemicals in our main studies, using three cosmetics-relevant model chemicals with a wide range of logP values. Protocol 1: SC concentration-depth profile using tape-stripping (measures KSC/v and DSC/HSC², where HSC is the SC thickness); Protocol 2A: incubation of isolated SC with chemical (direct measurement of KSC/v only); and Protocol 2B: diffusion through isolated SC mounted on a Franz cell (measures KSC/v and DSC/HSC², and is based on Fick's laws). KSC/v values for caffeine and resorcinol using Protocols 1 and 2B were within 30% of each other, values using Protocol 2A were ~two-fold higher, and all values were within 10-fold of each other. Only the indirect determination of KSC/v by Protocol 2B differed from the direct measurement of KSC/v by Protocol 2A and Protocol 1 for 7-EC. The variability of KSC/v for all three chemicals was higher using Protocol 2B than Protocols 1 and 2A. DSC/HSC² values for the three chemicals were of the same order of magnitude using all three protocols. Additionally, using Protocol 1, there was very little difference between parameters measured in pig and human SC. In conclusion, KSC/v and DSC values were comparable using different methods. Pig skin might be a good surrogate for human skin for the three chemicals tested. Copyright © 2017 The Authors Journal of Applied Toxicology published by John Wiley & Sons Ltd. PMID:28139006
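The Fick's-law analysis behind a Protocol 2B-style Franz-cell experiment can be sketched as follows: the lag time of the cumulative-permeation curve gives D/H², and the steady-state flux gives the permeability coefficient, from which K follows. Variable names and the synthetic numbers are illustrative assumptions:

```python
def franz_cell_parameters(flux_ss, lag_time, conc_donor, h_sc):
    """Recover grouped SC transport parameters from steady-state
    Franz-cell data via Fick's laws (units must be consistent,
    e.g. cm, h, ug/cm^3):
        D/H^2 = 1 / (6 * lag_time)
        kp    = flux_ss / conc_donor       (permeability coefficient)
        K     = kp / ((D/H^2) * h_sc)      (since kp = K * D / H)"""
    d_over_h2 = 1.0 / (6.0 * lag_time)
    kp = flux_ss / conc_donor
    k_part = kp / (d_over_h2 * h_sc)
    return d_over_h2, kp, k_part

# Synthetic self-consistency check: K = 2, D/H^2 = 0.5 per h, H = 1e-3 cm
# => kp = K * (D/H^2) * H = 1e-3 cm/h and lag time = 1/(6 * 0.5) h.
d_over_h2, kp, k = franz_cell_parameters(
    flux_ss=1e-3 * 100.0,   # kp times a 100 ug/cm^3 donor concentration
    lag_time=1.0 / 3.0,
    conc_donor=100.0,
    h_sc=1e-3)
```

Note that tape-stripping (Protocol 1) instead fits the concentration-depth profile to the transient diffusion solution, but it estimates the same grouped quantities KSC/v and DSC/HSC².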
Iyama, Yuji; Nakaura, Takeshi; Yokoyama, Koichi; Kidoh, Masafumi; Harada, Kazunori; Oda, Seitaro; Tokuyasu, Shinichi; Yamashita, Yasuyuki
This study aimed to evaluate the feasibility of a low-contrast, low-radiation-dose protocol of 80 peak kilovoltage (kVp) with prospective electrocardiography-gated cardiac computed tomography (CT) using knowledge-based iterative model reconstruction (IMR). Thirty patients underwent an 80-kVp prospective electrocardiography-gated cardiac CT with a low contrast agent dose (222 mg iodine per kilogram of body weight). We also enrolled 30 consecutive patients who were scanned with a 120-kVp cardiac CT with filtered back projection using the standard contrast agent dose (370 mg iodine per kilogram of body weight) as a historical control group. We evaluated the radiation dose for the 2 groups. The 80-kVp images were reconstructed with filtered back projection (protocol A), hybrid iterative reconstruction (HIR, protocol B), and IMR (protocol C). We compared CT numbers, image noise, and contrast-to-noise ratio among the 120-kVp protocol, protocol A, protocol B, and protocol C. In addition, we compared the noise reduction rate between HIR and IMR. Two independent readers compared image contrast, image noise, image sharpness, unfamiliar image texture, and overall image quality among the 4 protocols. The estimated effective dose of the 80-kVp protocol was 74% lower than that of the 120-kVp protocol (1.4 vs 5.4 mSv). The contrast-to-noise ratio of protocol C was significantly higher than that of protocol A. The noise reduction rate of IMR was significantly higher than that of HIR (P < 0.01). There was no significant difference in almost all qualitative image quality scores between the 120-kVp protocol and protocol C except for image contrast. An 80-kVp protocol with IMR yields higher image quality with a 74% decrease in radiation dose and a 40% decrease in contrast agent dose compared with a 120-kVp protocol, while further reducing image noise compared with the 80-kVp protocol with HIR.
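The quantitative comparisons in this abstract reduce to a few simple ratios; a sketch with illustrative Hounsfield-unit and noise values (only the 1.4 and 5.4 mSv dose figures come from the abstract):

```python
def cnr(hu_roi, hu_background, noise_sd):
    """Contrast-to-noise ratio of a target ROI against background."""
    return (hu_roi - hu_background) / noise_sd

def noise_reduction_rate(noise_ref, noise_recon):
    """Fractional noise reduction of a reconstruction vs. a reference."""
    return (noise_ref - noise_recon) / noise_ref

# Illustrative numbers (not the study's measurements): at fixed
# contrast, halving the noise doubles the CNR.
cnr_fbp = cnr(550.0, 50.0, 40.0)   # protocol A-style reconstruction
cnr_imr = cnr(550.0, 50.0, 20.0)   # protocol C-style reconstruction

# Dose figures quoted in the abstract: 1.4 vs 5.4 mSv -> ~74% lower.
dose_cut = (5.4 - 1.4) / 5.4
```

This is why model-based iterative reconstruction can compensate for the higher noise of a low-kVp, low-contrast acquisition.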
Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig
2018-06-16
To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
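The surface-comparison metrics named above (maximum deviation, mean deviation, and root-mean-square of the deviations) can be computed from the per-point deviations of a superimposed test model; a minimal sketch with made-up deviation values:

```python
import math

def deviation_stats(devs):
    """Maximum, mean, and RMS of absolute point deviations between a
    superimposed test CAD model and the reference model."""
    a = [abs(d) for d in devs]
    mx = max(a)
    mean = sum(a) / len(a)
    rms = math.sqrt(sum(d * d for d in a) / len(a))
    return mx, mean, rms

# Illustrative signed deviations (mm) after best-fit alignment
mx, mean, rms = deviation_stats([-0.3, 0.1, 0.4, -0.2])
```

The RMS always equals or exceeds the mean absolute deviation, so it is the more sensitive of the two summary metrics to scattered large deviations.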
A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2011-01-01
This paper presents a self-stabilizing distributed clock synchronization protocol in the absence of faults in the system. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. We present an outline of a deductive proof of the correctness of the protocol. A bounded model of the protocol was mechanically verified for a variety of topologies, and results of the mechanical proof of correctness are provided. The model checking results verify the correctness of the protocol as it applies to networks with unidirectional and bidirectional links, and confirm the claims of determinism and linear convergence. As a result, we conjecture that the protocol solves the general case of this problem. We also present several variations of the protocol and argue that this synchronization protocol is indeed an emergent system.
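To make the setting concrete, here is a generic averaging sketch of clock convergence on a digraph. This is emphatically not the paper's protocol (which is message-based, anonymous, and self-stabilizing under a deductive proof); it only illustrates how local exchanges on a 1-connected digraph can drive arbitrary initial clocks to a common value:

```python
def synchronize(adj, clocks, rounds=200):
    """Generic illustration: each node repeatedly resets its clock to
    the mean of its own value and those of its in-neighbors. On a
    strongly connected digraph this converges to a common value."""
    n = len(clocks)
    for _ in range(rounds):
        clocks = [(clocks[i] + sum(clocks[j] for j in adj[i])) / (1 + len(adj[i]))
                  for i in range(n)]
    return clocks

# A 1-connected ring digraph: node i hears only node (i - 1) mod n
adj = {0: [3], 1: [0], 2: [1], 3: [2]}
final = synchronize(adj, [0.0, 5.0, 9.0, 2.0])
spread = max(final) - min(final)
```

Even this naive scheme needs strong connectivity, mirroring the paper's non-partitioned digraph assumption; the hard parts the actual protocol solves (arbitrary initial state, anonymity, bounded deterministic convergence) are not captured here.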
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uribe, Eva C; Sandoval, M Analisa; Sandoval, Marisa N
2009-01-01
With the 6 January 2009 entry into force of the Additional Protocol by the United States of America, all five declared Nuclear Weapon States that are party to the Nonproliferation Treaty have signed, ratified, and put into force the Additional Protocol. This paper compares the strengths and weaknesses of the five Additional Protocols in force by the five Nuclear Weapon States with respect to the benefits to international nonproliferation aims. This paper also documents the added safeguards burden that these Additional Protocols place on the five declared Nuclear Weapon States with respect to access to their civilian nuclear programs and the hosting of complementary access activities as part of the Additional Protocol.
2015-02-06
PROTOCOL #: FDG20140008A; DATE: 6 February 2015; PROTOCOL TITLE: A Pilot Study of Common Bile Duct Reconstruction with... ...obstruction or bile peritonitis; this was reported to the IACUC chair. We developed a porcine model of common bile duct injury and interposition grafting and gained experience managing these patients.
Kijowski, Richard; Blankenbaker, Donna G; Munoz Del Rio, Alejandro; Baer, Geoffrey S; Graf, Ben K
2013-05-01
To determine whether the addition of a T2 mapping sequence to a routine magnetic resonance (MR) imaging protocol could improve diagnostic performance in the detection of surgically confirmed cartilage lesions within the knee joint at 3.0 T. This prospective study was approved by the institutional review board, and the requirement to obtain informed consent was waived. The study group consisted of 150 patients (76 male and 74 female patients with an average age of 41.2 and 41.5 years, respectively) who underwent MR imaging and arthroscopy of the knee joint. MR imaging was performed at 3.0 T by using a routine protocol with the addition of a sagittal T2 mapping sequence. Images from all MR examinations were reviewed in consensus by two radiologists before surgery to determine the presence or absence of cartilage lesions on each articular surface, first by using the routine MR protocol alone and then by using the routine MR protocol with T2 maps. Each articular surface was then evaluated at arthroscopy. Generalized estimating equation models were used to compare the sensitivity and specificity of the routine MR imaging protocol with and without T2 maps in the detection of surgically confirmed cartilage lesions. The sensitivity and specificity in the detection of 351 cartilage lesions were 74.6% and 97.8%, respectively, for the routine MR protocol alone and 88.9% and 93.1% for the routine MR protocol with T2 maps. Differences in sensitivity and specificity were statistically significant (P < .001). The addition of T2 maps to the routine MR imaging protocol significantly improved the sensitivity in the detection of 24 areas of cartilage softening (from 4.2% to 62%, P < .001), 41 areas of cartilage fibrillation (from 20% to 66%, P < .001), and 96 superficial partial-thickness cartilage defects (from 71% to 88%, P = .004). 
The addition of a T2 mapping sequence to a routine MR protocol at 3.0 T improved sensitivity in the detection of cartilage lesions within the knee joint from 74.6% to 88.9%, with only a small reduction in specificity. The greatest improvement in sensitivity with use of the T2 maps was in the identification of early cartilage degeneration. © RSNA, 2013.
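The headline numbers above follow directly from the confusion-table definitions of sensitivity and specificity. A sketch that back-calculates the approximate lesion counts implied by the reported percentages (the counts themselves are our inference, not reported in the abstract):

```python
def sensitivity(tp, fn):
    """Fraction of surgically confirmed lesions detected on MR."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of normal surfaces correctly read as normal."""
    return tn / (tn + fp)

# Of the 351 confirmed lesions, 74.6% sensitivity implies ~262 detected
# with the routine protocol, and 88.9% implies ~312 with T2 maps.
sens_routine = sensitivity(262, 351 - 262)
sens_t2 = sensitivity(312, 351 - 312)
```

The small drop in specificity (97.8% to 93.1%) reflects the usual trade-off: a more sensitive reading criterion flags more true lesions at the cost of a few more false positives on normal surfaces.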
Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de
2016-01-01
Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences between the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.
Feine, Ilan; Shpitzen, Moshe; Geller, Boris; Salmon, Eran; Peleg, Tsach; Roth, Jonathan; Gafny, Ron
2017-07-01
Electrical tapes (ETs) are a common component of improvised explosive devices (IEDs) used by terrorists or criminal organizations and represent a valuable forensic resource for DNA and latent fingerprints recovery. However, DNA recovery rates are typically low and usually below the minimal amount required for amplification. In addition, most DNA extraction methods are destructive and do not allow further latent fingerprints development. In the present study a cell culture based touch DNA model was used to demonstrate a two-step acetone-water DNA recovery protocol from ETs. This protocol involves only the adhesive side of the ET and increases DNA recovery rates by up to 70%. In addition, we demonstrated partially successful latent fingerprints development from the non-sticky side of the ETs. Taken together, this protocol maximizes the forensic examination of ETs and is recommended for routine casework processing. Copyright © 2017 Elsevier B.V. All rights reserved.
Targeting War-Sustaining Capability at Sea: Compatibility with Additional Protocol I
2009-07-01
however, it is an aspirational “model code” proposed by a non-governmental organization, not a binding international agreement. See SAN REMO MANUAL...limits of the law of war at sea. Moreover, the Nuremberg court’s decision to convict Dönitz of violating international law but not to inflict... Nuremberg charged Dönitz with waging “unrestricted submarine warfare contrary to the Naval Protocol of 1936 . . . which reaffirmed the rules of submarine
A Protocol for Generating and Exchanging (Genome-Scale) Metabolic Resource Allocation Models.
Reimers, Alexandra-M; Lindhorst, Henning; Waldherr, Steffen
2017-09-06
In this article, we present a protocol for generating a complete (genome-scale) metabolic resource allocation model, as well as a proposal for how to represent such models in the systems biology markup language (SBML). Such models are used to investigate enzyme levels and achievable growth rates in large-scale metabolic networks. Although the idea of metabolic resource allocation studies has been present in the field of systems biology for some years, no guidelines for generating such a model have been published up to now. This paper presents step-by-step instructions for building a (dynamic) resource allocation model, starting with prerequisites such as a genome-scale metabolic reconstruction, through building protein and noncatalytic biomass synthesis reactions and assigning turnover rates for each reaction. In addition, we explain how one can use SBML level 3 in combination with the flux balance constraints and our resource allocation modeling annotation to represent such models.
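The idea of resource allocation under a proteome budget can be illustrated with a toy example (a sketch only; the protocol above builds genome-scale SBML models, and all numbers and names here are invented). For a linear pathway in which enzyme i caps its reaction flux at kcat_i * E_i, splitting a fixed proteome budget P optimally equalizes all the caps, which gives v* = P / sum(1/kcat_i):

```python
# Toy sketch of a metabolic resource allocation problem (illustrative only;
# the paper's protocol builds genome-scale SBML models, not this toy).
# A linear pathway of enzymes shares a fixed proteome budget P. Flux through
# enzyme i is capped at kcat_i * E_i, so the pathway flux is
# min_i(kcat_i * E_i); the optimum equalizes all caps.

def optimal_pathway_flux(kcats, proteome_budget):
    """Closed-form optimum for a linear pathway: v* = P / sum(1/kcat_i)."""
    return proteome_budget / sum(1.0 / k for k in kcats)

def enzyme_allocation(kcats, proteome_budget):
    """Enzyme levels that achieve the optimal flux (E_i = v* / kcat_i)."""
    v = optimal_pathway_flux(kcats, proteome_budget)
    return [v / k for k in kcats]

kcats = [10.0, 2.0, 5.0]   # hypothetical turnover numbers (1/s)
P = 1.0                    # arbitrary proteome budget
v_opt = optimal_pathway_flux(kcats, P)
E = enzyme_allocation(kcats, P)
print(v_opt, E, sum(E))    # slow enzymes get the largest share
```

Note how the slowest enzyme (kcat = 2) receives the largest enzyme level, the qualitative behavior genome-scale resource allocation models capture at scale.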
NASA Technical Reports Server (NTRS)
Seferian, Roland; Gehlen, Marion; Bopp, Laurent; Resplandy, Laure; Orr, James C.; Marti, Olivier; Dunne, John P.; Christian, James R.; Doney, Scott C.; Ilyina, Tatiana;
2015-01-01
During the fifth phase of the Coupled Model Intercomparison Project (CMIP5), substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically models could reproduce marine biogeochemical tracer distributions. In routine assessments, model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide more quantitatively reliable ESM results on marine biogeochemistry and carbon cycle feedbacks.
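The drift argument can be made concrete with a toy relaxation model (an illustrative sketch, not the IPSL or any CMIP5 model; all values are invented). If a tracer approaches its equilibrium value exponentially with timescale tau during spin-up, the bias remaining when the historical run starts decays with spin-up duration:

```python
# Toy illustration of spin-up drift: a biogeochemical tracer relaxing
# exponentially toward equilibrium. The offset still present when the
# assessment period starts shrinks with spin-up duration, so models spun up
# for different lengths of time begin their hindcasts with different drifts.
# All numbers are invented for illustration.
import math

def residual_bias(c0, c_eq, tau, spinup_years):
    """Remaining offset from equilibrium after a given spin-up (toy model)."""
    return (c0 - c_eq) * math.exp(-spinup_years / tau)

c0, c_eq, tau = 180.0, 200.0, 300.0   # hypothetical tracer values / timescale
for years in (100, 500, 2000):
    print(years, residual_bias(c0, c_eq, tau, years))
```

The point mirrors the abstract: a 500-year spin-up leaves a much smaller (but nonzero) residual than a 100-year one, and that residual feeds directly into assessment metrics.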
A secure distributed logistic regression protocol for the detection of rare adverse drug events
El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat
2013-01-01
Background: There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective: To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods: We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results: The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion: The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs.
We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397
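The core idea, that per-site aggregates suffice to fit the pooled model, can be sketched without the cryptography (a minimal illustration, not the authors' secure protocol; the real protocol additionally protects the exchanged aggregates with secure multi-party computation):

```python
# Sketch of distributed logistic regression (hypothetical, NOT the paper's
# cryptographic protocol): each site computes the gradient on its own
# records and sends only that aggregate to the analysis center, which sums
# the site gradients. Because the pooled gradient is the sum of per-site
# gradients, the fitted model matches a pooled-data fit.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def local_gradient(w, rows):
    """Gradient of the negative log-likelihood on one site's data."""
    g = [0.0] * len(w)
    for x, y in rows:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j, xj in enumerate(x):
            g[j] += (p - y) * xj
    return g

def fit(datasets, dim, lr=0.1, iters=2000):
    """Gradient descent where each site contributes only its aggregate."""
    w = [0.0] * dim
    for _ in range(iters):
        total = [0.0] * dim
        for site in datasets:
            for j, gj in enumerate(local_gradient(w, site)):
                total[j] += gj
        w = [wi - lr * gj for wi, gj in zip(w, total)]
    return w

site_a = [((1.0, 0.0), 0), ((1.0, 1.0), 1)]   # rows: (features, label)
site_b = [((1.0, 2.0), 1), ((1.0, -1.0), 0)]
w_dist = fit([site_a, site_b], dim=2)          # two sites, aggregates only
w_pool = fit([site_a + site_b], dim=2)         # everything at one site
print(w_dist, w_pool)                          # agree up to rounding
```

This equivalence is what lets the secure version reproduce SAS pooled-data results exactly while exchanging only protected aggregates.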
Comparison of protocols measuring diffusion and partition coefficients in the stratum corneum.
Rothe, H; Obringer, C; Manwaring, J; Avci, C; Wargniez, W; Eilstein, J; Hewitt, N; Cubberley, R; Duplan, H; Lange, D; Jacques-Jamin, C; Klaric, M; Schepky, A; Grégoire, S
2017-07-01
Partition (K) and diffusion (D) coefficients are important to measure for the modelling of skin penetration of chemicals through the stratum corneum (SC). We compared the feasibility of three protocols for the testing of 50 chemicals in our main studies, using three cosmetics-relevant model chemicals with a wide range of logP values. Protocol 1: SC concentration-depth profile using tape-stripping (measures K_SC/v and D_SC/H_SC², where H_SC is the SC thickness); Protocol 2A: incubation of isolated SC with chemical (direct measurement of K_SC/v only); and Protocol 2B: diffusion through isolated SC mounted on a Franz cell (measures K_SC/v and D_SC/H_SC², and is based on Fick's laws). K_SC/v values for caffeine and resorcinol using Protocols 1 and 2B were within 30% of each other, values using Protocol 2A were ~two-fold higher, and all values were within 10-fold of each other. Only indirect determination of K_SC/v by Protocol 2B was different from the direct measurement of K_SC/v by Protocol 2A and Protocol 1 for 7-EC. The variability of K_SC/v for all three chemicals using Protocol 2B was higher compared to Protocols 1 and 2A. D_SC/H_SC² values for the three chemicals were of the same order of magnitude using all three protocols. Additionally, using Protocol 1, there was very little difference between parameters measured in pig and human SC. In conclusion, K_SC/v and D_SC values were comparable using the different methods. Pig skin might be a good surrogate for human skin for the three chemicals tested. Copyright © 2017 The Authors. Journal of Applied Toxicology published by John Wiley & Sons Ltd.
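How the two measured quantities feed a skin-penetration estimate can be sketched with the standard Fick's-law relations (illustrative values only; kp = K·D/H and t_lag = H²/(6D) are textbook forms, not the paper's specific procedures):

```python
# Sketch of how K_SC/v and the ratio D_SC/H_SC^2 feed standard Fick's-law
# permeation quantities. The permeability coefficient is kp = K * D / H,
# and the diffusion lag time is t_lag = H^2 / (6 * D) = 1 / (6 * (D/H^2)).
# All numbers below are illustrative, not from the study.

def permeability(K, D_over_H2, H):
    """kp = K * D / H, written in terms of the measured ratio D/H^2."""
    return K * D_over_H2 * H

def lag_time(D_over_H2):
    """Diffusion lag time t_lag = H^2 / (6 D)."""
    return 1.0 / (6.0 * D_over_H2)

K = 2.0              # hypothetical SC/vehicle partition coefficient
D_over_H2 = 0.05     # hypothetical D_SC/H_SC^2 in 1/h
H = 15e-4            # SC thickness in cm (~15 um)
print(permeability(K, D_over_H2, H), lag_time(D_over_H2))
```

Because t_lag depends only on D/H², the ratio the protocols measure directly, the SC thickness is only needed when converting to an absolute permeability.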
An Efficient Mutual Authentication Framework for Healthcare System in Cloud Computing.
Kumar, Vinod; Jangirala, Srinivas; Ahmad, Musheer
2018-06-28
Telecare Medicine Information Systems (TMIS) increasingly make it possible for patients to explore medical treatment and to store and access their medical data over the internet. Security and privacy preservation are necessary for patients' medical data in TMIS because of their highly sensitive nature. Recently, Mohit et al. proposed a mutual authentication protocol for TMIS in the cloud computing environment. In this work, we reviewed their protocol and found that it is vulnerable to stolen-verifier, many-logged-in-patients, and impersonation attacks, fails to provide patient anonymity, and does not protect the session key. To raise the security level, we propose a new mutual authentication protocol for the same environment. The presented framework is also more efficient in terms of computation cost. In addition, the security evaluation shows that the protocol is resilient against all the security attacks considered, and we also present a formal security evaluation based on the random oracle model. The performance of the proposed protocol is much better than that of the existing protocol.
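A generic HMAC-based challenge-response exchange illustrates the kind of mutual authentication being discussed (a hypothetical sketch, not the protocol proposed in the paper, which adds further mechanisms for anonymity and session-key protection):

```python
# Generic HMAC challenge-response sketch of mutual authentication
# (illustrative only; NOT the paper's protocol). Each side proves knowledge
# of the shared key by MACing the peer's fresh nonce, and a session key is
# derived from both nonces.
import hmac, hashlib, os

def respond(key, nonce):
    """Prove knowledge of the shared key by MACing the peer's nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def session_key(key, nonce_a, nonce_b):
    """Derive a fresh session key bound to both parties' nonces."""
    return hmac.new(key, b"session" + nonce_a + nonce_b, hashlib.sha256).digest()

key = os.urandom(32)                             # shared long-term secret
n_patient, n_server = os.urandom(16), os.urandom(16)

server_proof = respond(key, n_patient)           # server answers patient's challenge
patient_proof = respond(key, n_server)           # patient answers server's challenge
forged = respond(os.urandom(32), n_patient)      # attacker without the key

ok = hmac.compare_digest(server_proof, respond(key, n_patient))
print(ok, server_proof != forged)
```

Note that this sketch transmits no identity in the clear but also makes no attempt at the anonymity or stolen-verifier resistance the paper targets; it only shows the challenge-response skeleton.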
Ogawa, Takako; Misumi, Masahiro; Sonoike, Kintake
2017-09-01
Cyanobacteria are photosynthetic prokaryotes and are widely used as model organisms for photosynthesis research. Partly due to their prokaryotic nature, however, estimation of photosynthesis by chlorophyll fluorescence measurements is sometimes problematic in cyanobacteria. For example, the plastoquinone pool is reduced in dark-acclimated samples of many cyanobacterial species, so conventional protocols developed for land plants cannot be directly applied to cyanobacteria. Even for the estimation of the simplest chlorophyll fluorescence parameter, F_v/F_m, some additional protocol, such as the addition of DCMU or illumination with weak blue light, is necessary. In this review, those problems in the measurement of chlorophyll fluorescence in cyanobacteria are introduced, and solutions to those problems are given.
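The parameter itself is simple to compute once valid F_0 and F_m readings are obtained (illustrative values below; the cyanobacteria-specific difficulty discussed above is in measuring F_m correctly, not in the arithmetic):

```python
# F_v/F_m = (F_m - F_0) / F_m, the maximum quantum yield of photosystem II.
# In cyanobacteria, F_m must be measured under a treatment (e.g. DCMU or
# weak blue light) that first oxidizes the plastoquinone pool; the numbers
# below are purely illustrative.

def fv_over_fm(f0, fm):
    """Maximum PSII quantum yield from minimal (F_0) and maximal (F_m) fluorescence."""
    if fm <= 0 or fm < f0:
        raise ValueError("require 0 < F_0 <= F_m")
    return (fm - f0) / fm

print(fv_over_fm(0.2, 1.0))   # made-up readings
```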
A Model of In vitro Plasticity at the Parallel Fiber—Molecular Layer Interneuron Synapses
Lennon, William; Yamazaki, Tadashi; Hecht-Nielsen, Robert
2015-01-01
Theoretical and computational models of the cerebellum typically focus on the role of parallel fiber (PF)—Purkinje cell (PKJ) synapses for learned behavior, but few emphasize the role of the molecular layer interneurons (MLIs)—the stellate and basket cells. A number of recent experimental results suggest the role of MLIs is more important than previous models put forth. We investigate learning at PF—MLI synapses and propose a mathematical model to describe plasticity at this synapse. We perform computer simulations with this form of learning using a spiking neuron model of the MLI and show that it reproduces six in vitro experimental results in addition to simulating four novel protocols. Further, we show how this plasticity model can predict the results of other experimental protocols that are not simulated. Finally, we hypothesize what the biological mechanisms are for changes in synaptic efficacy that embody the phenomenological model proposed here. PMID:26733856
Dewan, Vinay; Lambert, Dennis; Edler, Joshua; Kymes, Steven; Apte, Rajendra S
2012-08-01
Perform a cost-effectiveness analysis of the treatment of diabetic macular edema (DME) with ranibizumab plus prompt or deferred laser versus triamcinolone plus prompt laser. Data for the analysis were drawn from reports of the Diabetic Retinopathy Clinical Research Network (DRCRnet) Protocol I. Computer simulation based on Protocol I data. Analyses were conducted from the payor perspective. Simulated participants were assigned characteristics reflecting those seen in Protocol I. Markov models were constructed to replicate Protocol I's 104-week outcomes using a microsimulation approach to estimation. Baseline characteristics, visual acuity (VA), treatments, and complications were based on Protocol I data. Costs were identified by literature search. One-way sensitivity analysis was performed, and the results were validated against Protocol I data. Direct cost of care for 2 years, change in VA from baseline, and incremental cost-effectiveness ratio (ICER) measured as cost per additional letter gained from baseline (Early Treatment of Diabetic Retinopathy Study). For sham plus laser (S+L), ranibizumab plus prompt laser (R+pL), ranibizumab plus deferred laser (R+dL), and triamcinolone plus laser (T+L), effectiveness through 104 weeks was predicted to be 3.46, 7.07, 8.63, and 2.40 letters correct, respectively. The ICER values in terms of dollars per VA letter were $393 (S+L vs. T+L), $5943 (R+pL vs. S+L), and $20 (R+dL vs. R+pL). For pseudophakics, the ICER value for the comparison of triamcinolone with laser versus ranibizumab with deferred laser was $14,690 per letter gained. No clinically relevant changes in model variables altered outcomes. Internal validation demonstrated good similarity to Protocol I treatment patterns. In the treatment of phakic patients with DME, ranibizumab with deferred laser provided an additional 6 letters correct compared with triamcinolone with laser at an additional cost of $19,216 over 2 years.
That would indicate that if the gain in VA seen at 2 years is maintained in subsequent years, then the treatment of phakic patients with DME using ranibizumab may meet accepted standards of cost-effectiveness. For pseudophakic patients, first-line treatment with triamcinolone seems to be the most cost-effective option. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
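The ICER arithmetic underlying these comparisons is simply the cost difference divided by the effect difference; the sketch below uses the phakic head-to-head numbers from the abstract as an illustration (the published ICERs are computed incrementally along the efficiency frontier of all four arms, so they differ from this simple two-arm ratio):

```python
# Incremental cost-effectiveness ratio: ICER = delta cost / delta effect.
# Example numbers echo the phakic comparison in the abstract (about $19,216
# more for about 6 more ETDRS letters over 2 years); treat them as
# illustrative of the arithmetic only.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Dollars per additional unit of effect (here, per ETDRS letter)."""
    d_effect = effect_new - effect_old
    if d_effect == 0:
        raise ZeroDivisionError("no incremental effect")
    return (cost_new - cost_old) / d_effect

extra_cost_per_letter = icer(19216.0, 0.0, 6.0, 0.0)
print(round(extra_cost_per_letter))
```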
Autoimmune Myocarditis, Valvulitis, and Cardiomyopathy
Myers, Jennifer M.; Cunningham, Madeleine W.; Fairweather, DeLisa; Huber, Sally A.
2013-01-01
Cardiac myosin-induced autoimmune myocarditis (EAM) is a model of inflammatory heart disease initiated by CD4+ T cells (Smith and Allen 1991; Li, Heuser et al. 2004). It is a paradigm of the immune-mediated cardiac damage believed to play a role in the pathogenesis of a subset of postinfectious human cardiomyopathies (Rose, Herskowitz et al. 1993). Myocarditis is induced in susceptible mice by immunization with purified cardiac myosin (Neu, Rose et al. 1987) or specific peptides derived from cardiac myosin (Donermeyer, Beisel et al. 1995; Pummerer, Luze et al. 1996) (see Basic Protocol 1), or by adoptive transfer of myosin-reactive T cells (Smith and Allen 1991) (see Alternate Protocol). Myocarditis has been induced in Lewis rats by immunization with purified rat or porcine cardiac myosin (Kodama, Matsumoto et al. 1990; Li, Heuser et al. 2004) (see Basic Protocol 2) or S2-16 peptide (Li, Heuser et al. 2004), or by adoptive transfer of T cells stimulated by specific peptides derived from cardiac myosin (Wegmann, Zhao et al. 1994). Myocarditis begins 12 to 14 days after the first immunization, and is maximal after 21 days. Other animal models commonly used to study myocarditis development include the pathogen-induced models in which disease is initiated by viral infection. The first murine model of acute viral myocarditis causes sudden death via viral damage to cardiomyocytes (Huber, Gauntt et al. 1998; Horwitz, La Cava et al. 2000; Fong 2003; Fuse, Chan et al. 2005; Fairweather and Rose 2007; Cihakova and Rose 2008) whereas the second model is based on inoculation with heart-passaged coxsackievirus B3 (CVB3) that includes damaged heart proteins (Fairweather, Frisancho-Kiss et al. 
2004; Fairweather D 2004; Fairweather and Rose 2007; Cihakova and Rose 2008). In addition to the protocols used to induce EAM in mice and rats, support protocols are included for preparing purified cardiac myosin using mouse or rat heart tissue (see Support Protocol 1), preparing purified cardiac myosin for injection (see Support Protocol 2), and collecting and assessing hearts by histopathological means (see Support Protocol 3). PMID:23564686
48 CFR 204.470 - U.S.-International Atomic Energy Agency Additional Protocol.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false U.S.-International Atomic Energy Agency Additional Protocol. 204.470 Section 204.470 Federal Acquisition Regulations System DEFENSE... Information Within Industry 204.470 U.S.-International Atomic Energy Agency Additional Protocol. ...
48 CFR 204.470 - U.S.-International Atomic Energy Agency Additional Protocol.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false U.S.-International Atomic Energy Agency Additional Protocol. 204.470 Section 204.470 Federal Acquisition Regulations System DEFENSE... Information Within Industry 204.470 U.S.-International Atomic Energy Agency Additional Protocol. ...
48 CFR 204.470 - U.S.-International Atomic Energy Agency Additional Protocol.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false U.S.-International Atomic Energy Agency Additional Protocol. 204.470 Section 204.470 Federal Acquisition Regulations System DEFENSE... Information Within Industry 204.470 U.S.-International Atomic Energy Agency Additional Protocol. ...
48 CFR 204.470 - U.S.-International Atomic Energy Agency Additional Protocol.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false U.S.-International Atomic Energy Agency Additional Protocol. 204.470 Section 204.470 Federal Acquisition Regulations System DEFENSE... Information Within Industry 204.470 U.S.-International Atomic Energy Agency Additional Protocol. ...
48 CFR 204.470 - U.S.-International Atomic Energy Agency Additional Protocol.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false U.S.-International Atomic Energy Agency Additional Protocol. 204.470 Section 204.470 Federal Acquisition Regulations System DEFENSE... Information Within Industry 204.470 U.S.-International Atomic Energy Agency Additional Protocol. ...
15 CFR 781.2 - Purposes of the Additional Protocol and APR.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE ADDITIONAL PROTOCOL... and less any information to which the U.S. Government applies the national security exclusion, is... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Purposes of the Additional Protocol...
A four-phase strategy for the implementation of reflectance confocal microscopy in dermatology.
Hoogedoorn, L; Gerritsen, M J P; Wolberink, E A W; Peppelman, M; van de Kerkhof, P C M; van Erp, P E J
2016-08-01
Reflectance confocal microscopy (RCM) is gradually being implemented in dermatology, but strategies for further implementation and practical 'hands on' guidelines are lacking. The primary outcome was a general strategy for the further implementation of RCM. The secondary outcome was the diagnosis of psoriasis and the differentiation of stable from unstable psoriatic plaques by means of the 'hands on' protocol derived from the strategy. We used a four-phase model: an exploring phase, a systematic literature search, a clinical approach and, finally, an integration phase to develop a clinical guideline for RCM in psoriasis. Receiver operating characteristic (ROC) curve analysis was applied to determine the accuracy of the diagnosis of unstable psoriasis. A general strategy for the further implementation of RCM and a practical approach were developed to examine psoriasis by RCM and to distinguish stable from unstable psoriasis. Unstable psoriasis was diagnosed by epidermal inflammatory cell counts with a sensitivity and specificity of 91.7% and 98.3%, respectively, and with an accuracy of 0.92 (area under the curve). In addition, a monitoring model was proposed. This is the first study to show a method for the implementation of RCM in dermatology. The strategy and 'hands on' protocol for psoriasis may serve as a model for other dermatological entities and may additionally lead to specialized ready-to-use RCM protocols for clinical dermatological practice. © 2016 European Academy of Dermatology and Venereology.
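The quoted accuracy figures come from standard confusion-matrix arithmetic; the counts below are invented to reproduce the reported 91.7% sensitivity and 98.3% specificity and are not from the study:

```python
# Sensitivity and specificity from a confusion matrix. The counts are
# hypothetical, chosen so the ratios match the percentages reported in the
# abstract (91.7% sensitivity, 98.3% specificity for unstable psoriasis).

def sensitivity(tp, fn):
    """True-positive rate: tp / (tp + fn)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: tn / (tn + fp)."""
    return tn / (tn + fp)

tp, fn = 11, 1     # hypothetical: 11 of 12 unstable plaques detected
tn, fp = 59, 1     # hypothetical: 59 of 60 stable plaques ruled out
print(sensitivity(tp, fn), specificity(tn, fp))
```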
Dose uniformity analysis among ten 16-slice same-model CT scanners.
Erdi, Yusuf Emre
2012-01-01
With the introduction of multislice scanners, computed tomographic (CT) dose optimization has become important. The patient-absorbed dose may differ among scanners even when they are of the same type and model. To investigate the dose output variation of CT scanners, we designed this study to analyze the dose outputs of 10 same-model CT scanners using 3 clinical protocols. Ten GE Lightspeed (GE Healthcare, Waukesha, Wis) 16-slice scanners located at the main campus and various satellite locations of our institution were included in this study. All dose measurements were performed using poly(methyl methacrylate) (PMMA) head (diameter, 16 cm) and body (diameter, 32 cm) phantoms manufactured by Radcal (Radcal Corp, Monrovia, Calif), using a 9095 multipurpose analyzer with a 10 × 9-3CT ion chamber, both from the same manufacturer. The ion chamber was inserted into the peripheral and central axis locations, and the volume CT dose index (CTDIvol) was calculated as the weighted average of the doses at those locations. Three clinical protocol settings for adult head, high-resolution chest, and adult abdomen were used for the dose measurements. We observed up to 9.4% CTDIvol variation for the adult head protocol, in which the largest variation among the protocols occurred. However, the head protocol uses higher milliampere-second values than the other 2 protocols. Most of the measured values were less than the system-stored CTDIvol values. It is important to note that a reduction in dose output from tubes as they age is expected, in addition to the intrinsic radiation output fluctuations of the same scanner. Although same-model CT scanners were used in this study, it is possible to see CTDIvol variation in standard patient scanning protocols for the head, chest, and abdomen. The compound effect of the dose variation may be larger with higher milliamperage and multiphase, multilocation CT scans.
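The CTDIvol computation described above follows the standard weighted-average definition (a sketch with illustrative chamber readings; the study's actual values are in the text):

```python
# Standard CTDI weighting (IEC-style definitions; readings are illustrative).
# The weighted CTDI combines the central and peripheral phantom readings,
# and CTDIvol divides by pitch for helical scans.

def ctdi_w(center_mGy, peripheral_mGy):
    """CTDI_w = (1/3) * center + (2/3) * peripheral."""
    return center_mGy / 3.0 + 2.0 * peripheral_mGy / 3.0

def ctdi_vol(center_mGy, peripheral_mGy, pitch=1.0):
    """CTDI_vol = CTDI_w / pitch (equals CTDI_w for axial scans)."""
    return ctdi_w(center_mGy, peripheral_mGy) / pitch

print(ctdi_vol(30.0, 60.0))          # hypothetical head-phantom readings
print(ctdi_vol(10.0, 20.0, 1.375))   # hypothetical body readings, pitch 1.375
```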
Reconstitution of mouse oogenesis in a dish from pluripotent stem cells.
Hayashi, Katsuhiko; Hikabe, Orie; Obata, Yayoi; Hirao, Yuji
2017-09-01
This protocol is an extension to: Nat. Protoc. 8, 1513-1524 (2013); doi: 10.1038/nprot.2013.090; published online 11 July 2013. Generation of functional oocytes in culture from pluripotent stem cells should provide a useful model system for improving our understanding of the basic mechanisms underlying oogenesis. In addition, it has potential applications as an alternative source of oocytes for reproduction. Using the mouse, the most advanced model in regard to reproductive engineering and stem cell biology, we previously developed a culture method that produces functional primordial germ cells from pluripotent cells, as described in the original protocol. This Protocol Extension adapts the existing protocol so that oogenesis also occurs in vitro, thus substantially modifying the technique. Oocytes generated from embryonic stem cells (ESCs) or induced pluripotent stem cells give rise to healthy pups. Here, we describe the protocol for oocyte generation in culture. The protocol is mainly composed of three different culture stages: in vitro differentiation (IVDi), in vitro growth (IVG), and in vitro maturation (IVM), which in total take ∼5 weeks. In each culture period, there are several checkpoints that enable the number of oocytes being produced in the culture to be monitored. The basic structure of the culture system should provide a useful tool for clarifying the complicated sequence of oogenesis in mammals.
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
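The circuit family being sampled can be simulated by brute force for a toy instance (which is exactly the approach conjectured to fail at scale; the phases below are arbitrary, and this sketch omits the MQC byproduct operators discussed above):

```python
# Brute-force simulation of a tiny IQP-type circuit: H on all qubits, a
# diagonal phase layer, H again, then measurement. The output amplitude is
# amp_y = (1/2^n) * sum_x (-1)^{x.y} e^{i*theta_x}. This direct simulation
# is what is conjectured to become classically intractable as n grows;
# the phases here are arbitrary toy values.
import cmath, math

def iqp_distribution(phases):
    """Output probabilities of H^n . diag(e^{i*theta_x}) . H^n |0...0>."""
    n_states = len(phases)              # must be a power of two (2^n)
    probs = []
    for y in range(n_states):
        amp = 0j
        for x in range(n_states):
            sign = -1 if bin(x & y).count("1") % 2 else 1
            amp += sign * cmath.exp(1j * phases[x])
        probs.append(abs(amp / n_states) ** 2)
    return probs

probs = iqp_distribution([0.0, math.pi / 4, math.pi / 2, math.pi / 8])
print([round(p, 4) for p in probs])    # a valid probability distribution
```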
de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...
2016-05-03
We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence within C–IBI compared to IBI. To validate the robustness, we apply C–IBI to test cases of the solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
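The update step that both IBI and C-IBI iterate can be sketched as follows (a minimal illustration with invented arrays; C-IBI replaces the radial distribution function target with the pair-wise cumulative coordination, but the Boltzmann-inverse correction has the same form):

```python
# Core update of iterative Boltzmann inversion: the pair potential is
# corrected by the Boltzmann inverse of the ratio between the current and
# target distributions, V_{i+1}(r) = V_i(r) + kT * ln(g_i(r) / g_tgt(r)).
# C-IBI uses the cumulative coordination as the target instead of g(r).
# Arrays below are invented for illustration.
import math

KB_T = 1.0  # energy in units of kT

def ibi_update(V, g_current, g_target):
    """One IBI iteration over a tabulated potential."""
    return [v + KB_T * math.log(gc / gt)
            for v, gc, gt in zip(V, g_current, g_target)]

V = [0.0, -0.5, -0.1]        # current tabulated potential (kT units)
g_cur = [0.8, 1.5, 1.0]      # current distribution from simulation
g_tgt = [1.0, 1.2, 1.0]      # target distribution
print(ibi_update(V, g_cur, g_tgt))
```

Where the simulated distribution undershoots the target, the potential becomes more attractive, and vice versa; iterating drives the coarse-grained model toward the target structure.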
Code of Federal Regulations, 2013 CFR
2013-10-01
... Atomic Energy Agency Additional Protocol. 252.204-7010 Section 252.204-7010 Federal Acquisition... Atomic Energy Agency Additional Protocol. As prescribed in 204.470-3, use the following clause....-International Atomic Energy Agency Additional Protocol (JAN 2009) (a) If the Contractor is required to report...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Atomic Energy Agency Additional Protocol. 252.204-7010 Section 252.204-7010 Federal Acquisition... Atomic Energy Agency Additional Protocol. As prescribed in 204.470-3, use the following clause....-International Atomic Energy Agency Additional Protocol (JAN 2009) (a) If the Contractor is required to report...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Atomic Energy Agency Additional Protocol. 252.204-7010 Section 252.204-7010 Federal Acquisition... Atomic Energy Agency Additional Protocol. As prescribed in 204.470-3, use the following clause....-International Atomic Energy Agency Additional Protocol (JAN 2009) (a) If the Contractor is required to report...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Atomic Energy Agency Additional Protocol. 252.204-7010 Section 252.204-7010 Federal Acquisition... Atomic Energy Agency Additional Protocol. As prescribed in 204.470-3, use the following clause....-International Atomic Energy Agency Additional Protocol (JAN 2009) (a) If the Contractor is required to report...
Cognitive Hypnotherapy as a Transdiagnostic Protocol for Emotional Disorders.
Alladin, Assen; Amundson, Jon
2016-01-01
This article describes cognitive hypnotherapy (CH), an integrative treatment that provides an evidence-based framework for synthesizing clinical practice and research. CH combines hypnotherapy with cognitive-behavior therapy in the management of emotional disorders. This blended version of clinical practice meets criteria for an assimilative model of integrative psychotherapy, which incorporates both theory and empirical findings. Issues related to (a) additive effect of hypnosis in treatment, (b) transdiagnostic consideration, and (c) unified treatment protocols in the treatment of emotional disorders are considered in light of cognitive hypnotherapy.
Imaging Tumor Cell Movement In Vivo
Entenberg, David; Kedrin, Dmitriy; Wyckoff, Jeffrey; Sahai, Erik; Condeelis, John; Segall, Jeffrey E.
2013-01-01
This unit describes the methods that we have been developing for analyzing tumor cell motility in mouse and rat models of breast cancer metastasis. Rodents are commonly used both to provide a mammalian system for studying human tumor cells (as xenografts in immunocompromised mice) as well as for following the development of tumors from a specific tissue type in transgenic lines. The Basic Protocol in this unit describes the standard methods used for generation of mammary tumors and imaging them. Additional protocols for labeling macrophages, blood vessel imaging, and image analysis are also included. PMID:23456602
Dynamic gastric digestion of a commercial whey protein concentrate†.
Miralles, Beatriz; Del Barrio, Roberto; Cueva, Carolina; Recio, Isidra; Amigo, Lourdes
2018-03-01
A dynamic gastrointestinal simulator, simgi®, has been applied to assess the gastric digestion of a whey protein concentrate. Samples collected from the outlet of the stomach were compared to those resulting from the static INFOGEST digestion protocol, developed on the basis of physiologically inferred conditions. Progress of digestion was followed by SDS-PAGE and LC-MS/MS. By SDS-PAGE, serum albumin and α-lactalbumin were no longer detectable at 30 and 60 min, respectively. On the contrary, β-lactoglobulin was visible up to 120 min, although at decreasing concentrations in the dynamic model due to gastric emptying and the addition of gastric fluids. Moreover, β-lactoglobulin was partly hydrolysed by pepsin, probably due to the presence of heat-denatured forms, and the peptides released using both digestion models were similar. Under dynamic conditions, a stepwise increase in the number of peptides over time was observed, while the static protocol generated a high number of peptides from the beginning of digestion. Whey protein digestion products using a dynamic stomach are consistent with those generated with the static protocol, but the kinetic behaviour of the peptide profile emphasises the effect of sequential pepsin addition, peristaltic shaking, and gastric emptying on protein digestibility. © 2017 Society of Chemical Industry.
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify the properties of the protocol. Through several model-simplification strategies, protocols can be modeled efficiently and the state space of the model is reduced. Compared with the previous literature, this approach achieves a higher degree of automation and better verification efficiency. Finally, based on the described method, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and can be applied to other authentication protocols.
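SPIN itself consumes Promela models, but the core technique it implements, explicit-state enumeration of a protocol's reachable states followed by a safety check over them, can be sketched in a few lines of Python. The toy challenge-response handshake and its state names below are entirely hypothetical stand-ins for a formalized authentication protocol:

```python
from collections import deque

def reachable_states(initial, transitions):
    """Breadth-first enumeration of the protocol's state space."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy challenge-response handshake: state = (initiator_phase, responder_phase)
def transitions(state):
    a, b = state
    moves = []
    if a == "idle":
        moves.append(("sent", b))          # initiator sends its challenge
    if a == "sent" and b == "idle":
        moves.append((a, "replied"))       # responder answers the challenge
    if a == "sent" and b == "replied":
        moves.append(("done", b))          # initiator verifies the reply
    return moves

states = reachable_states(("idle", "idle"), transitions)
# Safety property: the initiator never finishes before the responder replied.
assert all(not (a == "done" and b != "replied") for a, b in states)
print(len(states))  # 4 reachable states
```

Model-simplification strategies of the kind the paper describes shrink the `transitions` relation so that fewer states must be enumerated.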
Assessing Sociability, Social Memory, and Pup Retrieval in Mice.
Zimprich, Annemarie; Niessing, Jörn; Cohen, Lior; Garrett, Lillian; Einicke, Jan; Sperling, Bettina; Schmidt, Mathias V; Hölter, Sabine M
2017-12-20
Adaptive social behavior is important in mammals, both for the well-being of the individual and for the thriving of the species. Dysfunctions in social behavior occur in many neurodevelopmental and psychiatric diseases, and research into the genetic components of disease-relevant social deficits can open up new avenues for understanding the underlying biological mechanisms and therapeutic interventions. Genetically modified mouse models are particularly useful in this respect, and robust experimental protocols are needed to reliably assess relevant social behavior phenotypes. Here we describe in detail three protocols to quantitatively measure sociability, one of the most frequently investigated social behavior phenotypes in mice, using a three-chamber sociability test. These protocols can be extended to also assess social memory. In addition, we provide a detailed protocol on pup retrieval, which is a particularly robust maternal behavior amenable to various scientific questions. © 2017 John Wiley & Sons, Inc.
Sullivan, Con; Jurcyzszak, Denise; Goody, Michelle F; Gabor, Kristin A; Longfellow, Jacob R; Millard, Paul J; Kim, Carol H
2017-01-20
Each year, seasonal influenza outbreaks profoundly affect societies worldwide. In spite of global efforts, influenza remains an intractable healthcare burden. The principal strategy to curtail infections is yearly vaccination. In individuals who have contracted influenza, antiviral drugs can mitigate symptoms. There is a clear and unmet need to develop alternative strategies to combat influenza. Several animal models have been created to model host-influenza interactions. Here, protocols for generating zebrafish models for systemic and localized human influenza A virus (IAV) infection are described. Using a systemic IAV infection model, small molecules with potential antiviral activity can be screened. As a proof-of-principle, a protocol that demonstrates the efficacy of the antiviral drug Zanamivir in IAV-infected zebrafish is described. It shows how disease phenotypes can be quantified to score the relative efficacy of potential antivirals in IAV-infected zebrafish. In recent years, there has been increased appreciation for the critical role neutrophils play in the human host response to influenza infection. The zebrafish has proven to be an indispensable model for the study of neutrophil biology, with direct impacts on human medicine. A protocol to generate a localized IAV infection in the Tg(mpx:mCherry) zebrafish line to study neutrophil biology in the context of a localized viral infection is described. Neutrophil recruitment to localized infection sites provides an additional quantifiable phenotype for assessing experimental manipulations that may have therapeutic applications. Both zebrafish protocols described faithfully recapitulate aspects of human IAV infection. The zebrafish model possesses numerous inherent advantages, including high fecundity, optical clarity, amenability to drug screening, and availability of transgenic lines, including those in which immune cells such as neutrophils are labeled with fluorescent proteins.
The protocols detailed here exploit these advantages and have the potential to reveal critical insights into host-IAV interactions that may ultimately translate into the clinic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christianson, O; Winslow, J; Samei, E
2014-06-15
Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate the size-specific dose estimate (SSDE). A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicate substantial potential for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.
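The SSDE computation such a pipeline performs can be sketched as follows, assuming the conversion-factor fit published in AAPM Report 204 for the 32 cm body phantom and an effective diameter taken as the geometric mean of the AP and lateral dimensions measured from the localizers. The patient dimensions and CTDIvol below are illustrative, not values from the study:

```python
import math

# Conversion-factor fit from AAPM Report 204 (32 cm body phantom);
# effective diameter d is in cm.
A, B = 3.704369, 0.03671937

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of AP and lateral dimensions measured from localizers."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mGy, effective_diameter_cm):
    """Size-specific dose estimate: CTDIvol scaled by a size-dependent factor."""
    f = A * math.exp(-B * effective_diameter_cm)
    return ctdi_vol_mGy * f

d = effective_diameter(25.0, 35.0)   # roughly a 29.6 cm adult abdomen
print(round(ssde(10.0, d), 1))       # SSDE in mGy for CTDIvol = 10 mGy
```

For this size the conversion factor is close to 1.25, so a reported CTDIvol of 10 mGy corresponds to an SSDE near 12.5 mGy; smaller patients receive a proportionally larger factor.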
Marcuzzo, Stefania; Bonanno, Silvia; Padelli, Francesco; Moreno-Manzano, Victoria; García-Verdugo, José Manuel; Bernasconi, Pia; Mantegazza, Renato; Bruzzone, Maria Grazia; Zucca, Ileana
2016-01-01
Diffusion-weighted Magnetic Resonance Imaging (dMRI) has relevant applications in the microstructural characterization of the spinal cord, especially in neurodegenerative diseases. Animal models have a pivotal role in the study of such diseases; however, in vivo spinal dMRI of small animals entails additional challenges that require a systematic investigation of acquisition parameters. The purpose of this study is to compare three acquisition protocols and identify the scanning parameters allowing a robust estimation of the main diffusion quantities and a good sensitivity to neurodegeneration in the mouse spinal cord. For all the protocols, the signal-to-noise and contrast-to-noise ratios and the mean value and variability of Diffusion Tensor metrics were evaluated in healthy controls. For the estimation of fractional anisotropy, less variability was provided by protocols with more diffusion directions; for the estimation of mean, axial and radial diffusivity, by protocols with fewer diffusion directions and higher diffusion weighting. Intermediate features (12 directions, b = 1200 s/mm²) provided the overall minimum inter- and intra-subject variability in most cases. In order to test the diagnostic sensitivity of the protocols, 7 G93A-SOD1 mice (a model of amyotrophic lateral sclerosis) at 10 and 17 weeks of age were scanned and the derived diffusion parameters compared with those estimated in age-matched healthy animals. The protocols with an intermediate or high number of diffusion directions provided the best differentiation between the two groups at week 17, whereas only a few local significant differences were highlighted at week 10. According to our results, a dMRI protocol with an intermediate number of diffusion gradient directions and a relatively high diffusion weighting is optimal for spinal cord imaging. Further work is needed to confirm these results and for a finer tuning of acquisition parameters.
Nevertheless, our findings could be important for the optimization of acquisition protocols for preclinical and clinical dMRI studies on the spinal cord. PMID:27560686
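The Diffusion Tensor metrics compared across the protocols follow directly from the tensor's three eigenvalues; a minimal sketch using the standard definitions (with illustrative white-matter-like eigenvalues, not values from the study) is:

```python
import math

def dti_metrics(l1, l2, l3):
    """Standard Diffusion Tensor metrics from the three eigenvalues
    (sorted l1 >= l2 >= l3, units mm^2/s)."""
    md = (l1 + l2 + l3) / 3.0            # mean diffusivity
    ad = l1                              # axial diffusivity
    rd = (l2 + l3) / 2.0                 # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den)      # fractional anisotropy, 0..1
    return fa, md, ad, rd

# Eigenvalues typical of coherent white matter (illustrative values only)
fa, md, ad, rd = dti_metrics(1.7e-3, 0.3e-3, 0.3e-3)
print(round(fa, 2), round(md * 1e3, 2))
```

The variability the study reports is the spread of these quantities across voxels, sessions and animals, estimated per protocol.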
Analysis of translation using polysome profiling
Chassé, Héloïse; Boulben, Sandrine; Costache, Vlad; Cormier, Patrick
2017-01-01
During the past decade, there has been growing interest in the role of translational regulation of gene expression in many organisms. Polysome profiling has been developed to infer the translational status of a specific mRNA species or to analyze the translatome, i.e. the subset of mRNAs actively translated in a cell. Polysome profiling is especially suitable for emergent model organisms for which genomic data are limited. In this paper, we describe an optimized protocol for the purification of sea urchin polysomes and highlight the critical steps involved in polysome purification. We applied this protocol to obtain experimental results on translational regulation of mRNAs following fertilization. Our protocol should prove useful for integrating the study of the role of translational regulation in gene regulatory networks in any biological model. In addition, we demonstrate how to carry out high-throughput processing of polysome gradient fractions, for the simultaneous screening of multiple biological conditions and large-scale preparation of samples for next-generation sequencing. PMID:28180329
Physical layer simulation study for the coexistence of WLAN standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howlader, M. K.; Keiger, C.; Ewing, P. D.
This paper presents the results of a study on the performance of wireless local area network (WLAN) devices in the presence of interference from other wireless devices. To understand the coexistence of these wireless protocols, simplified physical-layer system models were developed for Bluetooth, Wireless Fidelity (WiFi), and Zigbee devices, all of which operate within the 2.4-GHz frequency band. The performances of these protocols were evaluated using Monte-Carlo simulations under various interference and channel conditions. The channel models considered were basic additive white Gaussian noise (AWGN), Rayleigh fading, and site-specific fading. The study also incorporated the basic modulation schemes, multiple-access techniques, and channel allocations of the three protocols. This research is helping the U.S. Nuclear Regulatory Commission (NRC) understand the coexistence issues associated with deploying wireless devices and could prove useful in the development of a technical basis for guidance to address safety-related issues with the implementation of wireless systems in nuclear facilities.
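A Monte-Carlo physical-layer evaluation of this kind can be illustrated with the simplest case, BPSK over the basic AWGN channel. The bit count and Eb/N0 below are arbitrary choices for the sketch, not parameters from the study:

```python
import random, math

def ber_bpsk_awgn(ebno_db, n_bits=200_000, seed=1):
    """Monte-Carlo bit-error-rate estimate for BPSK over an AWGN channel."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebno))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = 1.0 if bit else -1.0           # antipodal mapping
        rx = tx + rng.gauss(0.0, sigma)     # AWGN channel
        errors += (rx > 0) != bool(bit)     # hard-decision detector
    return errors / n_bits

# Theory predicts Q(sqrt(2*Eb/N0)), roughly 0.0125 at 4 dB
print(ber_bpsk_awgn(4.0))
```

Interference and fading studies extend the same loop: the received sample gains an interferer term and a fading coefficient, and the error count is averaged over realizations.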
Virtual memory support for distributed computing environments using a shared data object model
NASA Astrophysics Data System (ADS)
Huang, F.; Bacon, J.; Mapp, G.
1995-12-01
Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.
Development, implementation, and experimentation of parametric routing protocol for sensor networks
NASA Astrophysics Data System (ADS)
Nassr, Matthew S.; Jun, Jangeun; Eidenbenz, Stephan J.; Frigo, Janette R.; Hansson, Anders A.; Mielke, Angela M.; Smith, Mark C.
2006-09-01
The development of a scalable and reliable routing protocol for sensor networks is traced from its theoretical beginnings through positive simulation results to verification experiments in large and heavily loaded networks. Design decisions and explanations, as well as implementation hurdles, are presented to give a complete picture of protocol development. Additional software and hardware were required to accurately test the performance of our protocol in field experiments. In addition, the developed protocol was tested in TinyOS on Mica2 motes against well-established routing protocols frequently used in sensor networks. Our protocol outperforms both the standard (MINTRoute) and the trivial (Gossip) protocols in a variety of scenarios.
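The trivial Gossip baseline mentioned above forwards each packet with a fixed probability. A minimal simulation sketch (on a hypothetical 10-node line topology, not the experimental network) shows why it struggles in sparse networks:

```python
import random

def gossip_delivery(p_forward, neighbors, source, sink, trials=500, seed=7):
    """Fraction of trials in which probabilistic (Gossip) flooding
    delivers a packet from source to sink."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(trials):
        received = {source}
        frontier = [source]
        while frontier:
            nxt = []
            for node in frontier:
                for nb in neighbors[node]:
                    if nb not in received and rng.random() < p_forward:
                        received.add(nb)
                        nxt.append(nb)
            frontier = nxt
        delivered += sink in received
    return delivered / trials

# Hypothetical 10-node line topology: each node hears only its neighbours
line = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(gossip_delivery(0.9, line, source=0, sink=9))
```

On this topology a per-hop forwarding probability of 0.9 compounds over nine hops, so end-to-end delivery falls to roughly 0.9^9, about 0.39, which is the kind of weakness a parametric routing protocol is designed to avoid.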
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding the raw data generation and conversion steps from the user. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework, which allows developers to integrate additional third-party programs into MolIDE. MolIDE is available at http://dunbrack.fccc.edu/molide/molide.php (contact: rl_dunbrack@fccc.edu).
The Phyre2 web portal for protein modelling, prediction and analysis
Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE
2017-01-01
Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server, for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through the results by a simple interface at a level of detail they determine. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described for finding a protein structure in a genome, submitting a large number of sequences at once, and automatically running weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 min and 2 h after submission. PMID:25950237
Borge, Rafael; Santiago, Jose Luis; de la Paz, David; Martín, Fernando; Domingo, Jessica; Valdés, Cristina; Sánchez, Beatriz; Rivas, Esther; Rozas, Mª Teresa; Lázaro, Sonia; Pérez, Javier; Fernández, Álvaro
2018-05-05
Air pollution continues to be one of the main issues in urban areas. In addition to air quality plans and emission abatement policies, additional measures are needed for high-pollution episodes to avoid exceedances of hourly limit values under unfavourable meteorological conditions, such as Madrid's short-term NO2 action protocol. In December 2016 there was a strong atmospheric stability episode that resulted in generalized high NO2 levels, causing stage 3 of the NO2 protocol to be triggered for the first time in Madrid (29th December). In addition to other traffic-related measures, this involves access restrictions to the city centre (50% of private cars). We simulated the episode with and without measures under a multi-scale modelling approach. A 1 km² resolution modelling system based on WRF-SMOKE-CMAQ was applied to assess city-wide effects, while Star-CCM+ (a RANS CFD model) was used to investigate the effect at street level in a microscale domain in the city centre, focusing on Gran Vía Avenue. Changes in road traffic were simulated with the mesoscale VISUM model, incorporating real flux measurements during those days. The corresponding simulations suggest that the application of the protocol during this particular episode may have prevented concentrations from increasing by 24 μg·m⁻³ downtown (14% with respect to the hypothetical no-action scenario), although it may have caused NO2 to increase slightly in the city outskirts due to traffic redistribution. Speed limitation and parking restrictions alone (stages 1 and 2, respectively) have a very limited effect. The microscale simulation provides consistent results but shows important variability at street level, with reductions above 100 μg·m⁻³ in some spots inside Gran Vía. Although further research is needed, these results point out the need to implement short-term action plans and to apply a consistent multi-scale modelling assessment to optimize urban air quality abatement strategies.
Copyright © 2018 Elsevier B.V. All rights reserved.
Jeziorowska, Dorota; Fontaine, Vincent; Jouve, Charlène; Villard, Eric; Dussaud, Sébastien; Akbar, David; Letang, Valérie; Cervello, Pauline; Itier, Jean-Michiel; Pruniaux, Marie-Pierre; Hulot, Jean-Sébastien
2017-01-01
Human induced pluripotent stem cells (iPSCs) represent a powerful human model to study cardiac disease in vitro, notably channelopathies and sarcomeric cardiomyopathies. Different protocols for cardiac differentiation of iPSCs have been proposed, based either on embryoid body formation (3D) or, more recently, on monolayer culture (2D). We directly compared the characteristics of the derived cardiomyocytes (iPSC-CMs) on day 27 ± 2 of differentiation between the 3D protocol and 2D protocols using two different Wnt inhibitors: IWR1 (inhibitor of Wnt response) or IWP2 (inhibitor of Wnt production). We first found that the level of Troponin T (TNNT2) expression measured by FACS was significantly higher for both 2D protocols as compared to the 3D protocol. With all three methods, iPSC-CMs show sarcomeric structures. However, iPSC-CMs generated by the 2D protocols consistently displayed larger sarcomere lengths as compared to the 3D protocol. In addition, mRNA and protein analyses reveal higher cTNi to ssTNi ratios in the 2D protocol using IWP2 as compared to both other protocols, indicating higher sarcomeric maturation. Differentiation of cardiac myocytes with 2D monolayer-based protocols and the use of IWP2 allows the production of a higher yield of cardiac myocytes with more suitable characteristics for studying sarcomeric cardiomyopathies. PMID:28587156
Bharate, Sonali S; Vishwakarma, Ram A
2015-04-01
An early prediction of solubility in physiological media (PBS, SGF, and SIF) is useful to qualitatively predict the bioavailability and absorption of lead candidates. Despite the availability of multiple solubility estimation methods, none of the reported methods involves a simple, fixed protocol for diverse sets of compounds. Therefore, a simple, medium-throughput solubility estimation protocol is highly desirable during the lead optimization stage. The present work introduces a rapid method for assessing the thermodynamic equilibrium solubility of compounds in aqueous media using a 96-well microplate. The developed protocol is straightforward to set up and takes advantage of the sensitivity of UV spectroscopy. The compound, in stock solution in methanol, is introduced in microgram quantities into microplate wells, followed by drying at ambient temperature. Microplates are shaken upon addition of the test media, and the supernatant is analyzed by the UV method. A plot of absorbance versus concentration of a sample provides the saturation point, which is the thermodynamic equilibrium solubility of the sample. The established protocol was validated using a large panel of commercially available drugs and against the conventional miniaturized shake-flask method (r(2)>0.84). Additionally, statistically significant QSPR models were established using experimental solubility values of 52 compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
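Reading the saturation point off the absorbance-versus-concentration plot can be automated with a simple breakpoint test: absorbance tracks the initial linear (Beer-Lambert) region until the well saturates, after which it plateaus. The data, tolerance, and units below are illustrative, not from the study:

```python
def saturation_point(concs, absorbances, tol=0.05):
    """Return the first tested concentration whose absorbance falls
    below the initial linear trend, i.e. where the sample saturates."""
    # slope of the initial linear region, taken from the first two points
    slope = (absorbances[1] - absorbances[0]) / (concs[1] - concs[0])
    for c, a in zip(concs, absorbances):
        expected = slope * c
        if a < expected * (1.0 - tol):   # absorbance drops below the line
            return c
    return None  # no plateau within the tested range

# Illustrative data: linear up to 60 ug/mL, plateau afterwards
concs = [10, 20, 40, 60, 80, 100]
abss  = [0.10, 0.20, 0.40, 0.60, 0.62, 0.63]
print(saturation_point(concs, abss))   # first concentration past saturation
```

A practical implementation would fit the linear region by regression rather than from two points, but the breakpoint logic is the same.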
One Approach for Transitioning the iNET Standards into the IRIG 106 Telemetry Standards
2015-05-26
Figure 1 illustrates the Open Systems Interconnection (OSI) Model, the corresponding TCP/IP Model, and the major components of the TCP/IP Protocol Suite (including IPv4 and IPv6). Figure 2 represents the iNET-specific protocols layered onto the TCP/IP Model. [Figure 1: OSI and TCP/IP Models with the major components of the TCP/IP Protocol Suite.]
Secure quantum private information retrieval using phase-encoded queries
NASA Astrophysics Data System (ADS)
Olejnik, Lukasz
2011-08-01
We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
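The Deutsch-Jozsa oracle that inspired the protocol is easy to simulate classically for small registers. The sketch below only illustrates a phase-encoded query (the oracle maps each basis state x to (-1)^f(x) times itself); it is not an implementation of the PIR protocol:

```python
import math

def hadamard_all(state):
    """Apply a Hadamard gate to every qubit (fast Walsh-Hadamard transform)."""
    n = len(state)
    out = state[:]
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = out[j], out[j + h]
                out[j], out[j + h] = x + y, x - y
        h *= 2
    norm = math.sqrt(n)
    return [v / norm for v in out]

def deutsch_jozsa(f, n_qubits):
    """One phase-encoded oracle call decides constant vs. balanced f."""
    dim = 2 ** n_qubits
    state = [0.0] * dim
    state[0] = 1.0
    state = hadamard_all(state)                              # uniform superposition
    state = [(-1) ** f(x) * a for x, a in enumerate(state)]  # phase oracle query
    state = hadamard_all(state)
    # amplitude of |0...0> is +/-1 for constant f, 0 for balanced f
    return "constant" if abs(state[0]) > 0.99 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))      # constant
print(deutsch_jozsa(lambda x: x & 1, 3))  # balanced (parity of the low bit)
```

The phase-encoded query is the key ingredient: the database answer is carried in phases of a superposition rather than in a directly readable register, which is what frustrates deterministic reconstruction of the query contents.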
Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.
Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A
2018-04-01
A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model is comprised of a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only the studies using dim light (<15 lux) during alertness measurements and controlling for sleep and entrainment before the start of the protocol are selected for modeling. This is done to avoid the direct alerting effects of light and effects of prior sleep debt and circadian misalignment on the data. The results show that linear combination of circadian and homeostatic drives is sufficient to predict dynamics of a variety of sleepiness and performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having stronger homeostatic influence compared with the objective measures of performance. 
Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.
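The central modeling claim, that a linear combination of circadian and homeostatic drives suffices, can be illustrated with a classical two-process-style sketch. The functional forms, time constants, and weights below are illustrative stand-ins, not the calibrated parameters of the arousal-dynamics model:

```python
import math

# Illustrative two-process sketch: homeostatic pressure H rises with time
# awake; circadian drive C is sinusoidal. All constants are hypothetical.
TAU_RISE = 18.0             # h, build-up time constant of H
C_AMP, C_PHASE = 1.0, 16.0  # amplitude and acrophase (clock hour) of C

def homeostatic(hours_awake):
    return 1.0 - math.exp(-hours_awake / TAU_RISE)

def circadian(clock_h):
    return C_AMP * math.cos(2 * math.pi * (clock_h - C_PHASE) / 24.0)

def predicted_sleepiness(hours_awake, clock_h, w_h=2.0, w_c=1.0):
    """Linear combination of the two drives; higher = sleepier.
    The weights differ per measure (e.g. KSS weighs H more heavily)."""
    return w_h * homeostatic(hours_awake) - w_c * circadian(clock_h)

# A day of sustained wakefulness starting at 07:00, sampled every 6 h
values = [predicted_sleepiness(t, (7 + t) % 24) for t in range(0, 25, 6)]
print([round(v, 2) for v in values])
```

Even this toy version reproduces the qualitative CR shape: an afternoon dip when the circadian drive opposes the still-low homeostatic pressure, then a steep rise once both drives align late in the wake episode.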
Stine, Kimo C.; Wahl, Elizabeth C.; Liu, Lichu; Skinner, Robert A.; VanderSchilden, Jaclyn; Bunn, Robert C.; Montgomery, Corey O.; Aronson, James; Becton, David L.; Nicholas, Richard W.; Swearingen, Christopher J.; Suva, Larry J.; Lumpkin, Charles K.
2017-01-01
The majority of Osteosarcoma (OS) patients are treated with a combination of chemotherapy, resection, and limb salvage protocols. These protocols include distraction osteogenesis (DO), which is characterized by direct new bone formation. Cisplatin (CDP) is extensively used for OS chemotherapy and recent studies, using a mouse DO model, have demonstrated that CDP has profound negative effects on bone repair. Recent oncological therapeutic strategies are based on the use of standard cytotoxic drugs plus an assortment of biologic agents. Here we demonstrate that the previously reported CDP-associated inhibition of bone repair can be modulated by the administration of a small molecule p53 inducer (nutlin-3). The effects of nutlin-3 on CDP osteotoxicity were studied using both pre- and post-operative treatment models. In both cases the addition of nutlin-3, bracketing CDP exposure, demonstrated robust and significant bone sparing activity (p < 0.01–0.001). In addition the combination of nutlin-3 and CDP induced equivalent OS tumor killing in a xenograft model. Collectively, these results demonstrate that the induction of p53 peri-operatively protects bone healing from the toxic effects of CDP, while maintaining OS toxicity. PMID:26867804
Yigzaw, Kassaye Yitbarek; Michalas, Antonis; Bellika, Johan Gustav
2017-01-03
Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N - 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. The proposed deduplication protocol is efficient and scalable for practical uses while protecting the privacy of patients and data custodians.
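A standard building block for deterministic record linkage without exchanging raw identifiers is a keyed hash of each record's linkage key. The sketch below is a deliberate simplification (a single key shared by all custodians lets any key holder mount a dictionary attack, a weakness the actual multi-party protocol is designed to resist), and the record format is hypothetical:

```python
import hashlib
import hmac

def pseudonymize(records, key):
    """Keyed hash (HMAC-SHA256) of each record's linkage key, so sites
    can compare pseudonyms without revealing the identifiers themselves."""
    return {hmac.new(key, rec.encode(), hashlib.sha256).hexdigest(): rec
            for rec in records}

def cross_site_duplicates(site_a, site_b, key):
    """Records present at both sites, found by intersecting pseudonyms."""
    a, b = pseudonymize(site_a, key), pseudonymize(site_b, key)
    return sorted(a[h] for h in a.keys() & b.keys())

shared_key = b"agreed-upon-secret"   # distributed out of band (hypothetical)
lab1 = ["1985-03-02|F|E.coli", "1990-07-11|M|S.aureus"]
lab2 = ["1990-07-11|M|S.aureus", "2001-01-23|F|K.pneumoniae"]
print(cross_site_duplicates(lab1, lab2, shared_key))
```

Because each site contributes only fixed-length pseudonyms, the comparison work grows linearly with the number of records and custodians, matching the scaling the paper reports.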
Modelling optimal location for pre-hospital helicopter emergency medical services.
Schuurman, Nadine; Bell, Nathaniel J; L'Heureux, Randy; Hameed, Syed M
2009-05-09
Increasing the range and scope of early activation/auto launch helicopter emergency medical services (HEMS) may alleviate unnecessary injury mortality that disproportionately affects rural populations. To date, attempts to develop a quantitative framework for the optimal location of HEMS facilities have been absent. Our analysis used five years of critical care data from tertiary health care facilities, spatial data on origin of transport and accurate road travel time catchments for tertiary centres. A location optimization model was developed to identify where the expansion of HEMS would cover the greatest population among those currently underserved. The protocol was developed using geographic information systems (GIS) to measure populations, distances and accessibility to services. Our model determined Royal Inland Hospital (RIH) was the optimal site for an expanded HEMS - based on denominator population, distance to services and historical usage patterns. GIS based protocols for location of emergency medical resources can provide supportive evidence for allocation decisions - especially when resources are limited. In this study, we were able to demonstrate conclusively that a logical choice exists for location of additional HEMS. This protocol could be extended to location analysis for other emergency and health services.
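The location-optimization step can be illustrated with the textbook greedy heuristic for the maximal covering location problem. The candidate sites, catchments, and populations below are hypothetical (loosely echoing the RIH result), not the study's data:

```python
def greedy_site_selection(candidate_sites, demand, k):
    """Greedy heuristic for the maximal covering location problem:
    pick k sites covering the largest underserved population.
    candidate_sites maps site -> set of covered communities;
    demand maps community -> population."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidate_sites,
                   key=lambda s: sum(demand[c]
                                     for c in candidate_sites[s] - covered))
        chosen.append(best)
        covered |= candidate_sites[best]
    return chosen, sum(demand[c] for c in covered)

# Hypothetical catchments derived from road travel-time analysis
sites = {"RIH":   {"kamloops", "merritt", "salmon_arm"},
         "siteB": {"merritt"},
         "siteC": {"salmon_arm", "revelstoke"}}
pop = {"kamloops": 90000, "merritt": 7000,
       "salmon_arm": 17000, "revelstoke": 8000}
print(greedy_site_selection(sites, pop, k=1))
```

In a GIS workflow, the catchment sets would come from travel-time polygons and the demand weights from census data plus historical transport origins, but the selection logic is the same.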
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinshaw, J. Louis, E-mail: jhinshaw@uwhealth.or; Littrup, Peter J.; Durick, Nathan
2010-12-15
The purpose of this study was to compare a double freeze-thaw protocol to a triple freeze-thaw protocol for pulmonary cryoablation utilizing an in vivo porcine lung model. A total of 18 cryoablations were performed in normal porcine lung utilizing percutaneous technique, with 9 each in a double- (10-5-10) and triple-freeze (3-3-7-7-5) protocol. Serial noncontrast CT images were obtained during the ablation. CT imaging findings and pathology were reviewed. No imaging changes were identified during the initial freeze cycle with either protocol. However, during the first thaw cycle, a region of ground glass opacity developed around the probe with both protocols. Because the initial freeze was shorter with the triple freeze-thaw protocol, the imaging findings were apparent sooner with this protocol (6 vs. 13 min). Also, despite a shorter total freeze time (15 vs. 20 min), the ablation zone identified with the triple freeze-thaw protocol was not significantly different from the double freeze-thaw protocol (mean diameter: 1.67 ± 0.41 cm vs. 1.66 ± 0.21 cm, P = 0.77; area: 2.1 ± 0.48 cm² vs. 1.99 ± 0.62 cm², P = 0.7; and circularity: 0.95 ± 0.04 vs. 0.96 ± 0.03, P = 0.62, respectively). This study suggests that there may be several advantages of a triple freeze-thaw protocol for pulmonary cryoablation, including earlier identification of the imaging findings associated with the ablation, the promise of a shorter procedure time or larger zones of ablation, and, theoretically, more effective cytotoxicity related to the additional freeze-thaw cycle.
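The circularity figures reported for the ablation zones are consistent with the common shape metric 4πA/P², which equals 1.0 for a perfect circle and falls below 1.0 for irregular shapes; the study does not state its exact formula, so this definition is an assumption:

```python
import math

def circularity(area_cm2, perimeter_cm):
    """Shape circularity 4*pi*A/P^2: 1.0 for a perfect circle, <1 otherwise.

    One common definition; assumed here for illustration, since the study
    does not specify its formula.
    """
    return 4 * math.pi * area_cm2 / perimeter_cm ** 2

# A 1.66 cm diameter circle: area = pi*r^2, perimeter = pi*d
d = 1.66
print(round(circularity(math.pi * (d / 2) ** 2, math.pi * d), 2))  # 1.0
```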
Simulator of Space Communication Networks
NASA Technical Reports Server (NTRS)
Clare, Loren; Jennings, Esther; Gao, Jay; Segui, John; Kwong, Winston
2005-01-01
Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) is a suite of software tools that simulates the behavior of communication networks to be used in space exploration and predicts the performance of established and emerging space communication protocols and services. MACHETE consists of four general software systems: (1) a system for kinematic modeling of planetary and spacecraft motions; (2) a system for characterizing the engineering impact on the bandwidth and reliability of deep-space and in-situ communication links; (3) a system for generating traffic loads and modeling protocol behaviors and state machines; and (4) a user-interface system for visualizing performance metrics. The kinematic-modeling system makes it possible to characterize space-link connectivity effects, including occultations and signal losses arising from dynamic slant-range changes and antenna radiation patterns. The link-engineering system also accounts for antenna radiation patterns and other phenomena, including modulations, data rates, coding, noise, and multipath fading. The protocol system utilizes information from the kinematic-modeling and link-engineering systems to simulate operational scenarios of space missions and evaluate overall network performance. In addition, a Communications Effect Server (CES) interface for MACHETE has been developed to facilitate hybrid simulation of space communication networks with actual flight/ground software and hardware embedded in the overall system.
NASA Technical Reports Server (NTRS)
Abercromby, Andrew F. J.; Gernhardt, Michael L.; Conkin, Johnny
2013-01-01
1. A TBDM DCS probability model based on an existing biophysical model of inert gas bubble growth provides significant prediction and goodness-of-fit with 84 cases of DCS in 668 human altitude exposures.
2. Model predictions suggest that 15-minute O2 prebreathe protocols used in conjunction with suit ports and an 8.2 psi, 34% O2, 66% N2 atmosphere may enable rapid EVA capability for future exploration missions with the risk of DCS = 12%.
   - EVA could begin immediately at 6.0 psi, with crewmembers decreasing suit pressure to 4.3 psi after completing the 15-minute in-suit prebreathe.
3. Model predictions suggest that intermittent recompression (IR) during exploration EVA may reduce decompression stress by 1.8% to 2.3% for 6 hours of total EVA time. Savings in gas consumables and crew time may be accumulated by abbreviating the EVA suit N2 purge to 2 minutes (20% N2) compared with 8 minutes (5% N2), at the expense of an increase in estimated decompression risk of up to 2.4% for an 8-hour EVA.
   - Increased DCS risk could be offset by IR or by spending additional time at 6 psi at the beginning of the EVA.
   - Savings of 0.48 lb of gas and 6 minutes per person per EVA correspond to more than 31 hours of crew time and 1800 lb of gas and tankage under the Constellation lunar architecture.
6. Further research is needed to characterize and optimize breathing mixtures and intermittent recompression across the range of environments and operational conditions in which astronauts will live and work during future exploration missions.
7. Development of exploration prebreathe protocols will begin with definition of acceptable risk, followed by development of protocols based on models such as ours, and, ultimately, validation of protocols through ground trials before operational implementation.
A framework for solving ill-structured community problems
NASA Astrophysics Data System (ADS)
Keller, William Cotesworth
A multifaceted protocol for solving ill-structured community problems has been developed. It embodies lessons learned from the past by refining and extending features of previous models from systems thinking and the fields of behavioral decision making and creative problem solving. The protocol also embraces additional features needed to address the unique aspects of community decision situations. The essential elements of the protocol are participants from the community, a problem-solving process, a systems picture, a facilitator, a modified Delphi method of communication, and technical expertise. This interdisciplinary framework has been tested in a quasi-experiment with a real-world community problem (the high cost of electrical power on Long Island, NY). Results indicate the protocol can enable members of the community to understand a complicated, ill-structured problem and guide them to action to solve the issue. However, the framework takes time (over one year in the test case) and will be inappropriate for crises where quick action is needed.
NASA Astrophysics Data System (ADS)
Li, Yongfu; Li, Kezhi; Zheng, Taixiong; Hu, Xiangdong; Feng, Huizong; Li, Yinguo
2016-05-01
This study proposes a feedback-based platoon control protocol for connected autonomous vehicles (CAVs) under different network topologies of initial states. In particular, algebraic graph theory is used to describe the network topology, and the leader-follower approach is used to model the interactions between CAVs. In addition, a feedback-based protocol is designed to control the platoon, considering the longitudinal and lateral gaps simultaneously as well as different network topologies. The stability and consensus of the vehicular platoon are analyzed using the Lyapunov technique. Effects of different network topologies of initial states on the convergence time and robustness of platoon control are investigated. Results from numerical experiments demonstrate the effectiveness of the proposed protocol with respect to position and velocity consensus in terms of convergence time and robustness. The findings also show that the convergence time of the control protocol depends on the initial states, whereas its robustness is not significantly affected by them.
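As an illustration of the feedback idea (longitudinal gap only, with an assumed predecessor-follower topology; the paper's protocol additionally handles lateral gaps and general topologies via algebraic graph theory), one discrete-time step might look like:

```python
def platoon_step(positions, velocities, leader_idx, gaps, k_p, k_v, dt):
    """One explicit-Euler step of a leader-follower feedback update.

    Each follower accelerates toward its predecessor's position minus a
    desired gap and matches the predecessor's velocity. A simplified
    longitudinal-only sketch of the feedback principle, not the paper's
    exact control law.
    """
    new_pos, new_vel = list(positions), list(velocities)
    for i in range(len(positions)):
        new_pos[i] = positions[i] + velocities[i] * dt
        if i == leader_idx:
            continue  # leader drives at constant speed here
        pred = i - 1  # predecessor-follower topology assumed
        spacing_error = (positions[pred] - positions[i]) - gaps[i]
        speed_error = velocities[pred] - velocities[i]
        new_vel[i] = velocities[i] + (k_p * spacing_error + k_v * speed_error) * dt

    return new_pos, new_vel

# Leader at index 0; follower starts 20 m behind, desired gap 10 m
pos, vel = [0.0, -20.0], [10.0, 10.0]
for _ in range(200):
    pos, vel = platoon_step(pos, vel, 0, {1: 10.0}, k_p=1.0, k_v=2.0, dt=0.05)
print((pos[0] - pos[1]) - 10.0)  # spacing error after 10 s of simulated time
```

With these (hypothetical) gains the error dynamics are critically damped, so the spacing error decays toward zero, which is the consensus behavior the paper analyzes via Lyapunov arguments.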
Brandt, Claudia; Glien, Maike; Gastens, Alexandra M; Fedrowitz, Maren; Bethmann, Kerstin; Volk, Holger A; Potschka, Heidrun; Löscher, Wolfgang
2007-08-01
Levetiracetam (LEV) is a structurally novel antiepileptic drug (AED) which has demonstrated a broad spectrum of anticonvulsant activities in both experimental and clinical studies. Previous experiments in the kindling model suggested that LEV, in addition to its seizure-suppressing activity, may possess antiepileptogenic or disease-modifying activity. In the present study, we evaluated this possibility by using a rat model in which epilepsy with spontaneous recurrent seizures (SRS), behavioral alterations, and hippocampal damage develop after a status epilepticus (SE) induced by sustained electrical stimulation of the basal amygdala. Two experimental protocols were used. In the first protocol, LEV treatment was started 24 h after onset of electrical amygdala stimulation without prior termination of the SE. In the second protocol, the SE was interrupted after 4 h by diazepam, immediately followed by onset of treatment with LEV. Treatment with LEV was continued for 8 weeks (experiment #1) or 5 weeks (experiment #2) after SE, using continuous drug administration via osmotic minipumps. The occurrence of SRS was recorded during and after treatment. In addition, the rats were tested in a battery of behavioral tests, including the elevated-plus maze and the Morris water maze. Finally, the brains of the animals were analyzed for histological lesions in the hippocampal formation. With the experimental protocols chosen for these experiments, LEV did not exert antiepileptogenic or neuroprotective activity. Furthermore, the behavioral alterations, e.g., behavioral hyperexcitability and learning deficits, in epileptic rats were not affected by treatment with LEV after SE. These data do not support the idea that administration of LEV after SE prevents or reduces the long-term alterations developing after such a brain insult in rats.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
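The model-guided step can be pictured as enumerating message sequences through the abstracted FSM and replaying them to drive the implementation into deep protocol states before symbolic exploration begins. A toy sketch (the FSM, states, and message names are all hypothetical):

```python
# Hypothetical FSM for a toy stateful protocol, as (state, message) -> next state.
FSM = {
    ("INIT", "HELLO"): "NEGOTIATING",
    ("NEGOTIATING", "AUTH"): "AUTHENTICATED",
    ("AUTHENTICATED", "DATA"): "TRANSFER",
    ("TRANSFER", "CLOSE"): "CLOSED",
}

def sequences_to(target, start="INIT", prefix=()):
    """Enumerate message sequences that reach `target` in the FSM.

    A model-guided fuzzer or symbolic executor replays such a prefix
    concretely, then explores the handler for the final state symbolically.
    """
    if start == target:
        yield list(prefix)
    for (state, msg), nxt in FSM.items():
        if state == start and msg not in prefix:  # avoid trivial loops
            yield from sequences_to(target, nxt, prefix + (msg,))

print(next(sequences_to("TRANSFER")))  # ['HELLO', 'AUTH', 'DATA']
```

This is what lets the approach reach deep states that purely random fuzzers such as SPIKE rarely exercise: the state machine supplies valid prefixes, and symbolic execution explores the paths beyond them.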
Protocols — EDRN Public Portal
EDRN investigators' protocols. The following is a list of the EDRN protocols that have been captured and curated. Additional information will be added as it becomes available. Contact information is provided as part of the detail for each protocol.
ERIC Educational Resources Information Center
Valenzuela, Vanessa V.; Gutierrez, Gabriel; Lambros, Katina M.
2014-01-01
An A-B single-case design assessed at-risk students' responsiveness to mathematics interventions. Four culturally and linguistically diverse second-grade students were given a Tier 2 standard protocol mathematics intervention that included number sense instruction, modeling procedures, guided math drill and practice of addition and subtraction…
Quantum State Transfer via Noisy Photonic and Phononic Waveguides
NASA Astrophysics Data System (ADS)
Vermersch, B.; Guimond, P.-O.; Pichler, H.; Zoller, P.
2017-03-01
We describe a quantum state transfer protocol, where a quantum state of photons stored in a first cavity can be faithfully transferred to a second distant cavity via an infinite 1D waveguide, while being immune to arbitrary noise (e.g., thermal noise) injected into the waveguide. We extend the model and protocol to a cavity QED setup, where atomic ensembles, or single atoms representing quantum memory, are coupled to a cavity mode. We present a detailed study of sensitivity to imperfections, and apply a quantum error correction protocol to account for random losses (or additions) of photons in the waveguide. Our numerical analysis is enabled by matrix product state techniques to simulate the complete quantum circuit, which we generalize to include thermal input fields. Our discussion applies both to photonic and phononic quantum networks.
Government Open Systems Interconnection Profile (GOSIP) transition strategy
NASA Astrophysics Data System (ADS)
Laxen, Mark R.
1993-09-01
This thesis analyzes the Government Open Systems Interconnection Profile (GOSIP) and the requirements of Federal Information Processing Standard (FIPS) Publication 146-1. It begins by examining the International Organization for Standardization (ISO) Open Systems Interconnection (OSI) architecture and protocol suites and the distinctions between GOSIP versions one and two. Additionally, it explores some of the GOSIP protocol details and discusses the process by which standards organizations have developed their recommendations. Implementation considerations from both government and vendor perspectives illustrate the barriers and requirements faced by information systems managers, as well as basic transition strategies. The thesis concludes that the transition must proceed through an extended and coordinated period of coexistence, owing to extensive legacy systems and the unavailability of GOSIP products. Recommendations for GOSIP protocol standards to include capabilities outside the OSI model are also presented.
Renan-Ordine, Rômulo; Alburquerque-Sendín, Francisco; de Souza, Daiana Priscila Rodrigues; Cleland, Joshua A; Fernández-de-Las-Peñas, César
2011-02-01
A randomized controlled clinical trial. To investigate the effects of trigger point (TrP) manual therapy combined with a self-stretching program for the management of patients with plantar heel pain. Previous studies have reported that stretching of the calf musculature and the plantar fascia are effective management strategies for plantar heel pain. However, it is not known if the inclusion of soft tissue therapy can further improve the outcomes in this population. Sixty patients, 15 men and 45 women (mean ± SD age, 44 ± 10 years) with a clinical diagnosis of plantar heel pain were randomly divided into 2 groups: a self-stretching (Str) group who received a stretching protocol, and a self-stretching and soft tissue TrP manual therapy (Str-ST) group who received TrP manual interventions (TrP pressure release and neuromuscular approach) in addition to the same self-stretching protocol. The primary outcomes were physical function and bodily pain domains of the quality of life SF-36 questionnaire. Additionally, pressure pain thresholds (PPT) were assessed over the affected gastrocnemii and soleus muscles, and over the calcaneus, by an assessor blinded to the treatment allocation. Outcomes of interest were captured at baseline and at a 1-month follow-up (end of treatment period). Mixed-model ANOVAs were used to examine the effects of the interventions on each outcome, with group as the between-subjects variable and time as the within-subjects variable. The primary analysis was the group-by-time interaction. The 2 × 2 mixed-model analysis of variance (ANOVA) revealed a significant group-by-time interaction for the main outcomes of the study: physical function (P = .001) and bodily pain (P = .005); patients receiving a combination of self-stretching and TrP tissue intervention experienced a greater improvement in physical function and a greater reduction in pain, as compared to those receiving the self-stretching protocol. 
The mixed ANOVA also revealed significant group-by-time interactions for changes in PPT over the gastrocnemii and soleus muscles, and the calcaneus (all P<.001). Patients receiving a combination of self-stretching and TrP tissue intervention showed a greater improvement in PPT, as compared to those who received only the self-stretching protocol. This study provides evidence that the addition of TrP manual therapies to a self-stretching protocol resulted in superior short-term outcomes as compared to a self-stretching program alone in the treatment of patients with plantar heel pain. Therapy, level 1b.
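The group-by-time interaction these ANOVAs test is, at the level of group means, a difference-in-differences: did the Str-ST group change more over the treatment period than the Str group? A simplified sketch on hypothetical scores (the study's mixed models additionally account for within-subject correlation):

```python
def interaction_effect(str_pre, str_post, strst_pre, strst_post):
    """Group-by-time interaction as a difference-in-differences of group means.

    A positive value means the Str-ST group improved more than the Str group.
    Illustrative means only; not the full mixed-model analysis.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(strst_post) - mean(strst_pre)) - (mean(str_post) - mean(str_pre))

# Hypothetical SF-36 physical-function scores (higher = better)
print(interaction_effect([50, 55], [60, 65], [50, 55], [75, 80]))  # 15.0
```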
Kubas, Adam; Noak, Johannes
2017-01-01
Absorption and multiwavelength resonance Raman spectroscopy are widely used to investigate the electronic structure of transition metal centers in coordination compounds and extended solid systems. In combination with computational methodologies that have predictive accuracy, they define powerful protocols to study the spectroscopic response of catalytic materials. In this work, we study the absorption and resonance Raman spectra of the M1 MoVOx catalyst. The spectra were calculated by time-dependent density functional theory (TD-DFT) in conjunction with the independent mode displaced harmonic oscillator model (IMDHO), which allows for detailed bandshape predictions. For this purpose, cluster models with up to 9 Mo and V metallic centers are considered to represent the bulk structure of MoVOx. Capping hydrogens were used to achieve valence saturation at the edges of the cluster models. The construction of model structures was based on a thorough bonding analysis which involved conventional DFT and local coupled cluster (DLPNO-CCSD(T)) methods. Furthermore, the relationship of cluster topology to the computed spectral features is discussed in detail. It is shown that due to the local nature of the involved electronic transitions, band assignment protocols developed for molecular systems can be applied to describe the calculated spectral features of the cluster models as well. The present study serves as a reference for future applications of combined experimental and computational protocols in the field of solid-state heterogeneous catalysis.
Fedy, B.C.; Aldridge, Cameron L.
2011-01-01
Long-term population monitoring is the cornerstone of animal conservation and management. The accuracy and precision of models developed using monitoring data can be influenced by the protocols guiding data collection. The greater sage-grouse (Centrocercus urophasianus) is a species of concern that has been monitored over decades, primarily by counting the number of males that attend lek (breeding) sites. These lek count data have been used to assess long-term population trends and for multiple mechanistic studies. However, some studies have questioned the efficacy of lek counts to accurately identify population trends. In response, monitoring protocols were changed with the goal of counting lek sites multiple times within a season. We assessed the influence of this change in monitoring protocols on model accuracy and precision, applying generalized additive models to describe trends over time. We found that at large spatial scales including >50 leks, the absence of repeated counts within a year did not significantly alter population trend estimates or interpretation. Increasing sample size decreased the model confidence intervals. We developed a population trend model for Wyoming greater sage-grouse from 1965 to 2008, identifying significant changes in the population indices and capturing the cyclic nature of this species. Most sage-grouse declines in Wyoming occurred between 1965 and the 1990s, and lek count numbers generally increased from the mid-1990s to 2008. Our results validate the combination of monitoring data collected under different protocols in past and future studies, provided those studies address large-scale questions. We suggest that a larger sample of individual leks is preferable to multiple counts of a smaller sample of leks. © 2011 The Wildlife Society.
Real-time Measurement of Epithelial Barrier Permeability in Human Intestinal Organoids.
Hill, David R; Huang, Sha; Tsai, Yu-Hwai; Spence, Jason R; Young, Vincent B
2017-12-18
Advances in the 3D culture of intestinal tissues, obtained through biopsy or generated from pluripotent stem cells via directed differentiation, have resulted in sophisticated in vitro models of the intestinal mucosa. Leveraging these emerging model systems will require adaptation of tools and techniques developed for 2D culture systems and animal models. Here, we describe a technique for measuring epithelial barrier permeability in human intestinal organoids in real time. This is accomplished by microinjection of fluorescently labeled dextran and imaging on an inverted microscope fitted with epifluorescent filters. Real-time measurement of barrier permeability in intestinal organoids facilitates the generation of high-resolution temporal data in human intestinal epithelial tissue, although this technique can also be applied to fixed-timepoint imaging approaches. This protocol is readily adaptable for the measurement of epithelial barrier permeability following exposure to pharmacologic agents, bacterial products or toxins, or live microorganisms. With minor modifications, this protocol can also serve as a general primer on microinjection of intestinal organoids, and users may choose to supplement it with additional or alternative downstream applications following microinjection.
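Downstream of the imaging, luminal fluorescence decay is commonly summarized as an apparent leak rate by fitting a log-linear decay to the intensity time series. The published protocol prescribes the microinjection and imaging rather than this exact analysis, so the following is an illustrative sketch:

```python
import math

def leak_rate(times_min, intensities):
    """Apparent barrier leak rate from luminal fluorescence decay.

    Fits ln(I) = ln(I0) - k*t by ordinary least squares and returns k
    (per minute). One common way to summarize dextran efflux; an
    assumption here, not part of the published protocol itself.
    """
    logs = [math.log(i) for i in intensities]
    n = len(times_min)
    t_bar = sum(times_min) / n
    y_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_min, logs))
             / sum((t - t_bar) ** 2 for t in times_min))
    return -slope

# Synthetic luminal intensities decaying with k = 0.05 per minute
times = [0, 10, 20, 30]
vals = [1000 * math.exp(-0.05 * t) for t in times]
print(round(leak_rate(times, vals), 3))  # 0.05
```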
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of the estimators were compared. The results obtained from the mixed PR showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
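The core of a zero-inflated Poisson model is a two-part mixture density: structural zeros occur with probability π, and the remaining observations follow a Poisson distribution. A minimal sketch of its log-likelihood, without the random effects and covariates used in the study:

```python
import math

def zip_loglik(counts, lam, pi):
    """Log-likelihood of a zero-inflated Poisson (ZIP) model.

    P(0) = pi + (1 - pi) * exp(-lam)
    P(k) = (1 - pi) * lam**k * exp(-lam) / k!   for k >= 1
    The study's model adds random effects and covariates; this is the
    core mixture density only.
    """
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += (math.log(1 - pi) + k * math.log(lam) - lam
                   - math.lgamma(k + 1))
    return ll

data = [0, 0, 0, 0, 1, 2, 0, 3, 0, 1]  # toy data with excess zeros
# Crude grid search: on zero-heavy data the best fit has pi > 0,
# i.e. ZIP beats a plain Poisson (the pi = 0 row of the grid).
best = max((zip_loglik(data, l / 10, p / 10), l / 10, p / 10)
           for l in range(1, 50) for p in range(0, 10))
print(best[1:])  # (lam, pi) maximizing the likelihood
```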
Vitrification and xenografting of human ovarian tissue.
Amorim, Christiani Andrade; Dolmans, Marie-Madeleine; David, Anu; Jaeger, Jonathan; Vanacker, Julie; Camboni, Alessandra; Donnez, Jacques; Van Langendonckt, Anne
2012-11-01
To assess the efficiency of two vitrification protocols to cryopreserve human preantral follicles with the use of a xenografting model. Pilot study. Gynecology research unit in a university hospital. Ovarian biopsies were obtained from seven women aged 30-41 years. Ovarian tissue fragments were subjected to one of three cryopreservation protocols (slow freezing, vitrification protocol 1, and vitrification protocol 2) and xenografted for 1 week to nude mice. The number of morphologically normal follicles after cryopreservation and grafting and fibrotic surface area were determined by histologic analysis. Apoptosis was assessed by the TUNEL method. Morphometric analysis of TUNEL-positive surface area also was performed. Follicle proliferation was evaluated by immunohistochemistry. After xenografting, a difference was observed between the cryopreservation procedures applied. According to TUNEL analysis, both vitrification protocols showed better preservation of preantral follicles than the conventional freezing method. Moreover, histologic evaluation showed a significantly higher proportion of primordial follicles in vitrified (protocol 2)-warmed ovarian tissue than in frozen-thawed tissue. The proportion of growing follicles and fibrotic surface area was similar in all groups. Vitrification procedures appeared to preserve not only the morphology and survival of preantral follicles after 1 week of xenografting, but also their ability to resume folliculogenesis. In addition, vitrification protocol 2 had a positive impact on the quiescent state of primordial follicles after xenografting. Copyright © 2012 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2008-01-01
This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.
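The verified protocol is considerably richer than can be shown here, but the reason 4 nodes can tolerate 1 Byzantine node can be illustrated with the classic fault-tolerant midpoint used in approximate agreement: discard the f lowest and f highest received values and average the extremes of what remains, which works for n >= 3f + 1. This sketch also assumes, unrealistically, that the faulty node sends every receiver the same value (a real Byzantine node need not, which is part of what the verified protocol must handle):

```python
def resync_round(clocks, f=1):
    """One synchronization round using the fault-tolerant midpoint.

    Each node discards the f lowest and f highest received clock values
    and averages the extremes of the remainder. An illustration of the
    approximate-agreement idea, not the verified protocol itself.
    """
    new = []
    for _ in clocks:
        trimmed = sorted(clocks)[f:len(clocks) - f]
        new.append((trimmed[0] + trimmed[-1]) / 2)
    return new

# 4 nodes, node 3 Byzantine (reports an absurd clock value)
clocks = [10.0, 12.0, 11.0, 999.0]
good = resync_round(clocks)[:3]
print(max(good) - min(good))  # spread among the good nodes after one round
```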
Khairnar, Amit; Latta, Peter; Drazanova, Eva; Ruda-Kucerova, Jana; Szabó, Nikoletta; Arab, Anas; Hutter-Paier, Birgit; Havas, Daniel; Windisch, Manfred; Sulcova, Alexandra; Starcuk, Zenon; Rektorova, Irena
2015-11-01
Evidence suggests that accumulation and aggregation of α-synuclein contribute to the pathogenesis of Parkinson's disease (PD). The aim of this study was to evaluate whether diffusion kurtosis imaging (DKI) will provide a sensitive tool for differentiating between an α-synuclein-overexpressing transgenic mouse model of PD (TNWT-61) and wild-type (WT) littermates. This experiment was designed as a proof-of-concept study and forms part of a complex protocol and ongoing translational research. Nine-month-old TNWT-61 mice and age-matched WT littermates underwent behavioral tests to monitor motor impairment and MRI scanning using a 9.4 Tesla system in vivo. Tract-based spatial statistics (TBSS) and the DKI protocol were used to compare the whole-brain white matter of TNWT-61 and WT mice. In addition, region of interest (ROI) analysis was performed in gray matter regions such as the substantia nigra, striatum, hippocampus, sensorimotor cortex, and thalamus, known to show higher accumulation of α-synuclein. For the ROI analysis, both the DKI (6 b-values) protocol and a conventional (2 b-values) diffusion tensor imaging (cDTI) protocol were used. TNWT-61 mice showed significant impairment of motor coordination. With the DKI protocol, mean, axial, and radial kurtosis were found to be significantly elevated, whereas mean and radial diffusivity were decreased in the TNWT-61 group compared to the WT controls with both TBSS and ROI analysis. With the cDTI protocol, the ROI analysis showed a decrease in all diffusivity parameters in TNWT-61 mice. The current study provides evidence that DKI, by providing both kurtosis and diffusivity parameters, gives unique information that is complementary to cDTI for in vivo detection of the pathological changes that underlie PD-like symptomatology in the TNWT-61 mouse model of PD. This result is a crucial step in the search for a candidate diagnostic biomarker with translational potential and relevance for human studies.
An Abbreviated Protocol for High-Risk Screening Breast MRI Saves Time and Resources.
Harvey, Susan C; Di Carlo, Phillip A; Lee, Bonmyong; Obadina, Eniola; Sippo, Dorothy; Mullen, Lisa
2016-04-01
To review the ability of an abbreviated, high-risk, screening, breast MRI protocol to detect cancer and save resources. High-risk screening breast MR images were reviewed, from both an abbreviated protocol and a full diagnostic protocol. Differences in cancer detection, scanner utilization, interpretation times, and need for additional imaging were recorded in an integrated data form, and reviewed and compared. A total of 568 MRI cases were reviewed, with the abbreviated and full protocols. No difference was found in the number of cancers detected. Scan times were decreased by 18.8 minutes per case, for a total of 10,678 minutes (178 hours). Interpretation time, on average, was 1.55 minutes for the abbreviated protocol, compared with 6.43 minutes for the full protocol. Review of the full protocol led to a significant change in the final BI-RADS(®) assessment in 12 of 568 (2.1%) cases. Abbreviated MRI is as effective as full-protocol MRI for demonstration of cancers in the high-risk screening setting, with only 12 (2.1%) cases recommended for additional MRI evaluation. The efficiency and resource savings of an abbreviated protocol would be significant, and would allow for opportunities to provide MRI for additional patients, as well as improved radiologist time management and workflow, with the potential to add real-time MRI interpretation or double reading. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
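The quoted scanner-time savings follow directly from the per-case figure:

```python
# Sanity check of the reported resource savings (values from the abstract)
cases = 568
minutes_saved_per_case = 18.8

total_minutes = cases * minutes_saved_per_case
print(round(total_minutes))       # 10678 minutes
print(round(total_minutes / 60))  # 178 hours
```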
Respiratory Therapist Job Perceptions: The Impact of Protocol Use.
Metcalf, Ashley Y; Stoller, James K; Habermann, Marco; Fry, Timothy D
2015-11-01
Demand for respiratory care services and staffing levels of respiratory therapists (RTs) are expected to increase over the next several years. Hence, RT job satisfaction will be a critical factor in determining recruitment and retention of RTs. Determinants of RT job satisfaction measures have received little attention in the literature. This study examines the use of respiratory care protocols and associated levels of RT job satisfaction, turnover intentions, and job stress. Four hundred eighty-one RTs at 44 hospitals responded to an online survey regarding job satisfaction, turnover intentions, and job stress. Random coefficient modeling was used for analysis and to account for the nested structure of the data. Higher levels of RT protocol use were associated with higher levels of job satisfaction, lower rates of turnover intentions, and lower levels of job stress. In addition, RTs with greater experience had higher levels of job satisfaction, and RTs working at teaching hospitals had lower rates of turnover intentions. The study extends prior research by examining how the use of respiratory care protocols favorably affects RTs' perceptions of job satisfaction, turnover intention, and job stress. In a time of increasing demand for respiratory care services, protocols may enhance retention of RTs. Copyright © 2015 by Daedalus Enterprises.
Updated archaeointensity dataset from the SW Pacific
NASA Astrophysics Data System (ADS)
Hill, Mimi; Nilsson, Andreas; Holme, Richard; Hurst, Elliot; Turner, Gillian; Herries, Andy; Sheppard, Peter
2016-04-01
It is well known that there are far more archaeomagnetic data from the Northern Hemisphere than from the Southern. Here we present a compilation of archaeointensity data from the SW Pacific region covering the past 3000 years. The results have primarily been obtained from a collection of ceramics from the SW Pacific Islands including Fiji, Tonga, Papua New Guinea, New Caledonia and Vanuatu. In addition we present results obtained from heated clay balls from Australia. The microwave method has predominantly been used with a variety of experimental protocols including IZZI and Coe variants. Standard Thellier archaeointensity experiments using the IZZI protocol have also been carried out on selected samples. The dataset is compared to regional predictions from current global geomagnetic field models, and the influence of the new data on constraining the pfm9k family of global geomagnetic field models is explored.
Guzauskas, Gregory F; Villa, Kathleen F; Vanhove, Geertrui F; Fisher, Vicki L; Veenstra, David L
2017-03-01
To estimate the risk-benefit trade-off of a pediatric-inspired regimen versus hyperfractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone (hyper-CVAD) for first-line treatment of adolescent/young adult (AYA; ages 16-39 years) patients with Philadelphia-negative acute lymphoblastic leukemia. Patient outcomes were simulated using a 6-state Markov model, including complete response (CR), no CR, first relapse, second CR, second relapse, and death. A Weibull distribution was fit to the progression-free survival curve of hyper-CVAD-treated AYA patients from a single-center study, and comparable patient data from a retrospective study of pediatric regimen-treated AYA patients were utilized to estimate a relative progression difference (hazard ratio = 0.51) and model survival differences. Health-state utilities were estimated based on treatment stage, with an assumption that the pediatric protocol had 0.10 disutility compared with hyper-CVAD before the maintenance phase of treatment. Total life-years and quality-adjusted life-years (QALYs) were compared between treatment protocols at 1, 5, and 10 years, with additional probabilistic sensitivity analyses. Treatment with the pediatric-inspired protocol was associated with a 0.04 increase in life-years, but a 0.01 decrease in QALYs at 1 year. By years 5 and 10, the pediatric-inspired protocol resulted in 0.18 and 0.24 increases in life-years and 0.25 and 0.32 increases in QALYs, respectively, relative to hyper-CVAD. The lower quality of life associated with the induction and intensification phases of pediatric treatment was offset by more favorable progression-free survival and overall survival relative to hyper-CVAD. Our exploratory analysis suggests that, compared with hyper-CVAD, pediatric-inspired protocols may increase life-years throughout treatment stages and QALYs in the long term.
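The six-state structure described above can be sketched as a discrete-time Markov cohort model accumulating life-years and QALYs. The transition probabilities and utilities below are hypothetical placeholders for illustration, not the study's fitted Weibull-based values:

```python
import numpy as np

# Illustrative Markov cohort model with the six states named in the
# abstract. Monthly transition matrix P and per-state utilities are
# invented placeholders, NOT the study's estimates.
states = ["CR1", "no CR", "relapse1", "CR2", "relapse2", "death"]
P = np.array([  # rows sum to 1
    [0.97, 0.00, 0.02, 0.00, 0.00, 0.01],
    [0.05, 0.85, 0.00, 0.00, 0.00, 0.10],
    [0.00, 0.00, 0.80, 0.15, 0.00, 0.05],
    [0.00, 0.00, 0.00, 0.95, 0.03, 0.02],
    [0.00, 0.00, 0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])
utility = np.array([0.85, 0.60, 0.55, 0.80, 0.50, 0.0])

cohort = np.array([1.0, 0, 0, 0, 0, 0])  # everyone starts in first CR
life_years = qalys = 0.0
for _ in range(12 * 10):                  # 10-year horizon, monthly cycles
    cohort = cohort @ P
    life_years += cohort[:5].sum() / 12   # mass in the five alive states
    qalys += (cohort * utility).sum() / 12

print(f"life-years={life_years:.2f}, QALYs={qalys:.2f}")
```

Comparing two regimens then amounts to running this loop with two transition matrices (e.g. with progression hazards scaled by the reported hazard ratio) and differencing the totals.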
Hawley, Philippa Helen; Byeon, Jai Jun
2008-05-01
Constipation is a common and distressing condition in patients with cancer, especially those taking opioid analgesics. Many institutions prevent and treat constipation with titrated laxatives, which is known as a bowel protocol. An effective and well-tolerated bowel protocol is a very important component of cancer care, and there is little evidence on which to base selection of the most appropriate agents. This study compares a protocol of the stimulant laxative sennosides alone with a protocol of sennosides plus the stool softener docusate, in hospitalized patients at an oncology center. The docusate-containing protocol had an initial docusate-only step for patients not taking opioids, and four to six 100-mg capsules of docusate sodium in addition to the sennosides for the rest of the protocol. Thirty patients received the sennosides-only (S) protocol and 30 the sennosides plus docusate (DS) protocol. The efficacy and adverse effects of the protocols were monitored for 5-12 days. The two protocols were used sequentially, creating two cohorts, one on each protocol. Eighty percent of patients were taking oral opioids and 72% were admitted for symptom control/supportive care. Over a total of 488 days of observation it was found that the S protocol produced more bowel movements than the DS protocol, and in the symptom control/supportive care patients this difference was statistically significant (p < 0.05). In the S group admitted for symptom control/supportive care 62.5% had a bowel movement more than 50% of days, as compared with 32% in those receiving the DS protocol. Fifty-seven percent of the DS group required additional interventions (lactulose, suppositories or enemas) compared to 40% in the S group. Cramps were reported equally by 3 (10%) patients in each group. Eight patients (27%) experienced diarrhea in the S group compared to 4 (13%) in the DS group. 
The addition of the initial docusate-only step and adding docusate 400-600 mg/d to the sennosides did not reduce bowel cramps, and was less effective in inducing laxation than the sennosides-only protocol. Further research into the appropriate use of docusate and into the details of bowel protocol design are required.
NASA Astrophysics Data System (ADS)
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-10-01
The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols by which a transgenic mouse model is produced. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized clinical control design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, and attitudes toward computers for learning and toward science careers. Students who used the simulation increased their procedural and declarative knowledge regarding molecular biology compared with those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulation of complex transgenic protocols has the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.
Two-Stage Residual Inclusion Estimation in Health Services Research and Health Economics.
Terza, Joseph V
2018-06-01
Empirical analyses in health services research and health economics often require implementation of nonlinear models whose regressors include one or more endogenous variables: regressors that are correlated with the unobserved random component of the model. In such cases, implementation of conventional regression methods that ignore endogeneity will likely produce results that are biased and not causally interpretable. Terza et al. (2008) discuss a relatively simple estimation method that avoids endogeneity bias and is applicable in a wide variety of nonlinear regression contexts. They call this method two-stage residual inclusion (2SRI). In the present paper, I offer a 2SRI how-to guide for practitioners and a step-by-step protocol that can be implemented with any of the popular statistical or econometric software packages. We introduce the protocol and its Stata implementation in the context of a real data example. Implementation of 2SRI for a very broad class of nonlinear models is then discussed. Additional examples are given. We analyze cigarette smoking as a determinant of infant birthweight using data from Mullahy (1997). It is hoped that the discussion will serve as a practical guide to implementation of the 2SRI protocol for applied researchers. © Health Research and Educational Trust.
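The 2SRI recipe (first stage: regress the endogenous variable on instruments; second stage: include the first-stage residual as an extra regressor) can be sketched on simulated data. The data-generating process below is invented, and a linear second stage is used for brevity; the point of 2SRI is that the same recipe carries over to nonlinear second-stage models:

```python
import numpy as np

# Minimal 2SRI (control-function) sketch on simulated data.
# Stage 1: regress the endogenous regressor x on the instrument z.
# Stage 2: include the stage-1 residual alongside x in the outcome model.
rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                 # instrument
e = rng.normal(size=n)                 # unobserved confounder
x = 0.8 * z + e + rng.normal(size=n)   # endogenous regressor
y = 1.0 + 2.0 * x + 1.5 * e + rng.normal(size=n)  # true effect of x is 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS is biased because e enters both x and y.
b_naive = ols(np.column_stack([ones, x]), y)[1]
# Stage 1: residuals from x ~ z.
g = ols(np.column_stack([ones, z]), x)
u = x - np.column_stack([ones, z]) @ g
# Stage 2: y ~ x + u; the residual term absorbs the endogeneity.
b_2sri = ols(np.column_stack([ones, x, u]), y)[1]

print(f"naive OLS: {b_naive:.2f}, 2SRI: {b_2sri:.2f}")  # 2SRI is close to 2.0
```

In this linear setting 2SRI coincides with the classical control-function / 2SLS estimator; its distinctive value, as the abstract notes, is in nonlinear second stages (logit, Poisson, etc.).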
Using semantics for representing experimental protocols.
Giraldo, Olga; García, Alexander; López, Federico; Corcho, Oscar
2017-11-13
An experimental protocol is a sequence of tasks and operations executed to perform experimental research in biological and biomedical areas, e.g. biology, genetics, immunology, neurosciences, virology. Protocols often include references to equipment, reagents, descriptions of critical steps, troubleshooting and tips, as well as any other information that researchers deem important for facilitating the reusability of the protocol. Although experimental protocols are central to reproducibility, the descriptions are often cursory. There is a need for a unified framework with respect to the syntactic structure and the semantics for representing experimental protocols. In this paper we present the "SMART Protocols ontology", an ontology for representing experimental protocols. Our ontology represents the protocol as a workflow with domain-specific knowledge embedded within a document. We also present the Sample Instrument Reagent Objective (SIRO) model, which represents the minimal common information shared across experimental protocols. SIRO was conceived in the same realm as the Patient Intervention Comparison Outcome (PICO) model that supports search, retrieval and classification purposes in evidence-based medicine. We evaluate our approach against a set of competency questions modeled as SPARQL queries and processed against a set of published and unpublished protocols modeled with the SP Ontology and the SIRO model. Our approach makes it possible to answer queries such as "Which protocols use tumor tissue as a sample?". Improving reporting structures for experimental protocols requires collective efforts from authors, peer reviewers, editors and funding bodies. The SP Ontology is a contribution towards this goal, building upon previous experiences and bringing together the views of researchers who manage protocols in their laboratory work. Website: https://smartprotocols.github.io/.
Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro
2009-06-01
A validation study of an in vitro skin irritation testing method using a reconstructed human skin model has been conducted by the European Centre for the Validation of Alternative Methods (ECVAM), and a protocol using EpiSkin (SkinEthic, France) has been approved. The structural and performance criteria of skin models for testing are defined in the ECVAM Performance Standards announced along with the approval. We have performed several evaluations of the new reconstructed human epidermal model LabCyte EPI-MODEL, and confirmed that it is applicable to skin irritation testing as defined in the ECVAM Performance Standards. We selected 19 materials (nine irritants and ten non-irritants) available in Japan as test chemicals among the 20 reference chemicals described in the ECVAM Performance Standard. A test chemical was applied to the surface of the LabCyte EPI-MODEL for 15 min, after which it was completely removed and the model then post-incubated for 42 hr. Cell viability was measured by MTT assay and skin irritancy of the test chemical evaluated. In addition, interleukin-1 alpha (IL-1alpha) concentration in the culture supernatant after post-incubation was measured to provide a complementary evaluation of skin irritation. Evaluation of the 19 test chemicals resulted in 79% accuracy, 78% sensitivity and 80% specificity, confirming that the in vitro skin irritancy of the LabCyte EPI-MODEL correlates highly with in vivo skin irritation. These results suggest that LabCyte EPI-MODEL is applicable to the skin irritation testing protocol set out in the ECVAM Performance Standards.
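With nine irritants and ten non-irritants, the reported 78% sensitivity and 80% specificity can only correspond to 7 of 9 irritants and 8 of 10 non-irritants classified correctly (these counts are inferred here, not stated in the abstract), which also reproduces the 79% accuracy:

```python
# Confusion-matrix check of the reported figures. Counts are inferred
# from the stated panel sizes (9 irritants, 10 non-irritants).
tp, fn = 7, 2   # irritants correctly called / missed
tn, fp = 8, 2   # non-irritants correctly called / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"{accuracy:.0%} {sensitivity:.0%} {specificity:.0%}")  # 79% 78% 80%
```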
Berman, D Wayne
2011-08-01
Given that new protocols for assessing asbestos-related cancer risk have recently been published, questions arise concerning how they compare to the "IRIS" protocol currently used by regulators. The newest protocols incorporate findings from 20 additional years of literature. Thus, differences between the IRIS and newer Berman and Crump protocols are examined to evaluate whether these protocols can be reconciled. Risks estimated by applying these protocols to real exposure data from both laboratory and field studies are also compared to assess the relative health protectiveness of each protocol. The reliability of risks estimated using the two protocols are compared by evaluating the degree with which each potentially reproduces the known epidemiology study risks. Results indicate that the IRIS and Berman and Crump protocols can be reconciled; while environment-specific variation within fiber type is apparently due primarily to size effects (not addressed by IRIS), the 10-fold (average) difference between amphibole asbestos risks estimated using each protocol is attributable to an arbitrary selection of the lowest of available mesothelioma potency factors in the IRIS protocol. Thus, the IRIS protocol may substantially underestimate risk when exposure is primarily to amphibole asbestos. Moreover, while the Berman and Crump protocol is more reliable than the IRIS protocol overall (especially for predicting amphibole risk), evidence is presented suggesting a new fiber-size-related adjustment to the Berman and Crump protocol may ultimately succeed in reconciling the entire epidemiology database. However, additional data need to be developed before the performance of the adjusted protocol can be fully validated. © 2011 Society for Risk Analysis.
Intermittent Fasting: Is the Wait Worth the Weight?
Stockman, Mary-Catherine; Thomas, Dylan; Burke, Jacquelyn; Apovian, Caroline M
2018-06-01
We review the underlying mechanisms and potential benefits of intermittent fasting (IF) from animal models and recent clinical trials. Numerous variations of IF exist, and study protocols vary greatly in their interpretations of this weight loss trend. Most human IF studies result in minimal weight loss and marginal improvements in metabolic biomarkers, though outcomes vary. Some animal models have found that IF reduces oxidative stress, improves cognition, and delays aging. Additionally, IF has anti-inflammatory effects, promotes autophagy, and benefits the gut microbiome. The benefit-to-harm ratio varies by model, IF protocol, age at initiation, and duration. We provide an integrated perspective on potential benefits of IF as well as key areas for future investigation. In clinical trials, caloric restriction and IF result in similar degrees of weight loss and improvement in insulin sensitivity. Although these data suggest that IF may be a promising weight loss method, IF trials have been of moderate sample size and limited duration. More rigorous research is needed.
Dedeurwaerdere, Tom; Melindi-Ghidi, Paolo; Broggiato, Arianna
2016-01-01
This paper aims to get a better understanding of the motivational and transaction cost features of building global scientific research commons, with a view to contributing to the debate on the design of appropriate policy measures under the recently adopted Nagoya Protocol. For this purpose, the paper analyses the results of a world-wide survey of managers and users of microbial culture collections, which focused on the role of social and internalized motivations, organizational networks and external incentives in promoting the public availability of upstream research assets. Overall, the study confirms the hypotheses of the social production model of information and shareable goods, but it also shows the need to complete this model. For the sharing of materials, the underlying collaborative economy in excess capacity plays a key role in addition to the social production, while for data, competitive pressures amongst scientists tend to play a bigger role.
Herrlinger, Stephanie A; Shao, Qiang; Ma, Li; Brindley, Melinda; Chen, Jian-Fu
2018-04-26
The Zika virus (ZIKV) is a flavivirus currently endemic in North, Central, and South America. It is now established that ZIKV can cause microcephaly and additional brain abnormalities. However, the mechanism underlying the pathogenesis of ZIKV in the developing brain remains unclear. Intracerebral surgical methods are frequently used in neuroscience research to address questions about both normal and abnormal brain development and brain function. This protocol utilizes classical surgical techniques and describes methods that allow one to model ZIKV-associated human neurological disease in the mouse nervous system. While direct brain inoculation does not model the normal mode of virus transmission, the method allows investigators to ask targeted questions concerning the consequences of ZIKV infection in the developing brain. This protocol describes embryonic, neonatal, and adult stages of intraventricular inoculation of ZIKV. Once mastered, this method becomes a straightforward and reproducible technique that takes only a few hours to perform.
John, Seby; Thompson, Nicolas R; Lesko, Terry; Papesh, Nancy; Obuchowski, Nancy; Tomic, Dan; Wisco, Dolora; Khawaja, Zeshaun; Uchino, Ken; Man, Shumei; Cheng-Ching, Esteban; Toth, Gabor; Masaryk, Thomas; Ruggieri, Paul; Modic, Michael; Hussain, Muhammad Shazam
2017-10-01
Patient selection is important to determine the best candidates for endovascular stroke therapy. In application of a hyperacute magnetic resonance imaging (MRI) protocol for patient selection, we have shown decreased utilization with improved outcomes. A cost analysis comparing the pre- and post-MRI protocol time periods was performed to determine if the previous findings translated into cost opportunities. We retrospectively identified individuals considered for endovascular stroke therapy from January 2008 to August 2012 who were ≤8 h from stroke symptoms onset. Patients prior to April 30, 2010 were selected based on results of the computed tomography/computed tomography angiography alone (pre-hyperacute), whereas patients after April 30, 2010 were selected based on results of MRI (post-hyperacute MRI). Demographic, outcome, and financial information was collected. Log-transformed average daily direct costs were regressed on time period. The regression model included demographic and clinical covariates as potential confounders. Multiple imputation was used to account for missing data. We identified 267 patients in our database (88 patients in pre-hyperacute MRI period, 179 in hyperacute MRI protocol period). Patient length of stay was not significantly different in the hyperacute MRI protocol period as compared to the pre-hyperacute MRI period (10.6 vs. 9.9 days, p < 0.42). The median of average daily direct costs was reduced by 24.5% (95% confidence interval 14.1-33.7%, p < 0.001). Use of the hyperacute MRI protocol translated into reduced costs, in addition to reduced utilization and better outcomes. MRI selection of patients is an effective strategy, both for patients and hospital systems.
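Because average daily direct costs were modeled on the log scale, the coefficient on the time-period indicator converts to a proportional change via exp(b) - 1. A quick check of what coefficient the reported 24.5% reduction implies:

```python
import math

# On the log scale, a coefficient b on the post-protocol period
# indicator corresponds to a proportional cost change of exp(b) - 1.
# The reported 24.5% reduction therefore implies b of about -0.281.
b = math.log(1 - 0.245)
print(round(b, 3))                        # -0.281
print(round((math.exp(b) - 1) * 100, 1))  # -24.5, recovering the input
```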
Animal Models of Mycobacteria Infection
Ordway, Diane J.; Orme, Ian M.
2011-01-01
This unit describes the infection of mice and guinea pigs with mycobacteria via various routes, as well as necropsy methods for the determination of mycobacterial loads within target organs. Additionally, methods for cultivating mycobacteria and preparing stocks are described. The protocols outlined are primarily used for M. tuberculosis, but can also be used for the study of other non-tuberculosis mycobacterial species. PMID:18432756
Particle bombardment-mediated gene transfer and transient GFP expression in Setaria viridis.
Mookkan, Muruganantham
2018-04-03
Setaria viridis is one of the most important model grasses for studying monocot plant biology. Transient gene expression is a very important tool in plant biotechnology, functional genomics, and CRISPR-Cas9 genome editing via particle bombardment. In this study, a particle bombardment-mediated protocol was developed to introduce DNA into Setaria viridis in vitro leaf explants. In addition, physical and biological parameters, such as helium pressure, distance from stopping screen to the target tissues, DNA concentration, and number of bombardments, were tested and optimized. Optimal transient GFP expression was achieved using 1.5 µg plasmid DNA with 0.6 µm gold particles and a 6 cm bombardment distance at 1,100 psi. Two rounds of bombardment yielded the maximum number of transient GFP-expressing foci. This simple protocol will be helpful for genomics studies in the S. viridis monocot model.
DEVELOPMENT OF MODELING PROTOCOLS FOR USE IN DETERMINING SEDIMENT TMDLS
Modeling protocols for use in determining sediment TMDLs are being developed to provide the Office of Water, Regions and the States with assistance in determining TMDLs for sediment impaired water bodies. These protocols will supplement the protocols developed by the Office of W...
Automating Security Protocol Analysis
2004-03-01
language that allows easy representation of pattern interaction. Using CSP, Lowe tests whether a protocol achieves authentication. In the case of...only to correctly code whatever protocol they intend to evaluate. The tool, OCaml 3.04 [1], translates the protocol into Horn clauses and then...model protocol transactions. One example of automated modeling software is Maude [19]. Maude was the intended language for this research, but Java
Bradshaw, Richard T; Essex, Jonathan W
2016-08-09
Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
NASA Astrophysics Data System (ADS)
Lau, Katherine; Isabelle, Martin; Lloyd, Gavin R.; Old, Oliver; Shepherd, Neil; Bell, Ian M.; Dorney, Jennifer; Lewis, Aaran; Gaifulina, Riana; Rodriguez-Justo, Manuel; Kendall, Catherine; Stone, Nicolas; Thomas, Geraint; Reece, David
2016-03-01
Despite its demonstrated potential as an accurate cancer diagnostic tool, Raman spectroscopy (RS) is yet to be adopted by the clinic for histopathology reviews. The Stratified Medicine through Advanced Raman Technologies (SMART) consortium has begun to address some of the hurdles to its adoption for cancer diagnosis. These hurdles include awareness and acceptance of the technology, practicality of integration into the histopathology workflow, data reproducibility and availability of transferable models. We have formed a consortium, in joint efforts, to develop optimised protocols for tissue sample preparation, data collection and analysis. These protocols will be supported by the provision of suitable hardware and software tools to allow statistically sound classification models to be built and transferred for use on different systems. In addition, we are building a validated gastrointestinal (GI) cancers model, which can be trialled as part of the histopathology workflow at hospitals, and a classification tool. At the end of the project, we aim to deliver a robust Raman-based diagnostic platform to enable clinical researchers to stage cancer, define tumour margins, build cancer diagnostic models and discover novel disease biomarkers.
Méndez, Rose Marie; Kulbok, Pamela; Lawson, Sarah; Matos, Abigail
2013-01-01
Sexual violence is a public health problem in Puerto Rico (PR), with an incidence of 7.4 cases for every 10,000 people during 2005-2006 (Departamento de Salud Secretaría Auxiliar de Salud Familiar y Servicios Integrados, 2007). Findings from the literature review indicated that the traditional model of care provided to the victims of sexual violence in the Emergency Department is incomplete; furthermore, it may cause revictimization because of the attitudes, behaviors, and practices of the community service providers, resulting in additional trauma. Emerging evidence demonstrates that Sexual Assault Nurse Examiner (SANE) programs are providing effective quality care. In PR, SANEs do not intervene in sexual assault cases; nevertheless, the Department of Health of PR has recognized the importance of SANE intervention. Consequently, there is a need for current evidence-based protocols and standards of care to describe the procedures, roles, and responsibilities for the provision of quality care to victims. This project involves the implementation of the Stufflebeam's Context-Input-Process-Product Model in the creation of the Commonwealth of Puerto Rico National Protocol for the Management of Victims of Sexual Violence: Adults/Adolescents.
Hseu, Zeng-Yei; Zehetner, Franz
2014-01-01
This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297
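The combined empirical model described above (soil pH, clay content, total metal and extractable metal considered together) amounts to a multiple regression of crop uptake on those predictors. A sketch on synthetic data; all values, coefficients and noise levels below are invented for illustration, not the study's data:

```python
import numpy as np

# Synthetic sketch of the combined uptake model: regress crop metal
# uptake on pH, clay, total metal and extractable metal together.
rng = np.random.default_rng(1)
n = 22  # number of representative soils, as in the study
pH = rng.uniform(4.5, 7.5, n)
clay = rng.uniform(5, 45, n)          # %
total = rng.uniform(10, 300, n)       # mg/kg total metal
extractable = 0.2 * total * (7.5 - pH) / 3 + rng.normal(0, 2, n)
uptake = (0.5 + 0.05 * extractable - 0.3 * pH + 0.01 * clay
          + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), pH, clay, total, extractable])
beta, *_ = np.linalg.lstsq(X, uptake, rcond=None)
pred = X @ beta
r2 = 1 - ((uptake - pred) ** 2).sum() / ((uptake - uptake.mean()) ** 2).sum()
print(f"R^2 of combined model: {r2:.2f}")
```

In the study, comparing such a combined fit against simple correlations with each extraction protocol alone is what revealed the improvement, particularly for Pb.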
NASA Technical Reports Server (NTRS)
Cohen, Richard
1999-01-01
Alterations in cardiovascular regulation and function that occur during and after space flight have been reported. These alterations are manifested, for example, by reduced orthostatic tolerance upon reentry to the earth's gravity from space. However, the precise physiologic mechanisms responsible for these alterations remain to be fully elucidated. Perhaps, as a result, effective countermeasures have yet to be developed. In this project we apply a powerful, new method - cardiovascular system identification (CSI) - for the study of the effects of space flight on the cardiovascular system so that effective countermeasures can be developed. CSI involves the mathematical analysis of second-to-second fluctuations in non-invasively measured heart rate, arterial blood pressure (ABP), and instantaneous lung volume (ILV - respiratory activity) in order to characterize quantitatively the physiologic mechanisms responsible for the couplings between these signals. Through the characterization of all the physiologic mechanisms coupling these signals, CSI provides a model of the closed-loop cardiovascular regulatory state in an individual subject. The model includes quantitative descriptions of the heart rate baroreflex, autonomic function, as well as other important physiologic mechanisms. We are in the process of incorporating beat-to-beat fluctuations of stroke volume into the CSI technique in order to quantify additional physiologic mechanisms such as those involved in control of peripheral vascular resistance and alterations in cardiac contractility. We apply CSI in conjunction with the two general protocols of the Human Studies Core project. The first protocol involves ground-based, human head down tilt bed rest to simulate microgravity and acute stressors - upright tilt, standing and bicycle exercise - to provide orthostatic and exercise challenges. 
The second protocol is intended to be the same as the first but with the addition of sleep deprivation, to determine whether this contributes to cardiovascular alterations. In these studies, we focus on the basic physiologic mechanisms responsible for the alterations in cardiovascular regulation and function during the simulated microgravity in order to formulate hypotheses regarding which countermeasures are likely to be most effective. Compared to our original proposal, the protocol we are using has been slightly modified to lengthen the bed rest period to 16 days and streamline the data collection. These modifications provide us with data on a longer bed rest period and have enabled us to increase our subject throughput. Based on review of our preliminary data, we have decided to test a countermeasure which is applied at the very end of the bed rest period. We will use the same bed rest protocol to test this countermeasure. We anticipate completing the baseline data collection in our first protocol plus testing of the countermeasure in an additional eight subjects, at which time we plan to initiate the second protocol, which includes sleep deprivation. In future studies, we plan to apply CSI to test other potential countermeasures in conjunction with the same bed rest, sleep deprivation and acute stressor models. We also anticipate applying CSI to study astronauts before and after space flight and, ultimately, during space flight. The application of CSI is providing information relevant to the development and evaluation of effective countermeasures allowing humans to adapt appropriately upon re-exposure to a gravity field, and to live and work for longer periods of time in microgravity.
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. At present, field testing to determine the performance of a reconfigurable protocol stack is unrealistic, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation of protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraint, task interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of layer-by-layer optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Mount Rainier National Park and Olympic National Park elk monitoring program annual report 2011
Happe, Patricia J.; Reid, Mason; Griffin, Paul C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Tirhi, Michelle; McCorquodale, Scott
2013-01-01
Fiscal year 2011 was the first year of implementing an approved elk monitoring protocol in Mount Rainier (MORA) and Olympic (OLYM) National Parks in the North Coast and Cascades Network (NCCN) (Griffin et al. 2012). However, it was the fourth and second year, respectively, of gathering data according to protocol in MORA and OLYM; data gathered during the protocol development phase followed procedures that are laid out in the protocol. Elk monitoring in these large wilderness parks relies on aerial surveys from a helicopter. Summer surveys are intended to provide quantitative estimates of abundance, sex and age composition, and distribution of migratory elk in high-elevation trend count areas. An unknown number of elk is not detected during surveys; however, the protocol estimates the number of missed elk by applying a model that accounts for detection bias. Detection bias in elk surveys in MORA is estimated using a double-observer sightability model that was developed using survey data from 2008-2010 (Griffin et al. 2012). That model was developed using elk that had previously been equipped with radio collars by cooperating tribes. At the onset of protocol development in OLYM there were no existing radio collars on elk. Consequently, the majority of the effort in OLYM in the past 4 years has focused on capturing and radio-collaring elk and conducting the sightability trials needed to develop a double-observer sightability model in OLYM. In this annual report we provide estimates of abundance and composition for MORA elk, raw counts of elk made in OLYM, and descriptions of sightability trials conducted in OLYM. At MORA, the North trend count area was surveyed twice and the South once (North Rainier and South Rainier herds, respectively). We counted 373 and 267 elk during two replicate surveys of the North Rainier herd, and 535 elk in the South Rainier herd.
Using the model, we estimated that 413 and 320 elk were in the North trend count area and 652 elk in the South trend count area at the times of the respective surveys. At OLYM, the Core and Northwest trend count areas were completely surveyed, as were portions of the Quinault. In addition, we surveyed 10 survey units specifically to obtain resight data. Two hundred forty-eight elk were counted in the Core, 19 in the Northwest, and 169 in the Quinault. We conducted double-observer sightability trials associated with 14 collared elk groups for use in developing the double-observer sightability model for OLYM.
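The double-observer sightability model itself is not reproduced in the abstract, but its core idea — inflating a raw count by an estimated detection probability — can be sketched as follows. The 0.90 detection probability below is hypothetical, chosen only to roughly match the 373-counted / 413-estimated figures above.

```python
def corrected_abundance(raw_count, detection_prob):
    """Adjust a raw survey count for imperfect detection
    (Horvitz-Thompson-style correction)."""
    if not 0 < detection_prob <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return raw_count / detection_prob

# With a hypothetical detection probability of 0.90, a raw count of
# 373 elk implies roughly 373 / 0.90 ≈ 414 animals present.
print(round(corrected_abundance(373, 0.90)))  # 414
```

The real model estimates a separate detection probability per elk group from covariates (group size, cover, observer configuration) and sums the group-level corrections.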
Kwon, Sung Woo; Kim, Young Jin; Shim, Jaemin; Sung, Ji Min; Han, Mi Eun; Kang, Dong Won; Kim, Ji-Ye; Choi, Byoung Wook; Chang, Hyuk-Jae
2011-04-01
To evaluate the prognostic outcome of cardiac computed tomography (CT) for prediction of major adverse cardiac events (MACEs) in low-risk patients suspected of having coronary artery disease (CAD) and to explore the differential prognostic values of coronary artery calcium (CAC) scoring and coronary CT angiography. Institutional review committee approval and informed consent were obtained. In 4338 patients who underwent 64-section CT for evaluation of suspected CAD, both CAC scoring and CT angiography were concurrently performed by using standard scanning protocols. Follow-up clinical outcome data regarding composite MACEs were procured. Multivariable Cox proportional hazards models were developed to predict MACEs. Risk-adjusted models incorporated traditional risk factors for CAC scoring and coronary CT angiography. During the mean follow-up of 828 days ± 380, there were 105 MACEs, for an event rate of 3%. The presence of obstructive CAD at coronary CT angiography had independent prognostic value, which escalated according to the number of stenosed vessels (P < .001). In the receiver operating characteristic curve (ROC) analysis, the superiority of coronary CT angiography to CAC scoring was demonstrated by a significantly greater area under the ROC curve (AUC) (0.892 vs 0.810, P < .001), whereas no significant incremental value for the addition of CAC scoring to coronary CT angiography was established (AUC = 0.892 for coronary CT angiography alone vs 0.902 with addition of CAC scoring, P = .198). Coronary CT angiography is better than CAC scoring in predicting MACEs in low-risk patients suspected of having CAD. Furthermore, the current standard multisection CT protocol (coronary CT angiography combined with CAC scoring) has no incremental prognostic value compared with coronary CT angiography alone. Therefore, in terms of determining prognosis, CAC scoring may no longer need to be incorporated in the cardiac CT protocol in this population. © RSNA, 2011.
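The AUC comparison reported above can be illustrated with the Mann-Whitney formulation of the AUC: the probability that a randomly chosen event case receives a higher risk score than a randomly chosen non-event case (ties counted as one half). The scores below are invented for illustration; they are not the study's data.

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case
    outranks a negative case (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores: patients with a MACE vs. without.
mace = [0.9, 0.8, 0.7, 0.6]
no_mace = [0.5, 0.4, 0.3, 0.65]
print(auc(mace, no_mace))  # 0.9375
```

Comparing two such AUCs for statistical significance, as the study does, additionally requires a correlated-ROC test (e.g. DeLong's method), which is beyond this sketch.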
77 FR 6094 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-07
....-International Atomic Energy Agency Additional Protocol. Under the U.S.-International Atomic Energy Agency (IAEA...-related activities to the IAEA and potentially provide access to IAEA inspectors for verification purposes. The U.S.-IAEA Additional Protocol permits the United States unilaterally to declare exclusions from...
Van Berkel, Megan A; MacDermott, Jennifer; Dungan, Kathleen M; Cook, Charles H; Murphy, Claire V
2017-12-01
Although studies demonstrate techniques to limit hypoglycaemia in critically ill patients, there are limited data supporting methods to improve management of existing hypoglycaemia. Assess the impact and sustainability of a computerised, three-tiered, nurse-driven protocol for hypoglycaemia treatment. Retrospective pre- and post-protocol study. Neurosciences and surgical intensive care units at a tertiary academic medical centre. Patients with a hypoglycaemic episode were included during a pre-protocol or post-protocol implementation period. An additional six-month cohort was evaluated to assess sustainability. Fifty-four patients were included for evaluation (35 pre- and 19 post-protocol); 122 patients were included in the sustainability cohort. Hypoglycaemia treatment rates significantly improved in the post-protocol cohort (20% vs. 52.6%, p=0.014), with additional improvement to 79.5% in the sustainability cohort. Time to follow-up blood glucose measurement after treatment decreased from 122 [Q1-Q3: 46-242] minutes pre-protocol to 25 [Q1-Q3: 9-48] minutes post-protocol (p<0.0001). This reduction was maintained in the sustainability cohort [median of 29 min (Q1-Q3: 20-51)]. Implementation of a nurse-driven, three-tiered protocol for treatment of hypoglycaemia significantly improved treatment rates and reduced time to recheck blood glucose measurement. These benefits were sustained during a six-month period after protocol implementation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Incorporating thyroid markers in Down syndrome screening protocols.
Dhaifalah, Ishraq; Salek, Tomas; Langova, Dagmar; Cuckle, Howard
2017-05-01
The article aimed to assess the benefit of incorporating maternal serum thyroid disease marker levels (thyroid-stimulating hormone and free thyroxine) into first trimester Down syndrome screening protocols. Statistical modelling was used to predict performance with and without the thyroid markers. Two protocols were considered: the combined test and the contingent cell-free DNA (cfDNA) test, where 15-40% women are selected for cfDNA because of increased risk based on combined test results. Published parameters were used for the combined test, cfDNA and the Down syndrome means for thyroid-stimulating hormone and free thyroxine; other parameters were derived from a series of 5230 women screened for both thyroid disease and Down syndrome. Combined test: For a fixed 85% detection rate, the predicted false positive rate was reduced from 5.3% to 3.6% with the addition of the thyroid markers. Contingent cfDNA test: For a fixed 95% detection rate, the proportion of women selected for cfDNA was reduced from 25.6% to 20.2%. When screening simultaneously for maternal thyroid disease and Down syndrome, thyroid marker levels should be used in the calculation of Down syndrome risk. The benefit is modest but can be achieved with no additional cost. © 2017 John Wiley & Sons, Ltd.
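Screening risk calculations of this kind typically multiply the prior (age-related) odds by a likelihood ratio derived from Gaussian marker distributions in affected and unaffected pregnancies, usually on a log10 MoM scale. A minimal sketch, with illustrative distribution parameters that are not the published values:

```python
from statistics import NormalDist

def update_risk(prior_risk, x, affected, unaffected):
    """Update a prior Down syndrome risk with one marker value using the
    likelihood ratio of two Gaussian populations (log10 MoM scale)."""
    lr = affected.pdf(x) / unaffected.pdf(x)
    prior_odds = prior_risk / (1 - prior_risk)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical log10-MoM distributions for a marker such as TSH;
# the means and SDs below are invented for illustration only.
affected = NormalDist(mu=0.20, sigma=0.25)
unaffected = NormalDist(mu=0.00, sigma=0.25)

# A marker value closer to the affected mean raises the risk above
# the prior; a value at the unaffected mean lowers it.
print(round(update_risk(1 / 250, 0.30, affected, unaffected), 4))
```

In practice several markers are combined by multiplying likelihood ratios (or using a multivariate Gaussian to account for correlations), which is how the thyroid markers would enter the combined-test risk.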
Continuous-variable protocol for oblivious transfer in the noisy-storage model.
Furrer, Fabian; Gehring, Tobias; Schaffner, Christian; Pacher, Christoph; Schnabel, Roman; Wehner, Stephanie
2018-04-13
Cryptographic protocols are the backbone of our information society. This includes two-party protocols which offer protection against distrustful players. Such protocols can be built from a basic primitive called oblivious transfer. We present and experimentally demonstrate here a quantum protocol for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations that we derive for continuous-variable systems and that differ from those used in quantum key distribution. We experimentally demonstrate the proposed oblivious transfer protocol in a proof-of-principle experiment for various channel losses, using entangled two-mode squeezed states measured with balanced homodyne detection. Our work enables the implementation of arbitrary two-party quantum cryptographic protocols with continuous-variable communication systems.
Häme, Yrjö; Angelini, Elsa D.; Hoffman, Eric A.; Barr, R. Graham; Laine, Andrew F.
2014-01-01
The extent of pulmonary emphysema is commonly estimated from CT images by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions in the lung and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the present model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was used to quantify emphysema on a cohort of 87 subjects, with repeated CT scans acquired over a time period of 8 years using different imaging protocols. The scans were acquired approximately annually, and the data set included a total of 365 scans. The results show that the emphysema estimates produced by the proposed method have very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. In addition, the generated emphysema delineations promise great advantages for regional analysis of emphysema extent and progression, possibly advancing disease subtyping. PMID:24759984
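The standard approach that this paper improves upon — the proportional area of voxels below a fixed attenuation threshold, commonly %LAA at -950 HU — can be computed directly. The voxel values below are synthetic; the paper's own method replaces this fixed threshold with a hidden Markov measure field segmentation.

```python
import numpy as np

def emphysema_index(hu, threshold=-950):
    """Proportional area/volume of lung voxels below an attenuation
    threshold (the standard %LAA-950 emphysema score)."""
    hu = np.asarray(hu)
    return 100.0 * np.mean(hu < threshold)

# Hypothetical sample of lung-voxel HU values.
voxels = np.array([-980, -960, -940, -900, -870, -955, -820, -990])
print(emphysema_index(voxels))  # 50.0
```

The fixed threshold is exactly what makes the standard score sensitive to imaging protocol and inspiration level, which motivates the adaptive parametric model described above.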
Huser, Vojtech; Sastry, Chandan; Breymaier, Matthew; Idriss, Asma; Cimino, James J
2015-10-01
Efficient communication of a clinical study protocol and case report forms during all stages of a human clinical study is important for many stakeholders. An electronic and structured study representation format that can be used throughout the whole study life-span can improve such communication and potentially lower total study costs. The most relevant standard for representing clinical study data, applicable to unregulated as well as regulated studies, is the Operational Data Model (ODM), in development since 1999 by the Clinical Data Interchange Standards Consortium (CDISC). ODM's initial objective was the exchange of case report form data, but it is increasingly utilized in other contexts. An ODM extension called Study Design Model, introduced in 2011, provides additional protocol representation elements. Using a case study approach, we evaluated ODM's ability to capture all necessary protocol elements during a complete clinical study lifecycle in the Intramural Research Program of the National Institutes of Health. ODM offers the advantage of a single format for institutions that deal with hundreds or thousands of concurrent clinical studies and maintain a data warehouse for these studies. For each study stage, we present a list of gaps in the ODM standard and identify necessary vendor or institutional extensions that can compensate for such gaps. The current version of ODM (1.3.2) has only partial support for study protocol and study registration data, mainly because these lie outside the original development goal. ODM provides comprehensive support for representation of case report forms (in both the design stage and with patient level data). Inclusion of requirements of observational, non-regulated or investigator-initiated studies (outside Food and Drug Administration (FDA) regulation) can further improve future revisions of the standard. Published by Elsevier Inc.
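As a minimal illustration of working with ODM, a fragment can be parsed with a standard XML library. The snippet below is illustrative only, not a complete, schema-valid ODM document; the element names and the ODM 1.3 namespace follow our understanding of the CDISC standard.

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative ODM fragment (not schema-valid or complete).
odm_xml = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <Study OID="ST.001">
    <GlobalVariables>
      <StudyName>Example Protocol</StudyName>
    </GlobalVariables>
    <MetaDataVersion OID="MDV.1" Name="v1">
      <FormDef OID="F.VITALS" Name="Vital Signs" Repeating="No"/>
      <FormDef OID="F.LABS" Name="Laboratory" Repeating="Yes"/>
    </MetaDataVersion>
  </Study>
</ODM>"""

# ODM uses a default namespace, so queries must be namespace-qualified.
ns = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}
root = ET.fromstring(odm_xml)
forms = [f.get("Name") for f in root.findall(".//odm:FormDef", ns)]
print(forms)  # ['Vital Signs', 'Laboratory']
```

A data warehouse built on ODM would walk exactly this kind of metadata tree (forms, item groups, items) to generate its study-design tables.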
Hammond, Emily; Sloan, Chelsea; Newell, John D; Sieren, Jered P; Saylor, Melissa; Vidal, Craig; Hogue, Shayna; De Stefano, Frank; Sieren, Alexa; Hoffman, Eric A; Sieren, Jessica C
2017-09-01
Quantitative computed tomography (CT) measures are increasingly being developed and used to characterize lung disease. With recent advances in CT technologies, we sought to evaluate the quantitative accuracy of lung imaging at low- and ultralow-radiation doses with the use of iterative reconstruction (IR), tube current modulation (TCM), and spectral shaping. We investigated the effect of five independent CT protocols reconstructed with IR on quantitative airway measures and global lung measures using an in vivo large animal model as a human subject surrogate. A control protocol (NIH-SPIROMICS + TCM) was chosen, along with five independent protocols investigating TCM, low- and ultralow-radiation dose, and spectral shaping. For all scans, quantitative global parenchymal measurements (mean, median and standard deviation of the parenchymal HU, along with measures of emphysema) and global airway measurements (number of segmented airways and pi10) were generated. In addition, selected individual airway measurements (minor and major inner diameter, wall thickness, inner and outer area, inner and outer perimeter, wall area fraction, and inner equivalent circle diameter) were evaluated. Comparisons were made between control and target protocols using difference and repeatability measures. Estimated CT volume dose index (CTDIvol) across all protocols ranged from 7.32 mGy to 0.32 mGy. Low- and ultralow-dose protocols required more manual editing and resolved fewer airway branches; yet, comparable pi10 whole lung measures were observed across all protocols. Similar trends in acquired parenchymal and airway measurements were observed across all protocols, with increased measurement differences using the ultralow-dose protocols. However, for small airways (1.9 ± 0.2 mm) and medium airways (5.7 ± 0.4 mm), the measurement differences across all protocols were comparable to the control protocol repeatability across breath holds.
Diameters, wall thickness, wall area fraction, and equivalent diameter had smaller measurement differences than area and perimeter measurements. In conclusion, the use of IR with low- and ultralow-dose CT protocols with CT volume dose indices down to 0.32 mGy maintains selected quantitative parenchymal and airway measurements relevant to pulmonary disease characterization. © 2017 American Association of Physicists in Medicine.
CT protocol management: simplifying the process by using a master protocol concept.
Szczykutowicz, Timothy P; Bour, Robert K; Rubert, Nicholas; Wendt, Gary; Pozniak, Myron; Ranallo, Frank N
2015-07-08
This article explains a method for creating CT protocols for a wide range of patient body sizes and clinical indications, using detailed tube current information from a small set of commonly used protocols. Analytical expressions were created relating CT technical acquisition parameters which can be used to create new CT protocols on a given scanner or customize protocols from one scanner to another. Plots of mA as a function of patient size for specific anatomical regions were generated and used to identify the tube output needs for patients as a function of size for a single master protocol. Tube output data were obtained from the DICOM header of clinical images from our PACS and patient size was measured from CT localizer radiographs under IRB approval. This master protocol was then used to create 11 additional master protocols. The 12 master protocols were further combined to create 39 single and multiphase clinical protocols. Radiologist acceptance rate of exams scanned using the clinical protocols was monitored for 12,857 patients to analyze the effectiveness of the presented protocol management methods using a two-tailed Fisher's exact test. A single routine adult abdominal protocol was used as the master protocol to create 11 additional master abdominal protocols of varying dose and beam energy. Situations in which the maximum tube current would have been exceeded are presented, and the trade-offs between increasing the effective tube output via 1) decreasing pitch, 2) increasing the scan time, or 3) increasing the kV are discussed. Out of 12 master protocols customized across three different scanners, only one had a statistically significant acceptance rate that differed from the scanner it was customized from. The difference, however, was only 1% and was judged to be negligible. All other master protocols differed in acceptance rate insignificantly between scanners. 
The methodology described in this paper allows a small set of master protocols to be adapted among different clinical indications on a single scanner and among different CT scanners.
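The pitch/scan-time trade-offs discussed above follow from the standard effective-mAs relation for helical CT (tube current × rotation time / pitch); the kV option is not captured by this formula, because tube output scales nonlinearly with kV. A sketch with hypothetical numbers:

```python
def effective_mas(tube_current_ma, rotation_time_s, pitch):
    """Effective mAs for helical CT: mA x rotation time / pitch.
    Lowering pitch or lengthening rotation time raises effective output
    without exceeding the tube's maximum mA."""
    return tube_current_ma * rotation_time_s / pitch

# Suppose a large patient needs 600 effective mAs but the tube tops
# out at 400 mA (all numbers hypothetical):
print(effective_mas(400, 0.5, 1.0))   # 200.0 -- short of target at pitch 1.0
print(effective_mas(400, 0.75, 0.5))  # 600.0 -- reached via slower rotation + lower pitch
```

This is the calculation behind the article's options 1) and 2) for patients whose size demands more output than the maximum tube current allows; option 3), raising kV, changes dose and contrast characteristics and must be handled separately.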
Clark, Sean; Iltis, Peter W
2008-05-01
Controlled laboratory study. To compare postural performance measures of athletes with those of nonathletes when completing the standard Sensory Organization Test (SOT) and a modified SOT that included dynamic head tilts (DHT-SOT). Authors of recently published research have suggested that modifications to the SOT protocol (eg, introduction of pitch and roll head tilts) may enhance the test's sensitivity when assessing postural stability in individuals with higher balance capabilities or with well-compensated sensory deficits. Nineteen athletes and 19 nonathletes (group) completed both the SOT and DHT-SOT (protocol). During the SOT, participants stood upright as steadily as possible for 20 seconds during each of 6 different sensory conditions. As a variation of the SOT, the DHT-SOT incorporated active pitch and roll head tilts into the SOT protocol. Four 2-way mixed-model analyses of variance (with protocol as the repeated factor) were performed to determine if the composite equilibrium score or the visual, vestibular, or somatosensory ratio scores differed between the 2 groups across the 2 testing protocols. Significant group-by-protocol interaction effects were present for both the composite equilibrium score and visual ratio. Follow-up simple main-effects analyses indicated that these measures did not differ between groups for the SOT protocol but were significantly different on the DHT-SOT. The addition of dynamic head tilts to the SOT protocol resulted in subtle differences in balance function between athletes and nonathletes. Athletes demonstrated an increased ability to adapt to sensory disruptions during the DHT-SOT. Therapists should consider including active pitch and roll head tilts to the SOT when evaluating individuals with higher balance function or to detect subtle deficits in balance function. Diagnosis, level 3b.
Reynolds, Michelle H.; Brinck, Kevin W.; Laniawe, Leona
2011-01-01
To improve the Laysan Teal population estimates, we recommend changes to the monitoring protocol. Additional years of data are needed to quantify inter-annual seasonal detection probabilities, which may allow the use of standardized direct counts as an unbiased index of population size. Survey protocols should be enhanced through frequent resights, regular survey intervals, and determining reliable standards to detect catastrophic declines and annual changes in adult abundance. In late 2009 to early 2010, 68% of the population was marked with unique color band combinations. This allowed for potentially accurate adult population estimates and survival estimates without the need to mark new birds in 2010, 2011, and possibly 2012. However, efforts should be made to replace worn or illegible bands so birds can be identified in future surveys. It would be valuable to develop more sophisticated population size and survival models using Program MARK, a state-of-the-art software package which uses likelihood models to analyze mark-recapture data. This would allow for more reliable adult population and survival estimates to compare with the "source" Laysan Teal population on Laysan Island. These models will require additional years of resight data (> 1 year) and, in some cases, an intensive annual effort of marking and recapture. Because data indicate standardized all-wetland counts are a poor index of abundance, monitoring efforts could be improved by expanding resight surveys to include all wetlands, discontinuing the all-wetland counts, and reallocating some of the wetland count effort to collect additional opportunistic resights. Approximately two years of additional bimonthly surveys are needed to validate the direct count as an appropriate index of population abundance. Additional years of individual resight data will allow estimates of adult population size, as specified in recovery criteria, and to track species population dynamics at Midway Atoll.
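Mark-resight abundance estimation of the kind recommended here can be sketched with Chapman's bias-corrected Lincoln-Petersen estimator; Program MARK fits far richer likelihood models, and the counts below are hypothetical, not the report's data.

```python
def chapman_estimate(marked, seen, marked_seen):
    """Chapman's bias-corrected Lincoln-Petersen mark-resight estimator:
    N_hat = (M + 1)(C + 1) / (R + 1) - 1, where M birds carry bands,
    C birds are observed in a resight survey, and R of those are banded."""
    return (marked + 1) * (seen + 1) / (marked_seen + 1) - 1

# Hypothetical resight survey: 67 banded birds known alive, 49 birds
# observed, 33 of them banded.
print(round(chapman_estimate(67, 49, 33)))  # 99
```

The two-sample estimator assumes closure and equal sightability of banded and unbanded birds; repeated resight surveys relax these assumptions, which is why the report recommends likelihood-based models over single-survey indices.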
Zhu, Yenan; Hsieh, Yee-Hsee; Dhingra, Rishi R; Dick, Thomas E; Jacono, Frank J; Galán, Roberto F
2013-02-01
Interactions between oscillators can be investigated with standard tools of time series analysis. However, these methods are insensitive to the directionality of the coupling, i.e., the asymmetry of the interactions. An elegant alternative was proposed by Rosenblum and collaborators [M. G. Rosenblum, L. Cimponeriu, A. Bezerianos, A. Patzak, and R. Mrowka, Phys. Rev. E 65, 041909 (2002); M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. E 64, 045202 (2001)] which consists in fitting the empirical phases to a generic model of two weakly coupled phase oscillators. This allows one to obtain the interaction functions defining the coupling and its directionality. A limitation of this approach is that a solution always exists in the least-squares sense, even in the absence of coupling. To preclude spurious results, we propose a three-step protocol: (1) Determine if a statistical dependency exists in the data by evaluating the mutual information of the phases; (2) if so, compute the interaction functions of the oscillators; and (3) validate the empirical oscillator model by comparing the joint probability of the phases obtained from simulating the model with that of the empirical phases. We apply this protocol to a model of two coupled Stuart-Landau oscillators and show that it reliably detects genuine coupling. We also apply this protocol to investigate cardiorespiratory coupling in anesthetized rats. We observe reciprocal coupling between respiration and heartbeat and that the influence of respiration on the heartbeat is generally much stronger than vice versa. In addition, we find that the vagus nerve mediates coupling in both directions.
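The model-fitting step of the three-step protocol can be sketched for the simplest interaction function, a sine of the phase difference: simulate two asymmetrically coupled phase oscillators, then recover each coupling strength by least squares on the phase increments. All parameters below are illustrative; the published method uses a general Fourier basis for the interaction functions, not just a single sine term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate two weakly, asymmetrically coupled phase oscillators (Euler steps):
# dphi1/dt = w1 + c12*sin(phi2 - phi1), dphi2/dt = w2 + c21*sin(phi1 - phi2)
dt, n = 0.01, 20000
w1, w2, c12, c21 = 1.0, 1.3, 0.20, 0.05
phi = np.zeros((n, 2))
for t in range(n - 1):
    d1 = w1 + c12 * np.sin(phi[t, 1] - phi[t, 0])
    d2 = w2 + c21 * np.sin(phi[t, 0] - phi[t, 1])
    phi[t + 1] = phi[t] + dt * np.array([d1, d2]) + np.sqrt(dt) * 0.02 * rng.standard_normal(2)

def fit(own, other):
    """Regress phase increments on [1, sin(phase difference)] by least squares."""
    dphi = np.diff(own) / dt
    X = np.column_stack([np.ones(n - 1), np.sin(other[:-1] - own[:-1])])
    coef, *_ = np.linalg.lstsq(X, dphi, rcond=None)
    return coef  # [natural frequency, coupling strength]

f1 = fit(phi[:, 0], phi[:, 1])  # influence of oscillator 2 on 1
f2 = fit(phi[:, 1], phi[:, 0])  # influence of oscillator 1 on 2
print(np.round([f1[1], f2[1]], 2))  # recovered couplings, ≈ [0.2, 0.05]
```

Note that this least-squares fit always returns *some* coefficients even for uncoupled data, which is exactly why the protocol's steps (1) and (3) — a mutual-information check and a model-validation step — are needed before interpreting the recovered directionality.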
Effects of transcranial direct current stimulation for treating depression: A modeling study.
Csifcsák, Gábor; Boayue, Nya Mehnwolo; Puonti, Oula; Thielscher, Axel; Mittner, Matthias
2018-07-01
Transcranial direct current stimulation (tDCS) above the left dorsolateral prefrontal cortex (lDLPFC) has been widely used to improve symptoms of major depressive disorder (MDD). However, the effects of different stimulation protocols in the entire frontal lobe have not been investigated in a large sample including patient data. We used 38 head models created from structural magnetic resonance imaging data of 19 healthy adults and 19 MDD patients and applied computational modeling to simulate the spatial distribution of tDCS-induced electric fields (EFs) in 20 frontal regions. We evaluated effects of seven bipolar and two multi-electrode 4 × 1 tDCS protocols. For bipolar montages, EFs were of comparable strength in the lDLPFC and in the medial prefrontal cortex (MPFC). Depending on stimulation parameters, EF cortical maps varied to a considerable degree, but were found to be similar in controls and patients. 4 × 1 montages produced more localized, albeit weaker effects. White matter anisotropy was not modeled. The relationship between EF strength and clinical response to tDCS could not be evaluated. In addition to lDLPFC stimulation, excitability changes in the MPFC should also be considered as a potential mechanism underlying clinical efficacy of bipolar montages. MDD-associated anatomical variations are not likely to substantially influence current flow. Individual modeling of tDCS protocols can substantially improve cortical targeting. We make recommendations for future research to explicitly test the contribution of lDLPFC vs. MPFC stimulation to therapeutic outcomes of tDCS in this disorder. Copyright © 2018 Elsevier B.V. All rights reserved.
Relativistic quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N., E-mail: molotkov@issp.ac.ru
2011-03-15
A new quantum key distribution protocol is proposed for transmitting keys through free space. Along with quantum-mechanical restrictions on the distinguishability of nonorthogonal quantum states, the protocol uses additional restrictions imposed by special relativity. Unlike all existing quantum key distribution protocols, this protocol ensures key secrecy for a source of quantum states that is not strictly single-photon and for an arbitrary length of the quantum communication channel.
Modeling Techniques for High Dependability Protocols and Architecture
NASA Technical Reports Server (NTRS)
LaValley, Brian; Ellis, Peter; Walter, Chris J.
2012-01-01
This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.
Metal- and additive-free photoinduced borylation of haloarenes.
Mfuh, Adelphe M; Schneider, Brett D; Cruces, Westley; Larionov, Oleg V
2017-03-01
Boronic acids and esters have critical roles in the areas of synthetic organic chemistry, molecular sensors, materials science, drug discovery, and catalysis. Many of the current applications of boronic acids and esters require materials with very low levels of transition metal contamination. Most of the current methods for the synthesis of boronic acids, however, require transition metal catalysts and ligands that must be removed via additional purification procedures. This protocol describes a simple, metal- and additive-free method for converting haloarenes directly to boronic acids and esters. This photoinduced borylation protocol does not require expensive and toxic metal catalysts or ligands, and it produces innocuous and easy-to-remove by-products. Furthermore, the reaction can be carried out on multigram scales in common-grade solvents without the need for reaction mixtures to be deoxygenated. The setup and purification steps are typically accomplished within 1-3 h. The reactions can be run overnight, and the protocol can be completed within 13-16 h. Two representative procedures described in this protocol provide details for the preparation of a boronic acid (3-cyanophenylboronic acid) and a boronic ester (1,4-benzenediboronic acid bis(pinacol) ester). We also discuss additional details of the method that will be helpful in applying the protocol to other haloarene substrates.
Brito, Maíra M; Lúcio, Cristina F; Angrimani, Daniel S R; Losano, João Diego A; Dalmazzo, Andressa; Nichi, Marcílio; Vannucchi, Camila I
2017-01-02
Although several cryopreservation protocols exist, no systematic research has been carried out to establish the most suitable protocol for canine sperm. This study assessed the effect of adding 5% glycerol during cryopreservation at 37°C (one-step) or at 5°C (two-step), in addition to testing two thawing protocols (37°C for 30 seconds, and 70°C for 8 seconds). We used 12 sperm samples divided into four experimental groups: Single-Step - Slow Thawing Group; Two-Step - Slow Thawing Group; Single-Step - Fast Thawing Group; and Two-Step - Fast Thawing Group. Frozen-thawed samples were subjected to automated analysis of sperm motility, evaluation of plasma membrane integrity, acrosomal integrity, mitochondrial activity, sperm morphology, sperm susceptibility to oxidative stress, and a sperm binding assay to the perivitelline membrane of chicken egg yolk. In the comparison between freezing protocols, no statistical differences were found for any of the response variables. In the comparison between thawing protocols, the slow thawing protocol yielded a higher sperm count bound to the perivitelline membrane of chicken egg yolk than the fast thawing protocol. Regardless of the freezing process, the slow thawing protocol can be recommended for large-scale cryopreservation of canine semen, since it showed consistently better functional results.
Quantitative image feature variability amongst CT scanners with a controlled scan protocol
NASA Astrophysics Data System (ADS)
Ger, Rachel B.; Zhou, Shouhao; Chi, Pai-Chun Melinda; Goff, David L.; Zhang, Lifei; Lee, Hannah J.; Fuller, Clifton D.; Howell, Rebecca M.; Li, Heng; Stafford, R. Jason; Court, Laurence E.; Mackin, Dennis S.
2018-02-01
Radiomics studies often analyze patient computed tomography (CT) images acquired on different CT scanners, which may differ in imaging parameters such as manufacturer and acquisition protocol. Quantifiable differences in radiomics features can arise from these acquisition parameters. A controlled protocol may minimize these effects, thus allowing larger patient cohorts drawn from many different CT scanners. To test radiomics feature variability across different CT scanners, a radiomics phantom was developed with six different cartridges encased in high-density polystyrene. A harmonized protocol was developed to control for tube voltage, tube current, scan type, pitch, CTDIvol, convolution kernel, display field of view, and slice thickness across different manufacturers. The radiomics phantom was imaged on 18 scanners using the controlled protocol. A linear mixed-effects model was created to assess the impact of inter-scanner variability, decomposing feature variation between scanners and cartridge materials. The inter-scanner variability was compared to the residual variability (the unexplained variability) and to the inter-patient variability using two different patient cohorts, consisting of 20 non-small cell lung cancer (NSCLC) and 30 head and neck squamous cell carcinoma (HNSCC) patients. The inter-scanner standard deviation was at least half of the residual standard deviation for 36 of 49 quantitative image features. The ratio of inter-scanner to patient coefficient of variation was above 0.2 for 22 and 28 of the 49 features for NSCLC and HNSCC patients, respectively. Inter-scanner variability was thus a significant factor compared to patient variation in this small study for many of the features. Further analysis with a larger cohort will allow a more thorough analysis, with additional variables in the model to truly isolate the inter-scanner differences.
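The comparison of inter-scanner to residual variation can be illustrated with a simplified one-way decomposition (the study used a full linear mixed-effects model; the feature values, scanner labels, and the plain means-versus-pooled-variance approach below are hypothetical simplifications):

```python
from statistics import mean, pvariance

# Hypothetical values of one radiomics feature, measured on the same
# phantom cartridge by four scanners, three repeat scans each.
scans = {
    "scanner_A": [10.1, 10.3, 10.2],
    "scanner_B": [11.0, 11.2, 11.1],
    "scanner_C": [10.6, 10.4, 10.5],
    "scanner_D": [10.9, 10.7, 10.8],
}

# Inter-scanner variance: variance of the per-scanner mean values.
scanner_means = [mean(v) for v in scans.values()]
inter_scanner_var = pvariance(scanner_means)

# Residual (within-scanner) variance: pooled variance of the repeats.
residual_var = mean(pvariance(v) for v in scans.values())

# The study's criterion compares the two standard deviations.
ratio = (inter_scanner_var ** 0.5) / (residual_var ** 0.5)
print(f"inter-scanner SD / residual SD = {ratio:.1f}")
```

A ratio well above 0.5, as in this toy example, is what the abstract reports for 36 of the 49 features.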
Nonblocking and orphan free message logging protocols
NASA Technical Reports Server (NTRS)
Alvisi, Lorenzo; Hoppe, Bruce; Marzullo, Keith
1992-01-01
Currently existing message logging protocols demonstrate a classic pessimistic vs. optimistic tradeoff. We show that the optimistic-pessimistic tradeoff is not inherent to the problem of message logging. We construct a message-logging protocol that has the positive features of both optimistic and pessimistic protocols: our protocol prevents orphans and allows simple failure recovery; however, it requires no blocking in failure-free runs. Furthermore, this protocol does not introduce any additional message overhead as compared to one implemented for a system in which messages may be lost but processes do not crash.
NASA Technical Reports Server (NTRS)
Clare, Loren; Clement, B.; Gao, J.; Hutcherson, J.; Jennings, E.
2006-01-01
We describe the recent development of communications protocols, services, and associated tools targeted at reducing risk, reducing cost, and increasing the efficiency of IND infrastructure and supported mission operations. The space-based networking technologies developed: a) provide differentiated quality of service (QoS) that gives precedence to traffic that users have selected as having the greatest importance and/or time-criticality; b) improve the total value of information to users through the use of QoS prioritization techniques; c) increase operational flexibility and improve command-response turnaround; d) enable a new class of networked and collaborative science missions; e) simplify application interfaces to communications services; and f) reduce risk and cost through a common object model and automated scheduling and communications protocols. The technologies are described in three general areas: communications scheduling, middleware, and protocols. We additionally developed a simulation environment, which provides a comprehensive, quantitative understanding of the technologies' performance within the overall, evolving architecture, as well as the ability to refine and optimize specific components.
Ip, David
2015-12-01
The current study evaluates whether adding low-level laser therapy to standard conventional physical therapy in elderly patients with bilateral symptomatic tri-compartmental knee arthritis can successfully postpone the need for joint replacement surgery. A prospective randomized cohort study was conducted of 100 consecutive unselected elderly patients with bilateral symptomatic knee arthritis, with each knee randomized to receive either treatment protocol A, consisting of conventional physical therapy, or protocol B, which is the same as protocol A with added low-level laser therapy. The mean follow-up was 6 years. Treatment failure was defined as breakthrough pain necessitating joint replacement surgery. After the 6-year follow-up, patients clearly benefited from treatment with protocol B, as only one knee needed joint replacement surgery, while nine patients treated with protocol A needed surgery (p < 0.05). We conclude that low-level laser therapy should be incorporated into the standard conservative treatment protocol for symptomatic knee arthritis.
Protocol for determining bull trout presence
Peterson, James; Dunham, Jason B.; Howell, Philip; Thurow, Russell; Bonar, Scott
2002-01-01
The Western Division of the American Fisheries Society was requested to develop protocols for determining presence/absence and potential habitat suitability for bull trout. The general approach adopted is similar to the process for the marbled murrelet, whereby interim guidelines are initially used and the protocols are subsequently refined as data are collected. Current data were considered inadequate to precisely identify suitable habitat but could be useful in stratifying sampling units for presence/absence surveys. The presence/absence protocol builds on previous approaches (Hillman and Platts 1993; Bonar et al. 1997), except that it uses the variation in observed bull trout densities instead of a minimum threshold density and adjusts for measured differences in sampling efficiency due to gear types and habitat characteristics. The protocol consists of: 1. recommended sample sizes with 80% and 95% detection probabilities for juvenile and resident adult bull trout for day and night snorkeling and electrofishing, adjusted for varying habitat characteristics for 50-m and 100-m sampling units; 2. sampling design considerations, including possible habitat characteristics for stratification; 3. habitat variables to be measured in the sampling units; and 4. guidelines for training sampling crews. Criteria for habitat strata consist of coarse, watershed-scale characteristics (e.g., mean annual air temperature) and fine-scale, reach- and habitat-specific features (e.g., water temperature, channel width). The protocols will be revised in the future using data from ongoing presence/absence surveys, additional research on sampling efficiencies, and development of models of habitat/species occurrence.
Use of the HR index to predict maximal oxygen uptake during different exercise protocols.
Haller, Jeannie M; Fehling, Patricia C; Barr, David A; Storer, Thomas W; Cooper, Christopher B; Smith, Denise L
2013-10-01
This study examined the ability of the HRindex model to accurately predict maximal oxygen uptake ([Formula: see text]O2max) across a variety of incremental exercise protocols. Ten men completed five incremental protocols to volitional exhaustion. Protocols included three treadmill (Bruce, UCLA running, Wellness Fitness Initiative [WFI]), one cycle, and one field (shuttle) test. The HRindex prediction equation (METs = 6 × HRindex - 5, where HRindex = HRmax/HRrest) was used to generate estimates of energy expenditure, which were converted to body mass-specific estimates of [Formula: see text]O2max. Estimated [Formula: see text]O2max was compared with measured [Formula: see text]O2max. Across all protocols, the HRindex model significantly underestimated [Formula: see text]O2max by 5.1 mL·kg(-1)·min(-1) (95% CI: -7.4, -2.7) and the standard error of the estimate (SEE) was 6.7 mL·kg(-1)·min(-1). Accuracy of the model was protocol-dependent, with [Formula: see text]O2max significantly underestimated for the Bruce and WFI protocols but not the UCLA, Cycle, or Shuttle protocols. Although no significant differences in [Formula: see text]O2max estimates were identified for these three protocols, predictive accuracy among them was not high, with root mean squared errors and SEEs ranging from 7.6 to 10.3 mL·kg(-1)·min(-1) and from 4.5 to 8.0 mL·kg(-1)·min(-1), respectively. Correlations between measured and predicted [Formula: see text]O2max were between 0.27 and 0.53. Individual prediction errors indicated that prediction accuracy varied considerably within protocols and among participants. In conclusion, across various protocols the HRindex model significantly underestimated [Formula: see text]O2max in a group of aerobically fit young men. Estimates generated using the model did not differ from measured [Formula: see text]O2max for three of the five protocols studied; nevertheless, some individual prediction errors were large. 
The lack of precision among estimates may limit the utility of the HRindex model; however, further investigation to establish the model's predictive accuracy is warranted.
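The HRindex prediction equation quoted above converts directly to a body-mass-specific VO2max estimate via the standard convention of 3.5 mL·kg⁻¹·min⁻¹ per MET. A minimal sketch (the example heart rates are hypothetical, not taken from the study):

```python
def predict_vo2max(hr_max: float, hr_rest: float) -> float:
    """Estimate VO2max (mL/kg/min) with the HRindex model:
    METs = 6 * HRindex - 5, where HRindex = HRmax / HRrest,
    and 1 MET is taken as 3.5 mL/kg/min of oxygen uptake."""
    hr_index = hr_max / hr_rest
    mets = 6.0 * hr_index - 5.0
    return mets * 3.5

# Hypothetical subject: HRmax 190 bpm, HRrest 60 bpm.
estimate = predict_vo2max(190, 60)
print(f"Predicted VO2max: {estimate:.1f} mL/kg/min")  # → 49.0
```

Comparing such estimates against measured VO2max per protocol is exactly the analysis the study performs.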
Real-Time QoS Routing Protocols in Wireless Multimedia Sensor Networks: Study and Analysis.
Alanazi, Adwan; Elleithy, Khaled
2015-09-02
Many routing protocols have been proposed for wireless sensor networks. These routing protocols are almost always based on energy efficiency. However, recent advances in complementary metal-oxide semiconductor (CMOS) cameras and small microphones have led to the development of Wireless Multimedia Sensor Networks (WMSNs) as a class of wireless sensor networks that pose additional challenges. The transmission of imaging and video data needs routing protocols with both energy efficiency and Quality of Service (QoS) characteristics in order to guarantee the efficient use of the sensor nodes and effective access to the collected data. Also, with the integration of real-time applications in Wireless Sensor Networks (WSNs), the use of QoS routing protocols is not only becoming a significant topic, but is also gaining the attention of researchers. In designing an efficient QoS routing protocol, reliability and a guaranteed end-to-end delay are critical requirements, alongside conserving energy. Thus, considerable research has been focused on designing energy-efficient and robust QoS routing protocols. In this paper, we present state-of-the-art research on real-time QoS routing protocols for WMSNs that have already been proposed. This paper categorizes the real-time QoS routing protocols into probabilistic and deterministic protocols. In addition, both categories are classified into soft and hard real-time protocols, highlighting the QoS issues including the limitations and features of each protocol. Furthermore, we have compared the performance of mobility-aware, query-based real-time QoS routing protocols from each category using Network Simulator-2 (NS2). This paper also focuses on design challenges and future research directions, as well as highlighting the characteristics of each QoS routing protocol.
NASA Astrophysics Data System (ADS)
Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio
2017-08-01
We analyzed the information-security problems of civil registry and identification systems worldwide, which are considered strategic. The objective is to adopt appropriate security protocols in a conceptual model for identity management for the Civil Registry of Ecuador. In this phase, the appropriate security protocols were determined for a conceptual model of identity management with Authentication, Authorization, and Auditing (AAA). We used the deductive method and exploratory research to define the appropriate security protocols to be adopted in the identity model: IPsec, DNSSEC, RADIUS, SSL, TLS, IEEE 802.1X EAP, and SET. A prototype was produced of the placement of the adopted security protocols in the logical design of the technological infrastructure, considering the conceptual model for identity, authentication, authorization, and audit management. It was concluded that the adopted protocols are appropriate for a distributed database and should have a direct relationship with the algorithms, allowing vulnerability and risk mitigation while taking into account confidentiality, integrity, and availability (CIA).
77 FR 2713 - Agency Information Collection Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-19
...) Package Title: U.S. Declaration under the Protocol Additional to the U.S.-IAEA Safeguards Agreement... Declaration to the International Atomic Energy Agency (IAEA) under Articles 2 and 3 of the Protocol Additional... performing activities at DOE Locations that would be declarable to the IAEA under the U.S. AP are affected by...
Practical Approaches to Protein Folding and Assembly
Walters, Jad; Milam, Sara L.; Clark, A. Clay
2009-01-01
We describe here the use of several spectroscopies, such as fluorescence emission, circular dichroism, and differential quenching by acrylamide, in examining the equilibrium and kinetic folding of proteins. The first section regarding equilibrium techniques provides practical information for determining the conformational stability of a protein. In addition, several equilibrium-folding models are discussed, from two-state monomer to four-state homodimer, providing a comprehensive protocol for interpretation of folding curves. The second section focuses on the experimental design and interpretation of kinetic data, such as burst-phase analysis and exponential fits, used in elucidating kinetic folding pathways. In addition, simulation programs are used routinely to support folding models generated by kinetic experiments, and the fundamentals of simulations are covered. PMID:19289201
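One of the equilibrium-folding models discussed, the two-state monomer, can be sketched using the linear extrapolation method for denaturant titrations (the function name and all parameter values below are illustrative assumptions, not taken from the chapter):

```python
import math

R = 0.001987  # gas constant, kcal/(mol·K)

def fraction_folded(dG_water: float, m_value: float, denaturant: float,
                    temp_k: float = 298.15) -> float:
    """Two-state monomer, linear extrapolation model:
    dG([D]) = dG(H2O) - m*[D];  f_folded = 1 / (1 + exp(-dG / RT))."""
    dG = dG_water - m_value * denaturant
    return 1.0 / (1.0 + math.exp(-dG / (R * temp_k)))

# Hypothetical protein: dG(H2O) = 5 kcal/mol, m = 2 kcal/(mol·M).
# At the transition midpoint, Cm = dG/m = 2.5 M, half the molecules
# are folded, so the spectroscopic signal is halfway between baselines.
print(fraction_folded(5.0, 2.0, 2.5))  # → 0.5
```

Fitting this expression to a fluorescence or circular dichroism unfolding curve yields the conformational stability dG(H2O) described in the first section.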
Password-only authenticated three-party key exchange with provable security in the standard model.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.
1992-12-21
[Fragmentary scanned text: table-of-contents excerpts on trace-based protocol analysis (TBPA), important data features, tools related to process model testing, and requirements for testing process models using trace-based protocol analysis; includes citations to O'Reilly, R. C. (1991), X3DNet: An X-Based Neural Network, and Foundations of Artificial Intelligence (Cambridge, MA: MIT Press).]
PROTOCOL FOR EXAMINATION OF THE INNER CAN CLOSURE WELD REGION FOR 3013 DE CONTAINERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickalonis, J.
2014-09-16
The protocol for the examination of the inner can closure weld region (ICCWR) for 3013 DE containers is presented within this report. The protocol includes sectioning of the inner can lid section, documenting the surface condition, measuring corrosion parameters, and storing of samples. This protocol may change as the investigation develops since findings may necessitate additional steps be taken. Details of the previous analyses, which formed the basis for this protocol, are also presented.
QKD-based quantum private query without a failure probability
NASA Astrophysics Data System (ADS)
Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan
2015-10-01
In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing a single-photon signal of multiple optical pulses. It maintains the advantages of QKD-based QPQ, i.e., it is easy to implement and loss tolerant. In addition, different from the situation in previous QKD-based QPQ protocols, in our protocol the number of items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring noise and attacks, the protocol always succeeds), but also benefits the privacy of the database (since the database will no longer reveal additional secrets to honest users). Furthermore, for the user's privacy, the proposed protocol is cheat sensitive, and for the security of the database, we obtain a theoretical upper bound on the leaked information of the database.
Weiss, Jakob; Martirosian, Petros; Notohamiprodjo, Mike; Kaufmann, Sascha; Othman, Ahmed E; Grosse, Ulrich; Nikolaou, Konstantin; Gatidis, Sergios
2018-03-01
The aims of this study were to establish a 5-minute magnetic resonance (MR) screening protocol for prostate cancer in men before biopsy and to evaluate effects on Prostate Imaging Reporting and Data System (PI-RADS) V2 scoring in comparison to a conventional, fully diagnostic multiparametric MR imaging (mpMRI) approach. Fifty-two patients with elevated prostate-specific antigen levels and without prior biopsy were prospectively included in this institutional review board-approved study. In all patients, an mpMRI protocol according to the PI-RADS recommendations was acquired on a 3 T MRI system. In addition, an accelerated diffusion-weighted imaging sequence was acquired using simultaneous multislice technique (DW-EPISMS). Two readers independently evaluated the images for the presence/absence of prostate cancer according to the PI-RADS criteria and for additional findings. In a first reading session, only the screening protocol consisting of axial T2-weighted and DW-EPISMS images was made available. In a subsequent reading session, the mpMRI protocol was assessed blinded to the results of the first reading, serving as reference standard. Both readers successfully established a final diagnosis according to the PI-RADS criteria in the screening and mpMRI protocol. Mean lesion size was 1.2 cm in the screening and 1.4 cm in the mpMRI protocol (P = 0.4) with 35% (18/52) of PI-RADS IV/V lesions. Diagnostic performance of the screening protocol was excellent with a sensitivity and specificity of 100% for both readers with no significant differences in comparison to the mpMRI standard (P = 1.0). In 3 patients, suspicious lymph nodes were reported as additional finding, which were equally detectable in the screening and mpMRI protocol. A 5-minute MR screening protocol for prostate cancer in men with elevated prostate-specific antigen levels before biopsy is applicable for clinical routine with similar diagnostic performance as the full diagnostic mpMRI approach.
Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2012-01-01
This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.
NASA Astrophysics Data System (ADS)
Tenenbaum-Katan, Janna; Hofemeier, Philipp; Sznitman, Josué; Janna Tenenbaum-Katan Team
2015-11-01
Inhalation therapy is the cornerstone of early-childhood respiratory treatments, as well as a rising potential route for systemic drug delivery and pulmonary vaccination. As such, an indispensable understanding of respiratory flow phenomena, coupled with particle transport in the deep regions of children's lungs, is necessary to attain efficient targeting of aerosol therapy. However, fundamental research on pulmonary transport is overwhelmingly focused on adults. In our study, we have developed an anatomically inspired computational model representing pulmonary acinar regions at several age points during a child's development. Our numerical simulations examine respiratory flows and particle deposition maps within the acinar model, accounting for age-dependent anatomical considerations and ventilation patterns. The resulting aerosol deposition maps change with age; such findings might suggest that medication protocols for inhalation therapy in young children should be amended accordingly as the child develops. In addition to clarifying basic scientific concepts of age effects on aerosol deposition, our research can potentially contribute practical guidelines to therapy protocols and their necessary modifications with age. We acknowledge the support of the ISF and the Israeli Ministry of Science.
NASA Astrophysics Data System (ADS)
Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric
2016-09-01
The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol for predicting binding poses of protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed, ranging from 2 to 35 depending on the assessment metric, which does not provide much insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms: a pairwise-additive enthalpy and a non-pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set but failed on the two test sets. We explain the drawbacks and pitfalls of our model, in particular in terms of the relative coverage of the test set by the training set and dynamical properties missing from crystal structures, and discuss different routes to improve it.
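Training the free weights of a two-term affinity model by regularized regression can be sketched in closed form; this is a generic ridge-regression illustration on synthetic data, not the authors' actual model, descriptors, or the PDBBind training set:

```python
def ridge_fit_2d(x1, x2, y, lam=0.1):
    """Ridge regression for a two-term model y ≈ w1*x1 + w2*x2,
    solved in closed form from the 2x2 normal equations
    (X'X + lam*I) w = X'y."""
    a = sum(v * v for v in x1) + lam          # sum(x1^2) + lam
    b = sum(u * v for u, v in zip(x1, x2))    # sum(x1*x2)
    d = sum(v * v for v in x2) + lam          # sum(x2^2) + lam
    c1 = sum(u * v for u, v in zip(x1, y))    # sum(x1*y)
    c2 = sum(u * v for u, v in zip(x2, y))    # sum(x2*y)
    det = a * d - b * b
    return (d * c1 - b * c2) / det, (a * c2 - b * c1) / det

# Synthetic training data: an enthalpy-like and an entropy-like
# descriptor, generated with true weights (1.0, -0.5) and no noise.
H = [1.0, 2.0, 3.0, 4.0, 5.0]
S = [0.5, 1.0, 0.5, 2.0, 1.5]
dG = [1.0 * h - 0.5 * s for h, s in zip(H, S)]

w_h, w_s = ridge_fit_2d(H, S, dG, lam=1e-6)
print(round(w_h, 3), round(w_s, 3))  # recovers ≈ 1.0, -0.5
```

Good recovery on clean training data, as here, says nothing about blind-test performance; that gap between training and test sets is precisely the failure mode the abstract discusses.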
Bertoglio, Daniele; Amhaoul, Halima; Van Eetveldt, Annemie; Houbrechts, Ruben; Van De Vijver, Sebastiaan; Ali, Idrish; Dedeurwaerdere, Stefanie
2017-01-01
The aim of epilepsy models is to investigate disease ontogenesis and therapeutic interventions in a consistent and prospective manner. The kainic acid-induced status epilepticus (KASE) rat model is a widely used, well-validated model for temporal lobe epilepsy (TLE). As we noted significant variability within the model between labs, potentially related to the rat strain used, we aimed to describe two variants of this model with diverging seizure phenotype and neuropathology. In addition, we evaluated two different protocols to induce status epilepticus (SE). Wistar Han (WH/CR; Charles River, France) and Sprague-Dawley (SD/H; Harlan, The Netherlands) rats were subjected to KASE using the Hellier kainic acid (KA) protocol and a modified injection scheme. The duration of SE and the latent phase were characterized by video-electroencephalography (vEEG) in a subgroup of animals, while animals were sacrificed 1 week (subacute phase) and 12 weeks (chronic phase) post-SE. In the 12-weeks-post-SE groups, seizures were monitored with vEEG. Neuronal loss (neuronal nuclei), microglial activation (OX-42 and translocator protein), and neurodegeneration (Fluoro-Jade C) were assessed. First, the Hellier protocol caused very high mortality in WH/CR rats compared to SD/H animals. The modified protocol resulted in a similar SE severity for WH/CR and SD/H rats, but effectively improved survival rates. The latent phase was significantly shorter (p < 0.0001) in SD/H animals (median 8.3 days) than in WH/CR animals (median 15.4 days). During the chronic phase, SD/H rats had more seizures/day than WH/CR animals (p < 0.01). However, neuronal degeneration and cell loss were overall more extensive in WH/CR than in SD/H rats; microglial activation was similar between the two strains 1 week post-SE, but higher in WH/CR rats 12 weeks post-SE. These neuropathological differences may be more related to the distinct neurotoxic effects of KA in the two rat strains than to the seizure burden itself.
The divergences in disease progression and seizure outcome, in addition to the histopathological dissimilarities, further substantiate the existence of strain differences for the KASE rat model of TLE. PMID:29163349
NASA SpaceWire Activities/Comments/Recommendations
NASA Technical Reports Server (NTRS)
Rakow, Glenn
2006-01-01
This viewgraph presentation reviews NASA's activities and proposes recommendations for the further use of SpaceWire (SpW). The areas covered in this presentation are: (1) protocol ID assignment, (2) protocol development, (3) Plug & Play (PnP), (4) recommended additions to the SpW protocol, and (5) the SpaceFibre trade.
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and the joining of the group of communicants.
A Model Based Security Testing Method for Protocol Implementation
Fu, Yu Long; Xin, Xiao Long
2014-01-01
The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation. PMID:25105163
A novel integrated framework and improved methodology of computer-aided drug design.
Chen, Calvin Yu-Chian
2013-01-01
Computer-aided drug design (CADD) is a critical initiating step of drug development, but no single model yet covers all aspects of the design process. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example for validating our new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.
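One common way to join scores from heterogeneous models (e.g., a QSAR score and a docking score) while minimizing scale bias is rank averaging. The combination rule below is an assumption for illustration, not the paper's actual integration method, and the ligand names and scores are invented.

```python
# Hedged sketch of consensus ranking across models (hypothetical data).
# Both models are assumed to report "higher is better" scores.

def rank(scores):
    """Rank candidates: 1 = best (highest score)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(order)}

def consensus(model_scores):
    """Average each candidate's rank across all models."""
    names = next(iter(model_scores.values())).keys()
    ranks = [rank(s) for s in model_scores.values()]
    return {n: sum(r[n] for r in ranks) / len(ranks) for n in names}

models = {
    "qsar":    {"ligA": 0.9, "ligB": 0.4, "ligC": 0.7},
    "docking": {"ligA": 8.1, "ligB": 9.5, "ligC": 6.0},
}
avg = consensus(models)
best = min(avg, key=avg.get)  # lowest average rank wins
```

Rank aggregation keeps one badly scaled model from dominating the consensus, which is one plausible reading of "minimize bias and errors" above.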
An ultra-low power wireless sensor network for bicycle torque performance measurements.
Gharghan, Sadik K; Nordin, Rosdiadee; Ismail, Mahamod
2015-05-21
In this paper, we propose an energy-efficient transmission technique known as the sleep/wake algorithm for a bicycle torque sensor node. This paper aims to highlight the trade-off between energy efficiency and the communication range between the cyclist and coach. Two experiments were conducted. The first experiment utilised the ZigBee protocol (XBee S2), and the second experiment used the Advanced and Adaptive Network Technology (ANT) protocol based on the Nordic nRF24L01 radio transceiver chip. The current consumption of ANT was measured, simulated and compared with a torque sensor node that uses the XBee S2 protocol. In addition, an analytical model was derived to correlate the sensor node average current consumption with a crank arm cadence. The sensor node achieved 98% power savings for ANT relative to ZigBee when they were compared alone, and the power savings amounted to 30% when all components of the sensor node are considered. The achievable communication range was 65 and 50 m for ZigBee and ANT, respectively, during measurement on an outdoor cycling track (i.e., velodrome). The conclusions indicate that the ANT protocol is more suitable for use in a torque sensor node when power consumption is a crucial demand, whereas the ZigBee protocol is more convenient in ensuring data communication between cyclist and coach.
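The kind of analytical model described above, relating average current to cadence, can be sketched as a duty-cycle calculation. The assumption of one transmission burst per crank revolution and all numeric values below are hypothetical, not the paper's measured figures.

```python
# Back-of-the-envelope duty-cycle model for a sleep/wake sensor node.
# All parameter values are illustrative, not measurements from the paper.

def avg_current_mA(cadence_rpm, t_active_s, i_active_mA, i_sleep_mA):
    """Average current assuming the node wakes once per crank revolution."""
    period_s = 60.0 / cadence_rpm          # duration of one revolution
    t_active = min(t_active_s, period_s)   # burst cannot exceed the period
    duty = t_active / period_s
    return duty * i_active_mA + (1 - duty) * i_sleep_mA

# Hypothetical figures: 5 ms active burst at 15 mA, 3 uA sleep current,
# 90 rpm cadence.
i90 = avg_current_mA(90, 0.005, 15.0, 0.003)
```

The model makes the cadence dependence explicit: higher cadence shortens the period, raises the duty cycle, and so increases average current.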
Gas embolization of the liver in a rat model of rapid decompression.
L'Abbate, Antonio; Kusmic, Claudia; Matteucci, Marco; Pelosi, Gualtiero; Navari, Alessandro; Pagliazzo, Antonino; Longobardi, Pasquale; Bedini, Remo
2010-08-01
Occurrence of liver gas embolism after rapid decompression was assessed in 31 female rats that were decompressed in 12 min after 42 min of compression at 7 ATA (protocol A). Sixteen rats died after decompression (group I). Of the surviving rats, seven were killed at 3 h (group II), and eight at 24 h (group III). In group I, bubbles were visible in the right heart, aortic arch, liver, and mesenteric veins and on the intestinal surface. Histology showed perilobular microcavities in sinusoids, interstitial spaces, and hepatocytes. In group II, liver gas was visible in two rats. Perilobular vacuolization and significant plasma aminotransferase increase were present. In group III, liver edema was evident at gross examination in all cases. Histology showed perilobular cell swelling, vacuolization, or hydropic degeneration. Compared with basal, enzymatic markers of liver damage increased significantly. An additional 14 rats were decompressed twice (protocol B). Overall mortality was 93%. In addition to diffuse hydropic degeneration, centrilobular necrosis was frequently observed after the second decompression. Additionally, 10 rats were exposed to three decompression sessions (protocol C) with doubled decompression time. Their mortality rate decreased to 20%, but enzymatic markers still increased in surviving rats compared with predecompression, and perilobular cell swelling and vacuolization were present in five rats. This study challenges the existing paradigm in two respects: 1) the liver is not currently considered part of the pathophysiology of decompression, and 2) although significant cellular necrosis was observed in only a few animals, zonal or diffuse hepatocellular damage associated with liver dysfunction was frequently demonstrated. Liver participation in human decompression sickness should be looked for and clinically evaluated.
Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols
Nam, Junghyun; Kim, Moonseong
2014-01-01
We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
Evans, Zachary P; Renne, Walter G; Bacro, Thierry R; Mennito, Anthony S; Ludlow, Mark E; Lecholop, Michael K
2018-02-01
Existing root-analog dental implant systems have no standardized protocols regarding retentive design, surface manipulation, or prosthetic attachment design relative to the site's unique anatomy. Historically, existing systems made those design choices arbitrarily. For this report, strategies were developed that deliberately reference the adjacent anatomy, implant and restorable path of draw, and bone density for implant and retentive design. For proof of concept, dentate arches from human cadavers were scanned using cone-beam computed tomography and then digitally modeled. Teeth of interest were virtually extracted and manipulated via computer-aided design to generate root-analog implants from zirconium. We created a stepwise protocol for analyzing and developing the implant sites, implant design and retention, and prosthetic emergence and connection, all from the pre-op cone-beam data. Root-analog implants were placed at the time of extraction and examined radiographically and mechanically for fit and stability. This study provides proof of concept that retentive root-analog implants can be produced from cone-beam data while improving fit, retention, safety, esthetics, and restorability when compared to the existing protocols. These advancements may provide the critical steps necessary for clinical relevance and success of immediately placed root-analog implants. Additional studies are necessary to validate the model prior to clinical trial.
Arduino, Paolo G; Tirone, Federico; Schiorlin, Emanuele; Esposito, Marco
2015-01-01
To evaluate the difference between a single preoperative dose versus an additional two-day postoperative course of oral amoxicillin in patients undergoing conventional dental implant placement. Two dentists in two different private practices conducted this study. One hour prior to surgery, patients had to take a single prophylactic antibiotic dose, consisting of 2 g of amoxicillin orally; after implant placement, patients were randomly allocated to two different groups: protocol A (no other antibiotic administration) and protocol B (1 g of amoxicillin on the evening of the day of surgery and 1 g twice a day for the 2 days after). Outcome measures were prosthetic and implant failures, adverse events and early postoperative complications. Patients were followed up to 6 months after functional loading. Three hundred and sixty patients were randomised and treated (192 patients in one centre and 168 in the other). Five hundred and sixty-seven implants were placed. Protocol A was applied to 180 patients (278 implants) and protocol B also to 180 patients (289 implants). Data for 17 patients, 14 from protocol A and three from protocol B, were not available. No statistically significant differences were found for the reported outcomes. Two patients of protocol B experienced a prosthetic failure, losing four implants, while no prosthetic failures were reported for protocol A (P=0.4836; difference in proportions=-0.0110; 95% CI: -0.0412 to 0.0119). Five patients (3.0%) of protocol A lost five implants versus 5 patients (2.8%) who lost eight implants in protocol B (P=1.0000; difference in proportions=0.0020; 95% CI: -0.0384 to 0.0438). Three adverse events were observed in the total population, all occurring in protocol B (1.69%), with no statistically significant differences between the two groups (P=0.1199; difference in proportions=-0.0170; 95% CI: -0.0487 to 0.0059).
However, one patient experienced a severe allergic reaction requiring therapy discontinuation and hospital admission. Early postoperative complications occurred in six patients of protocol A and in four patients of protocol B, with no statistically significant differences (P=0.5170; difference in proportions=0.0130; 95% CI: -0.0254 to 0.0568). No statistically significant differences were observed between 2 g of preoperative amoxicillin and an additional 2-day postoperative course, although adverse events were reported only in the additional 2-day postoperative group. Based on these findings, it might be sufficient to routinely administer preoperatively 2 g of amoxicillin to patients undergoing routine dental implant placement procedures rather than administering additional postoperative doses.
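The difference-in-proportions comparison reported above can be reproduced arithmetically; the sketch below uses analyzable group sizes of 166 (180 minus 14 missing) and 177 (180 minus 3 missing) and a simple Wald 95% CI. The paper's exact CI method may differ, so the interval here is illustrative rather than a reproduction of the published bounds.

```python
# Difference in proportions with a Wald 95% CI (illustrative method;
# the published CI may use a different formula).
import math

def diff_proportions(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# 5 implant-loss patients out of 166 analyzable (protocol A)
# vs 5 out of 177 analyzable (protocol B).
diff, ci = diff_proportions(5, 166, 5, 177)
```

The point estimate rounds to 0.002, matching the reported difference, and the interval straddles zero, consistent with the non-significant result.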
15 CFR 781.1 - Definitions of terms used in the Additional Protocol Regulations (APR).
Code of Federal Regulations, 2010 CFR
2010-01-01
... United States of America and the International Atomic Energy Agency for the Application of Safeguards in... Additional Protocol. Agreement State. Any State of the United States with which the U.S. Nuclear Regulatory Commission (NRC) has entered into an effective agreement under Subsection 274b of the Atomic Energy Act of...
15 CFR 781.1 - Definitions of terms used in the Additional Protocol Regulations (APR).
Code of Federal Regulations, 2011 CFR
2011-01-01
... United States of America and the International Atomic Energy Agency for the Application of Safeguards in... Additional Protocol. Agreement State. Any State of the United States with which the U.S. Nuclear Regulatory Commission (NRC) has entered into an effective agreement under Subsection 274b of the Atomic Energy Act of...
15 CFR 781.1 - Definitions of terms used in the Additional Protocol Regulations (APR).
Code of Federal Regulations, 2014 CFR
2014-01-01
... United States of America and the International Atomic Energy Agency for the Application of Safeguards in... Additional Protocol. Agreement State. Any State of the United States with which the U.S. Nuclear Regulatory Commission (NRC) has entered into an effective agreement under Subsection 274b of the Atomic Energy Act of...
15 CFR 781.1 - Definitions of terms used in the Additional Protocol Regulations (APR).
Code of Federal Regulations, 2012 CFR
2012-01-01
... United States of America and the International Atomic Energy Agency for the Application of Safeguards in... Additional Protocol. Agreement State. Any State of the United States with which the U.S. Nuclear Regulatory Commission (NRC) has entered into an effective agreement under Subsection 274b of the Atomic Energy Act of...
15 CFR 781.1 - Definitions of terms used in the Additional Protocol Regulations (APR).
Code of Federal Regulations, 2013 CFR
2013-01-01
... United States of America and the International Atomic Energy Agency for the Application of Safeguards in... Additional Protocol. Agreement State. Any State of the United States with which the U.S. Nuclear Regulatory Commission (NRC) has entered into an effective agreement under Subsection 274b of the Atomic Energy Act of...
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horoshko, D B
2007-12-31
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)
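For context, the repetition protocol's error-correcting power is easy to work out: a key bit sent n times (n odd) and majority-decoded is wrong only when more than half of the copies are flipped. The parameters below are illustrative, not the paper's.

```python
# Residual error probability after majority decoding of an n-fold
# repetition code over a binary channel with flip probability p.
from math import comb

def residual_error(p, n=3):
    """P(majority decoding fails): more than n//2 of the n copies flipped."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Example: a 10% error rate drops to 2.8% after 3-fold repetition.
r = residual_error(0.1, 3)
```

The security caveat in the abstract is that this redundancy is public structure: an eavesdropper who knows the repetition protocol can tailor the attack to it.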
Ip, David; Fu, Nga Yue
2015-01-01
Background This study evaluated whether half-yearly hyaluronic acid injection together with low-level laser therapy in addition to standard conventional physical therapy can successfully postpone the need for joint replacement surgery in elderly patients with bilateral symptomatic tricompartmental knee arthritis. Methods In this prospective, double-blind, placebo-controlled study, 70 consecutive unselected elderly patients with bilateral tricompartmental knee arthritis were assigned at random to either one of two conservative treatment protocols to either one of the painful knees. Protocol A consisted of conventional physical therapy plus a sham light source plus saline injection, and protocol B consisted of protocol A with addition of half-yearly hyaluronic acid injection as well as low-level laser treatment instead of using saline and a sham light source. Treatment failure was defined as breakthrough pain necessitating joint replacement. Results Among the 140 painful knees treated with either protocol A or protocol B, only one of the 70 painful knees treated by protocol B required joint replacement, whereas 15 of the 70 painful knees treated by protocol A needed joint replacement surgery (P<0.05). Conclusion We conclude that half-yearly hyaluronic acid injections together with low-level laser therapy should be incorporated into the standard conservative treatment protocol for symptomatic knee arthritis, because it may prolong the longevity of the knee joint without the need for joint replacement. PMID:26346122
ERIC Educational Resources Information Center
Laben, Joyce
2012-01-01
With the implementation of RTI, educators are attempting to find models that are the best fit for their schools. The problem solving and standard protocol models are the two most common. This study of 65 students examines a new model, the dynamic skills protocol implemented in an elementary school starting in their fourth quarter of kindergarten…
CT protocol management: simplifying the process by using a master protocol concept
Bour, Robert K.; Rubert, Nicholas; Wendt, Gary; Pozniak, Myron; Ranallo, Frank N.
2015-01-01
This article explains a method for creating CT protocols for a wide range of patient body sizes and clinical indications, using detailed tube current information from a small set of commonly used protocols. Analytical expressions were created relating CT technical acquisition parameters which can be used to create new CT protocols on a given scanner or customize protocols from one scanner to another. Plots of mA as a function of patient size for specific anatomical regions were generated and used to identify the tube output needs for patients as a function of size for a single master protocol. Tube output data were obtained from the DICOM header of clinical images from our PACS and patient size was measured from CT localizer radiographs under IRB approval. This master protocol was then used to create 11 additional master protocols. The 12 master protocols were further combined to create 39 single and multiphase clinical protocols. Radiologist acceptance rate of exams scanned using the clinical protocols was monitored for 12,857 patients to analyze the effectiveness of the presented protocol management methods using a two‐tailed Fisher's exact test. A single routine adult abdominal protocol was used as the master protocol to create 11 additional master abdominal protocols of varying dose and beam energy. Situations in which the maximum tube current would have been exceeded are presented, and the trade‐offs between increasing the effective tube output via 1) decreasing pitch, 2) increasing the scan time, or 3) increasing the kV are discussed. Out of 12 master protocols customized across three different scanners, only one had a statistically significant acceptance rate that differed from the scanner it was customized from. The difference, however, was only 1% and was judged to be negligible. All other master protocols differed in acceptance rate insignificantly between scanners. 
The methodology described in this paper allows a small set of master protocols to be adapted among different clinical indications on a single scanner and among different CT scanners. PACS number: 87.57.Q PMID:26219005
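The tube-output trade-off discussed above can be sketched with the standard relation effective mAs = mA x rotation time / pitch: when the mA required for a large patient exceeds the tube limit, the same effective output can be reached by lowering pitch or lengthening rotation time (raising kV is the third option but changes image contrast). The numeric values below are hypothetical.

```python
# Effective-mAs bookkeeping for CT protocol scaling (hypothetical numbers).
# effective mAs = mA * rotation time (s) / pitch

def required_mA(effective_mAs, rotation_s, pitch):
    return effective_mAs * pitch / rotation_s

def pitch_for_max_mA(effective_mAs, rotation_s, max_mA):
    """Largest pitch that keeps the tube within its mA limit."""
    return max_mA * rotation_s / effective_mAs

# A large patient needs 450 effective mAs at 0.5 s rotation and pitch 1.0:
need = required_mA(450, 0.5, 1.0)        # 900 mA, beyond a 600 mA tube
pitch = pitch_for_max_mA(450, 0.5, 600)  # lowering pitch to ~0.67 suffices
```

This kind of arithmetic is what allows a single master protocol's tube-current curve to be rescaled across scanners with different tube limits.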
Chang, Ling-Hui; Tsai, Athena Yi-Jung; Huang, Wen-Ni
2016-01-01
Because resources for long-term care services are limited, timely and appropriate referral for rehabilitation services is critical for optimizing clients’ functions and successfully integrating them into the community. We investigated which client characteristics are most relevant in predicting Taiwan’s community-based occupational therapy (OT) service referral based on experts’ beliefs. Data were collected in face-to-face interviews using the Multidimensional Assessment Instrument (MDAI). Community-dwelling participants (n = 221) ≥ 18 years old who reported disabilities in the previous National Survey of Long-term Care Needs in Taiwan were enrolled. The standard for referral was the judgment and agreement of two experienced occupational therapists who reviewed the results of the MDAI. Logistic regressions and Generalized Additive Models were used for analysis. Two predictive models were proposed, one using basic activities of daily living (BADLs) and one using instrumental ADLs (IADLs). Dementia, psychiatric disorders, cognitive impairment, joint range-of-motion limitations, fear of falling, behavioral or emotional problems, expressive deficits (in the BADL-based model), and limitations in IADLs or BADLs were significantly correlated with the need for referral. Both models showed high area under the curve (AUC) values on receiver operating curve testing (AUC = 0.977 and 0.972, respectively). The probability of being referred for community OT services was calculated using the referral algorithm. The referral protocol facilitated communication between healthcare professionals to make appropriate decisions for OT referrals. The methods and findings should be useful for developing referral protocols for other long-term care services. PMID:26863544
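A referral probability from a logistic model like the ones above is computed by passing a weighted sum of client characteristics through the logistic function. The predictors and coefficients below are invented for illustration; they are not the MDAI model's actual estimates.

```python
# Toy logistic-regression referral score (hypothetical coefficients).
import math

def referral_probability(features, coef, intercept):
    """p = 1 / (1 + exp(-(intercept + sum(coef * feature))))."""
    z = intercept + sum(coef[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

coef = {"cognitive_impairment": 1.2, "fear_of_falling": 0.8, "iadl_limits": 0.5}
client = {"cognitive_impairment": 1, "fear_of_falling": 1, "iadl_limits": 3}
p = referral_probability(client, coef, intercept=-2.0)
```

In a deployed protocol, a probability above a chosen cut-off (selected from the ROC analysis) would trigger an OT referral.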
Individualized music for dementia: Evolution and application of evidence-based protocol.
Gerdner, Linda A
2012-04-22
The theory-based intervention of individualized music has been evaluated clinically and empirically leading to advancement and refinement of an evidence-based protocol, currently in its 5th edition. An expanded version of the protocol was written for professional health care providers with a consumer version tailored for family caregivers. The underlying mid-range theory is presented along with a seminal study that was followed by further research in the United States, Canada, Great Britain, France, Sweden, Norway, Japan and Taiwan. Key studies are summarized. Given its efficacy when implemented by research staff, studies have advanced to testing the intervention under real-life conditions when implemented and evaluated by trained nursing assistants in long-term care facilities and visiting family members. In addition, one study evaluated the implementation of music by family members in the home setting. Initial research focused on agitation as the dependent variable with subsequent research indicating a more holistic response such as positive affect, expressed satisfaction, and meaningful interaction with others. The article advances by describing on-line programs designed to train health care professionals in the assessment, implementation and evaluation of individualized music. In addition, Gerdner has written a story for a picture book intended for children and their families (in press). The story models principles of individualized music to elicit positive memories, reduce anxiety and agitation, and promote communication. The article concludes with implications for future research.
Open source software to control Bioflo bioreactors.
Burdge, David A; Libourel, Igor G L
2014-01-01
Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32 and 64 bit windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.
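A CSV-scripted protocol runner of the kind described can be sketched in a few lines. The column layout (time_s, parameter, value) is an assumption for illustration, not the actual BiofloSoftware file format.

```python
# Minimal sketch of a CSV protocol script interpreter (hypothetical format).
import csv
import io

SCRIPT = """time_s,parameter,value
0,agitation_rpm,200
3600,temperature_C,30
7200,agitation_rpm,400
"""

def load_protocol(text):
    """Parse the script into a time-ordered list of setpoint changes."""
    rows = csv.DictReader(io.StringIO(text))
    steps = [(float(r["time_s"]), r["parameter"], float(r["value"]))
             for r in rows]
    return sorted(steps)

def setpoints_at(steps, t):
    """Setpoints in effect at time t (the latest value wins per parameter)."""
    current = {}
    for when, param, value in steps:
        if when <= t:
            current[param] = value
    return current

steps = load_protocol(SCRIPT)
state = setpoints_at(steps, 4000)  # one hour and change into the run
```

A control loop would poll `setpoints_at` on a timer and push the values to the bioreactor; conditional logic beyond this would use the Python-based execution model the abstract mentions.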
Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model
Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks. PMID:24977229
Cryptography in the Bounded-Quantum-Storage Model
NASA Astrophysics Data System (ADS)
Schaffner, Christian
2007-09-01
This thesis initiates the study of cryptographic protocols in the bounded-quantum-storage model. On the practical side, simple protocols for Rabin Oblivious Transfer, 1-2 Oblivious Transfer and Bit Commitment are presented. No quantum memory is required for honest players, whereas the protocols can only be broken by an adversary controlling a large amount of quantum memory. The protocols are efficient, non-interactive and can be implemented with today's technology. On the theoretical side, new entropic uncertainty relations involving min-entropy are established and used to prove the security of protocols according to new strong security definitions. For instance, in the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers, the uncertainty relation allows to prove the security of QKD protocols while tolerating considerably higher error rates compared to the standard model with unbounded adversaries.
NASA Astrophysics Data System (ADS)
Zhu, Yenan; Hsieh, Yee-Hsee; Dhingra, Rishi R.; Dick, Thomas E.; Jacono, Frank J.; Galán, Roberto F.
2013-02-01
Interactions between oscillators can be investigated with standard tools of time series analysis. However, these methods are insensitive to the directionality of the coupling, i.e., the asymmetry of the interactions. An elegant alternative was proposed by Rosenblum and collaborators [M. G. Rosenblum, L. Cimponeriu, A. Bezerianos, A. Patzak, and R. Mrowka, Phys. Rev. E 65, 041909 (2002); M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. E 64, 045202 (2001)], which consists of fitting the empirical phases to a generic model of two weakly coupled phase oscillators. This allows one to obtain the interaction functions defining the coupling and its directionality. A limitation of this approach is that a solution always exists in the least-squares sense, even in the absence of coupling. To preclude spurious results, we propose a three-step protocol: (1) Determine whether a statistical dependency exists in the data by evaluating the mutual information of the phases; (2) if so, compute the interaction functions of the oscillators; and (3) validate the empirical oscillator model by comparing the joint probability of the phases obtained from simulating the model with that of the empirical phases. We apply this protocol to a model of two coupled Stuart-Landau oscillators and show that it reliably detects genuine coupling. We also apply this protocol to investigate cardiorespiratory coupling in anesthetized rats. We observe reciprocal coupling between respiration and heartbeat, and that the influence of respiration on the heartbeat is generally much stronger than vice versa. In addition, we find that the vagus nerve mediates coupling in both directions.
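Step 1 of this protocol can be illustrated with a short, self-contained sketch: two Kuramoto-type phase oscillators are integrated with and without coupling, and a histogram-based mutual information of the wrapped phases separates the two cases. All parameters (frequencies, coupling strengths, noise level, bin count) are invented for illustration and are not those used in the paper.

```python
import math, random

def simulate(eps12, eps21, n=20000, dt=0.01, seed=0):
    # Two weakly coupled phase oscillators (Kuramoto-type), integrated
    # with Euler-Maruyama; eps12 is the influence of oscillator 2 on
    # oscillator 1 and vice versa.
    rng = random.Random(seed)
    w1, w2 = 1.0, 1.3
    p1, p2 = 0.0, 1.0
    out = []
    for _ in range(n):
        d1 = w1 + eps12 * math.sin(p2 - p1)
        d2 = w2 + eps21 * math.sin(p1 - p2)
        p1 += d1 * dt + rng.gauss(0, 0.1) * math.sqrt(dt)
        p2 += d2 * dt + rng.gauss(0, 0.1) * math.sqrt(dt)
        out.append((p1 % (2 * math.pi), p2 % (2 * math.pi)))
    return out

def mutual_information(pairs, bins=16):
    # Step 1 of the protocol: histogram-based mutual information of the
    # two wrapped phases, in nats.
    n = len(pairs)
    w = 2 * math.pi / bins
    px = [0] * bins
    py = [0] * bins
    pxy = {}
    for a, b in pairs:
        i, j = min(int(a / w), bins - 1), min(int(b / w), bins - 1)
        px[i] += 1
        py[j] += 1
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
    mi = 0.0
    for (i, j), c in pxy.items():
        pij = c / n
        mi += pij * math.log(pij / ((px[i] / n) * (py[j] / n)))
    return mi

mi_coupled = mutual_information(simulate(0.5, 0.5))
mi_uncoupled = mutual_information(simulate(0.0, 0.0))
```

With these parameters the coupled pair phase-locks, so the joint phase distribution concentrates along a band and the mutual information is much larger than in the uncoupled case, where the estimate mostly reflects finite-sample bias.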
An economic and feasible Quantum Sealed-bid Auction protocol
NASA Astrophysics Data System (ADS)
Zhang, Rui; Shi, Run-hua; Qin, Jia-qi; Peng, Zhen-wan
2018-02-01
We present an economic and feasible Quantum Sealed-bid Auction protocol using quantum secure direct communication based on single photons in both the polarization and the spatial-mode degrees of freedom, where each single photon can carry two bits of classical information. Compared with previous protocols, our protocol has higher efficiency. In addition, we propose a secure post-confirmation mechanism without quantum entanglement to guarantee the security and the fairness of the auction.
NASA Astrophysics Data System (ADS)
Zhang, Yichen; Li, Zhengyu; Zhao, Yijia; Yu, Song; Guo, Hong
2017-02-01
We analyze the security of the two-way continuous-variable quantum key distribution protocol in reverse reconciliation against general two-mode attacks, which represent all accessible attacks at fixed channel parameters. Rather than considering one specific attack model, expressions for the secret key rates of the two-way protocol are derived against all accessible attack models. It is found that there is an optimal two-mode attack that minimizes the performance of the protocol in terms of both secret key rate and maximal transmission distance. We identify the optimal two-mode attack, give its specific attack model, and show the performance of the two-way protocol against it. Even under the optimal two-mode attack, the performance of the two-way protocol is still better than that of the corresponding one-way protocol, which shows the advantage of making double use of the quantum channel and the potential of long-distance secure communication using a two-way protocol.
[Change of care model in natural childbirth: Implementation in La Ribera delivery room].
Camacho-Morell, F; Romero-Martín, M J
To assess knowledge of, desire for inclusion of, and implementation of normal childbirth care protocols at La Ribera University Hospital, the reasons why they are not applied, and attendance at antepartum training activities. Cross-sectional descriptive study. A total of 186 surveys were carried out by convenience sampling among pregnant women attending fetal well-being monitoring at the hospital between 2014 and 2015. Data were collected on knowledge, wish for inclusion, protocol compliance and reasons for non-compliance, and attendance at antepartum training activities. Percentages and confidence intervals were calculated, and the Chi-square test was used to compare categorical variables. Knowledge was 77% (95% CI: 75.5-78.5) and wish for inclusion 84.6% (95% CI: 82.5-86.7). Protocol compliance ranged from 6% (nitrous oxide administration) to 91% (skin-to-skin contact). The main reasons for non-compliance were circumstances of the childbirth process (56.3%, 95% CI: 51.1-61.5). Attendance at maternal education classes was 62%, mainly primiparous women (p=0.0001) with a medium or high education level (p=0.001). Pregnant women have a high level of knowledge of, and wish for inclusion of, normal childbirth care protocols. Attendance at antepartum training activities could be improved; the main reason for non-attendance is lack of information. Compliance is good in most protocols; when they are not applied, it is due to childbirth circumstances. Remaining tasks include the introduction of additional protocols and involving pregnant women in decision-making. Copyright © 2017 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.
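For readers who want to reproduce this kind of analysis, the two reported statistics (a confidence interval for a percentage and a Chi-square comparison of categorical variables) reduce to a few lines of code. The counts below are illustrative placeholders, not the study's raw data.

```python
import math

def wald_ci(p, n, z=1.96):
    # Normal-approximation (Wald) confidence interval for a proportion;
    # adequate for sample sizes and percentages of the order reported here.
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def chi2_2x2(a, b, c, d):
    # Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    # without continuity correction.
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Illustrative numbers only (not the study's raw counts): 62% attendance
# among n = 186 respondents, and a made-up attendance-by-parity table.
lo_ci, hi_ci = wald_ci(0.62, 186)
chi2 = chi2_2x2(70, 20, 45, 51)
```

The chi-square statistic would then be compared against the chi-square distribution with 1 degree of freedom to obtain the p-value.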
Kraal, Jos J; Sartor, Francesco; Papini, Gabriele; Stut, Wim; Peek, Niels; Kemps, Hareld Mc; Bonomi, Alberto G
2016-11-01
Accurate assessment of energy expenditure provides an opportunity to monitor physical activity during cardiac rehabilitation. However, the available assessment methods, based on the combination of heart rate (HR) and body movement data, are not applicable for patients using beta-blocker medication. Therefore, we developed an energy expenditure prediction model for beta-blocker-medicated cardiac rehabilitation patients. Sixteen male cardiac rehabilitation patients (age: 55.8 ± 7.3 years, weight: 93.1 ± 11.8 kg) underwent a physical activity protocol with 11 low- to moderate-intensity common daily life activities. Energy expenditure was assessed using a portable indirect calorimeter. HR and body movement data were recorded during the protocol using unobtrusive wearable devices. In addition, patients underwent a symptom-limited exercise test and resting metabolic rate assessment. Energy expenditure estimation models were developed using multivariate regression analyses based on HR and body movement data and/or patient characteristics. In addition, a HR-flex model was developed. The model combining HR and body movement data and patient characteristics showed the highest correlation and lowest error (r 2 = 0.84, root mean squared error = 0.834 kcal/minute) with total energy expenditure. The method based on individual calibration data (HR-flex) showed lower accuracy (r 2 = 0.83, root mean squared error = 0.992 kcal/minute). Our results show that combining HR and body movement data improves the accuracy of energy expenditure prediction models in cardiac patients, similar to methods that have been developed for healthy subjects. The proposed methodology does not require individual calibration and is based on the data that are available in clinical practice. © The European Society of Cardiology 2016.
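The modeling step described here, multivariate regression of energy expenditure on HR and body movement, can be sketched as follows. The data, linear form, and coefficients are synthetic stand-ins, not the study's model.

```python
import math, random

def ols_fit(rows, y):
    # Ordinary least squares via the normal equations (X'X)b = X'y,
    # solved with Gaussian elimination with partial pivoting; each row
    # already includes a 1.0 intercept column.
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic minute-by-minute data: heart rate (bpm) and a body-movement
# count; the linear form and coefficients are illustrative only.
rng = random.Random(1)
rows, ee = [], []
for _ in range(500):
    hr = rng.uniform(60, 130)
    mov = rng.uniform(0, 400)
    rows.append([1.0, hr, mov])
    ee.append(1.2 + 0.04 * (hr - 60) + 0.004 * mov + rng.gauss(0, 0.2))

beta = ols_fit(rows, ee)
pred = [sum(b * x for b, x in zip(beta, r)) for r in rows]
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, ee)) / len(ee))
```

With enough samples the fitted coefficients recover the generating ones, and the in-sample RMSE approaches the noise level, analogous to the kcal/minute errors reported above.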
Satellite Communications Using Commercial Protocols
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Griner, James H.; Dimond, Robert; Frantz, Brian D.; Kachmar, Brian; Shell, Dan
2000-01-01
NASA Glenn Research Center has been working with industry, academia, and other government agencies in assessing commercial communications protocols for satellite and space-based applications. In addition, NASA Glenn has been developing and advocating new satellite-friendly modifications to existing communications protocol standards. This paper summarizes recent research into the applicability of various commercial standard protocols for use over satellite and space-based communications networks as well as expectations for future protocol development. It serves as a reference point from which the detailed work can be readily accessed. Areas that will be addressed include asynchronous-transfer-mode quality of service; completed and ongoing work of the Internet Engineering Task Force; data-link-layer protocol development for unidirectional link routing; and protocols for aeronautical applications, including mobile Internet protocol routing for wireless/mobile hosts and the aeronautical telecommunications network protocol.
Engineering a humanized bone organ model in mice to study bone metastases.
Martine, Laure C; Holzapfel, Boris M; McGovern, Jacqui A; Wagner, Ferdinand; Quent, Verena M; Hesami, Parisa; Wunner, Felix M; Vaquette, Cedryck; De-Juan-Pardo, Elena M; Brown, Toby D; Nowlan, Bianca; Wu, Dan Jing; Hutmacher, Cosmo Orlando; Moi, Davide; Oussenko, Tatiana; Piccinini, Elia; Zandstra, Peter W; Mazzieri, Roberta; Lévesque, Jean-Pierre; Dalton, Paul D; Taubenberger, Anna V; Hutmacher, Dietmar W
2017-04-01
Current in vivo models for investigating human primary bone tumors and cancer metastasis to the bone rely on the injection of human cancer cells into the mouse skeleton. This approach does not mimic species-specific mechanisms occurring in human diseases and may preclude successful clinical translation. We have developed a protocol to engineer humanized bone within immunodeficient hosts, which can be adapted to study the interactions between human cancer cells and a humanized bone microenvironment in vivo. A researcher trained in the principles of tissue engineering will be able to execute the protocol and yield study results within 4-6 months. Additive biomanufactured scaffolds seeded and cultured with human bone-forming cells are implanted ectopically in combination with osteogenic factors into mice to generate a physiological bone 'organ', which is partially humanized. The model comprises human bone cells and secreted extracellular matrix (ECM); however, other components of the engineered tissue, such as the vasculature, are of murine origin. The model can be further humanized through the engraftment of human hematopoietic stem cells (HSCs) that can lead to human hematopoiesis within the murine host. The humanized organ bone model has been well characterized and validated and allows dissection of some of the mechanisms of the bone metastatic processes in prostate and breast cancer.
Standardization of deep partial-thickness scald burns in C57BL/6 mice
Medina, Jorge L; Fourcaudot, Andrea B; Sebastian, Eliza A; Shankar, Ravi; Brown, Ammon W; Leung, Kai P
2018-01-01
Mouse burn models are used to understand the wound healing process, and having a reproducible model is important. The different protocols used by researchers can lead to differences in the depth of partial-thickness burn wounds. Additionally, standardizing a protocol for mouse burns in the laboratory for one strain may produce substantially different results in other strains. In our current study we describe the development of a deep partial-thickness burn model in C57BL/6 mice using hot water scalding as the source of thermal injury. As part of our model development we designed a template with specifications to allow even contact of bare mouse skin (2×3 cm) with hot water while protecting the rest of the mouse. Burn depth was evaluated with H&E, Masson’s trichrome, and TUNEL staining. Final results were validated with pathology analysis. A water temperature of 54°C with a scalding time of 20 seconds produced consistent deep partial-thickness burns with the equipment described. Other than temperature and time, factors such as template materials and cooling steps after the burn could affect the uniformity of the burns. These findings are useful to burn research by providing key parameters essential for researchers to simplify the development of their own mouse burn models. PMID:29755839
Quantum-key-distribution protocol with pseudorandom bases
NASA Astrophysics Data System (ADS)
Trushechkin, A. S.; Tregubov, P. A.; Kiktenko, E. O.; Kurochkin, Y. V.; Fedorov, A. K.
2018-01-01
Quantum key distribution (QKD) offers a way of establishing information-theoretically secure communications. An important part of QKD technology is a high-quality random number generator for the quantum-state preparation and for post-processing procedures. In this work, we consider a class of prepare-and-measure QKD protocols, utilizing additional pseudorandomness in the preparation of quantum states. We study one such protocol and analyze its security against the intercept-resend attack. We demonstrate that, for single-photon sources, the considered protocol gives better secret key rates than the BB84 and the asymmetric BB84 protocols. However, the protocol strictly requires single-photon sources.
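For context, the intercept-resend attack analyzed here has a well-known signature on plain BB84: when the eavesdropper measures and resends every photon in a random basis, the sifted-key error rate rises to 25%. A Monte-Carlo sketch (ideal single-photon devices assumed; this models baseline BB84, not the pseudorandom-basis protocol itself):

```python
import random

def qber_intercept_resend(n=200000, seed=7):
    # Full intercept-resend attack on BB84: measuring a state prepared
    # in the conjugate basis yields a uniformly random bit, while a
    # same-basis measurement is deterministic (ideal devices assumed).
    rng = random.Random(seed)
    sifted = errors = 0
    for _ in range(n):
        a_bit, a_basis = rng.randrange(2), rng.randrange(2)
        e_basis = rng.randrange(2)
        e_bit = a_bit if e_basis == a_basis else rng.randrange(2)
        b_basis = rng.randrange(2)
        b_bit = e_bit if b_basis == e_basis else rng.randrange(2)
        if b_basis == a_basis:  # sifting: keep matching-basis rounds only
            sifted += 1
            errors += (b_bit != a_bit)
    return errors / sifted

qber = qber_intercept_resend()
```

Half of the sifted rounds pass through an eavesdropper who guessed the wrong basis, and those rounds are wrong half the time, giving the expected 1/4 error rate.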
Jamaludin, Ummu K; M Suhaimi, Fatanah; Abdul Razak, Normy Norfiza; Md Ralib, Azrina; Mat Nor, Mohd Basri; Pretty, Christopher G; Humaidi, Luqman
2018-08-01
Blood glucose variability is common in critical care and is not confined to patients with diabetes mellitus. To minimise the risk of high blood glucose in critically ill patients, the Stochastic Targeted Blood Glucose Control Protocol is used in intensive care units at hospitals worldwide. This study therefore focuses on the performance of the stochastic modelling protocol in comparison to the current blood glucose management protocols in the Malaysian intensive care unit, and also assesses the effectiveness of the Stochastic Targeted Blood Glucose Control Protocol when it is applied to a cohort of diabetic patients. Retrospective data from 210 patients were obtained from a general hospital in Malaysia from May 2014 until June 2015, of whom 123 patients had comorbid diabetes mellitus. Protocol performance was compared through virtual trial simulations in which blood glucose was fitted with a physiological model, through calculation of the mean simulation error, and through several graphical comparisons using stochastic modelling. The Stochastic Targeted Blood Glucose Control Protocol reduced hyperglycaemia by 16% in the diabetic and 9% in the nondiabetic cohort. The protocol helped to control blood glucose in the target range of 4.0-10.0 mmol/L for 71.8% in the diabetic and 82.7% in the nondiabetic cohort, while reducing treatment time by up to 71 h for the 123 diabetic patients and 39 h for the 87 nondiabetic patients. It is concluded that the Stochastic Targeted Blood Glucose Control Protocol is better at reducing hyperglycaemia than the current blood glucose management protocol in the Malaysian intensive care unit. Hence, the current Malaysian intensive care unit protocols need to be modified to enhance their performance, especially in the integration of insulin and nutrition intervention to decrease hyperglycaemia incidences. Improvement of the u_en model in the Stochastic Targeted Blood Glucose Control Protocol is also required to adapt it to the diabetic cohort. Copyright © 2018 Elsevier B.V. All rights reserved.
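As a small illustration of how time-in-band statistics like the 4.0-10.0 mmol/L figures above are tallied, a per-sample version (the readings below are made up, and real analyses typically weight by the time between measurements):

```python
def glycaemic_stats(bg, lo=4.0, hi=10.0):
    # Fraction of blood-glucose readings (mmol/L) in the target band,
    # below it, and above it; a simple per-sample tally that assumes
    # equally spaced measurements.
    n = len(bg)
    in_band = sum(lo <= g <= hi for g in bg)
    hypo = sum(g < lo for g in bg)
    hyper = sum(g > hi for g in bg)
    return {"in_band": in_band / n, "hypo": hypo / n, "hyper": hyper / n}

# Hypothetical hourly readings for one patient.
stats = glycaemic_stats([3.8, 5.2, 6.1, 7.4, 9.9, 10.5, 12.0, 8.0])
```

Here 5 of 8 readings fall in the 4.0-10.0 mmol/L band, one is hypoglycaemic and two are hyperglycaemic.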
Complementary and Alternative Medicine Use Among US Navy and Marine Corps Personnel
2007-05-16
subjects in research (Protocol NHRC.2001.0001). Postal survey The choice of questions and question layouts for the survey instrument were modeled ... health problems within the past 12 months. The optically scanned 10-page survey instrument was designed to take approximately 30 minutes to complete ... the survey instrument was refined before the initial mailing. Additionally, a random sample of 33% of individuals who completed the initial
Secure multi-party quantum summation based on quantum Fourier transform
NASA Astrophysics Data System (ADS)
Yang, Hui-Yi; Ye, Tian-Yu
2018-06-01
In this paper, we propose a novel secure multi-party quantum summation protocol based on the quantum Fourier transform, where the traveling particles are transmitted in a tree-type mode. The party who prepares the initial quantum states is assumed to be semi-honest, which means that she may misbehave on her own but will not conspire with anyone. The proposed protocol can resist both outside attacks and participant attacks. In particular, one party cannot obtain the other parties' private integer strings, and the protocol is secure against a colluding attack by up to n - 2 parties, where n is the number of parties. In addition, the proposed protocol computes addition modulo d and implements the calculation in a secret-by-secret way rather than a bit-by-bit way.
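The classical core of such a summation, masking each input so that only the total modulo d is revealed, can be sketched with additive secret sharing; this is an illustrative classical analogue, not the quantum protocol itself:

```python
import random

def secure_sum(secrets, d, seed=3):
    # Classical additive-secret-sharing analogue of secure summation:
    # each party splits its integer mod d into one share per party, so
    # any coalition missing at least one party's shares learns nothing
    # about an individual input, yet the total mod d is recoverable.
    rng = random.Random(seed)
    n = len(secrets)
    shares = []
    for s in secrets:
        parts = [rng.randrange(d) for _ in range(n - 1)]
        parts.append((s - sum(parts)) % d)  # shares sum to the secret mod d
        shares.append(parts)
    # Party j locally adds the j-th share from every party; the partial
    # sums are then combined mod d to reveal only the total.
    partial = [sum(shares[i][j] for i in range(n)) % d for j in range(n)]
    return sum(partial) % d

total = secure_sum([5, 11, 9], d=16)  # (5 + 11 + 9) mod 16 = 9
```

The "secret-by-secret" character of the quantum protocol is mirrored here: whole residues mod d are combined in one shot rather than bit by bit.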
Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)
Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.
2015-01-01
These datasets comprise the model output from phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are the modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.
Logan, Ryan W.; McCulley, Walter D.; Seggio, Joseph A.; Rosenwasser, Alan M.
2011-01-01
Background Alcohol withdrawal is associated with behavioral and chronobiological disturbances that may persist during protracted abstinence. We previously reported that C57BL/6J (B6) mice show marked but temporary reductions in running-wheel activity, and normal free-running circadian rhythms, following a 4-day chronic intermittent ethanol vapor (CIE) exposure (16 hours of ethanol vapor exposure alternating with 8 hours of withdrawal). In the present experiments, we extend these observations in two ways: (1) by examining post-CIE locomotor activity in C3H/HeJ (C3H) mice, an inbred strain characterized by high sensitivity to ethanol withdrawal, and (2) by directly comparing the responses of B6 and C3H mice to a longer-duration CIE protocol. Methods In Experiment 1, C3H mice were exposed to the same 4-day CIE protocol used in our previous study with B6 mice (referred to here as the 1-cycle CIE protocol). In Experiment 2, C3H and B6 mice were exposed to three successive 4-day CIE cycles, each separated by 2 days of withdrawal (the 3-cycle CIE protocol). Running-wheel activity was monitored prior to and following CIE, and post-CIE activity was recorded in constant darkness to allow assessment of free-running circadian period and phase. Results C3H mice displayed pronounced reductions in running-wheel activity that persisted for the duration of the recording period (up to 30 days) following both 1-cycle (Experiment 1) and 3-cycle (Experiment 2) CIE protocols. In contrast, B6 mice showed reductions in locomotor activity that persisted for about one week following the 3-cycle CIE protocol, similar to the results of our previous study using a 1-cycle protocol in this strain. Additionally, C3H mice showed significant shortening of free-running period following the 3-cycle, but not the 1-cycle, CIE protocol, while B6 mice showed normal free-running rhythms. Conclusions These results reveal genetic differences in the persistence of ethanol withdrawal-induced hypo-locomotion. 
In addition, chronobiological alterations during extended abstinence may depend on both genetic susceptibility and an extended prior withdrawal history. The present data establish a novel experimental model for long-term behavioral and circadian disruptions associated with ethanol withdrawal. PMID:22013893
NASA Astrophysics Data System (ADS)
Gallagher, Kerry
2016-05-01
Flowers et al. (2015) propose a framework for reporting modeling results for thermochronological data problems, particularly when using inversion approaches. In the final paragraph, they state 'we hope that the suggested reporting table template will stimulate additional community discussion about modeling philosophies and reporting formats'. In this spirit, the purpose of this comment is to suggest that they have underplayed the importance of presenting a comparison of the model predictions with the observations. An inversion-based modeling approach aims to identify those models which make predictions consistent, perhaps to varying degrees, with the observed data. The concluding section includes the phrase 'clear documentation of the model inputs and outputs', but their example from the Grand Canyon shows only the observed data.
Further developments in generating type-safe messaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, R.; King, C.; /Fermilab
2011-11-01
At ICALEPCS 09, we introduced a source code generator that allows processes to communicate safely using data types native to each host language. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization in network packet size and the addition of a new protocol data type. The protocol compiler is continuing to prove itself as an easy and robust way to get applications written in different languages hosted on different computer architectures to communicate. We have two active Erlang projects that are using the protocol compiler to access ACNET data at high data rates. We also used the protocol compiler output to deliver ACNET data to an iPhone/iPad application. Since it takes an average of two weeks to support a new language, we're willing to expand the protocol compiler to support new languages that our community uses.
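What such a protocol compiler emits is, at bottom, type-safe marshalling code. A hand-written sketch of the idea (the message layout and field names are invented for illustration, not the Fermilab compiler's actual output):

```python
import struct

# Sketch of generated marshalling code for a message carrying a 16-bit
# type tag and a 64-bit float payload, in network byte order. The wire
# layout here is hypothetical; a real protocol compiler would emit the
# equivalent pack/unpack pair in each supported client language.
FMT = "!Hd"  # uint16 tag + float64 value, big-endian, no padding

def pack_reading(tag, value):
    return struct.pack(FMT, tag, value)

def unpack_reading(buf):
    tag, value = struct.unpack(FMT, buf)
    return {"tag": tag, "value": value}

msg = pack_reading(42, 3.25)
```

Because both sides are generated from one message description, a field added or retyped in the description changes every language binding at once, which is the type-safety argument made above.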
Experimental eavesdropping attack against Ekert's protocol based on Wigner's inequality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovino, F. A.; Colla, A. M.; Castagnoli, G.
2003-09-01
We experimentally implemented an eavesdropping attack against the Ekert protocol for quantum key distribution based on the Wigner inequality. We demonstrate a serious lack of security of this protocol when the eavesdropper gains total control of the source. In addition we tested a modified Wigner inequality which should guarantee a secure quantum key distribution.
Code of Federal Regulations, 2010 CFR
2010-07-01
... period some quantity of consumption that the nation is permitted under the Montreal Protocol. (2) Trade... Party to the Protocol as set forth in this paragraph (b). A person may only receive consumption from... maximum consumption that the nation is allowed under the Protocol minus the quantity (in kilograms) traded...
Castro, Natália A; Pfeifer, Luiz F M; Andrade, Jéssica S; Rincón, Joao A A; Pegoraro, Ligia M Cantarelli; Schneider, Augusto
2018-01-01
Paraoxonase-1 (PON1) activity has been associated with improvement in ovarian function in early postpartum dairy cows and improved in vitro embryo development. The aim of the current study was to evaluate the potential association among PON1 activity and follicular growth, diameter of the preovulatory follicle and pregnancy per artificial insemination (AI) service in cattle. In Experiment 1, cows (n=33) were subjected to an estradiol-progesterone based protocol to control time of ovulation. Starting on Day 8 of the protocol, follicular growth and serum PON1 activity were monitored. Cows were separated according to the occurrence of ovulation into two groups: Ovulatory (Ov; n=22) and Anovulatory (Anov; n=11). The serum activity of PON1 was not different between Ov and Anov cows (P=0.94). In addition, using a regression model there was no effect of serum PON1 activity on the diameter of the dominant follicle (r 2 = 0.00; P=0.99). In Experiment 2, cows (n=193) were submitted to the same hormonal protocol as in Experiment 1. On the day of the timed artificial insemination (TAI), the diameter of the dominant follicle was evaluated and blood samples were collected for analysis of PON1 activity. According to serum PON1 activity, cows were divided into three groups: Low (<70 U/mL), Medium (70-90 U/mL) or High (>90 U/mL) PON1 activity. The overall pregnancy rate was 62.7% (121/193), with no difference among PON1 activity groups. Additionally, using a regression model there was no effect of serum PON1 activity on the diameter of the preovulatory follicle (r 2 = 0.03; P=0.65) or pregnancy rate (r 2 = 0.005; P=0.94). The results of this study indicate that there is no effect of serum PON1 activity on the diameter of the preovulatory follicle or establishment of pregnancy in cows submitted to ovulation synchronization protocols. Copyright © 2017 Elsevier B.V. All rights reserved.
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
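The V&V team's approach, deriving test cases from transition paths in a state model, can be sketched in miniature. The states and events below are invented toy examples, not RMP's actual state machine:

```python
from collections import deque

# Toy model-based-testing sketch: a protocol state machine expressed as
# a transition table, with test cases derived from transition paths.
MODEL = {
    ("closed", "open"): "listening",
    ("listening", "data"): "listening",
    ("listening", "close"): "closed",
}

def derive_paths(start, max_len):
    # Breadth-first enumeration of event sequences up to max_len; each
    # sequence is one test case to replay against the implementation.
    paths, queue = [], deque([(start, [])])
    while queue:
        state, events = queue.popleft()
        if events:
            paths.append(events)
        if len(events) < max_len:
            for (s, e), nxt in MODEL.items():
                if s == state:
                    queue.append((nxt, events + [e]))
    return paths

def run_model(events, start="closed"):
    # Reference oracle: the state the model predicts after the events.
    state = start
    for e in events:
        state = MODEL[(state, e)]
    return state

cases = derive_paths("closed", 3)
```

Each derived sequence is replayed against the implementation and its resulting state compared with `run_model`, which is the "dialogue between teams" role that test cases played during RMP development.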
Detection of Early lung Cancer Among Military Personnel (DECAMP)
2017-10-01
addition of two new recruitment sites. We have recruited ~75% of the 500 total subjects in the indeterminate pulmonary nodule study (Protocol 1), and ... 60% of the 800 total subjects in the longitudinal screening study (Protocol 2). We have also added a junior faculty pulmonary physician and scientific ... supplement infrastructure support within DECAMP and pursue additional biomarker studies.
Research in DRM architecture based on watermarking and PKI
NASA Astrophysics Data System (ADS)
Liu, Ligang; Chen, Xiaosu; Xiao, Dao-ju; Yi, Miao
2005-02-01
We analyze the strengths and weaknesses of current digital copyright protection systems and design a security protocol model for digital copyright protection that balances the validity of digital media use, integrity, security of transmission, and fairness of trade. We give a detailed formal description of the protocol model and analyze the relationships among the entities involved in digital work copyright protection. Analysis of the security and capability of the protocol model shows that the model is secure and practical.
Van Calster, B; Bobdiwala, S; Guha, S; Van Hoorde, K; Al-Memar, M; Harvey, R; Farren, J; Kirk, E; Condous, G; Sur, S; Stalder, C; Timmerman, D; Bourne, T
2016-11-01
A uniform rationalized management protocol for pregnancies of unknown location (PUL) is lacking. We developed a two-step triage protocol to select PUL at high risk of ectopic pregnancy (EP), based on serum progesterone level at presentation (step 1) and the serum human chorionic gonadotropin (hCG) ratio, defined as the ratio of hCG at 48 h to hCG at presentation (step 2). This was a cohort study of 2753 PUL (301 EP), involving a secondary analysis of prospectively and consecutively collected PUL data from two London-based university teaching hospitals. Using a chronological split, we used 1449 PUL for development and 1304 for validation. We aimed to assign PUL as low risk with high confidence (high negative predictive value (NPV)) while classifying most EP as high risk (high sensitivity). The first triage step assigned PUL as low risk using a threshold of serum progesterone at presentation. The remaining PUL were triaged using a novel logistic regression risk model based on the hCG ratio and initial serum progesterone (second step), defining low risk as an estimated EP risk of < 5%. On validation, initial serum progesterone ≤ 2 nmol/L (step 1) classified 16.1% of PUL as low risk. Second-step classification with the risk model selected an additional 46.0% of all PUL as low risk. Overall, the two-step protocol classified 62.1% of PUL as low risk, with an NPV of 98.6% and a sensitivity of 92.0%. When the risk model was used in isolation (i.e. without the first step), 60.5% of PUL were classified as low risk with 99.1% NPV and 94.9% sensitivity. PUL can be classified efficiently into being either high or low risk for complications using a two-step protocol involving initial progesterone and hCG levels and the hCG ratio. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
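The two-step decision logic can be sketched directly; the logistic-model coefficients below are invented placeholders, not the published model:

```python
import math

def triage_pul(progesterone, hcg0, hcg48, coef=(2.0, -4.0, -0.05)):
    # Two-step triage sketch. Step 1: initial progesterone <= 2 nmol/L
    # assigns low risk outright. Step 2: logistic model on the hCG
    # ratio (48 h / presentation) and initial progesterone, with low
    # risk defined as estimated EP risk < 5%. The coefficients are
    # invented placeholders chosen so that a plateauing hCG ratio
    # raises the estimated risk, as in the real model's direction.
    if progesterone <= 2.0:
        return "low risk", 0.0
    b0, b_ratio, b_prog = coef
    ratio = hcg48 / hcg0
    z = b0 + b_ratio * ratio + b_prog * progesterone
    risk = 1.0 / (1.0 + math.exp(-z))
    return ("low risk" if risk < 0.05 else "high risk"), risk

# A doubling hCG with normal progesterone triages as low risk here.
label, risk = triage_pul(progesterone=25.0, hcg0=400.0, hcg48=900.0)
```

A plateauing hCG ratio near 1 with modest progesterone would instead exceed the 5% threshold under these placeholder coefficients and be triaged as high risk.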
Solution conformation of carbohydrates: a view by using NMR assisted by modeling.
Díaz, Dolores; Canales-Mayordomo, Angeles; Cañada, F Javier; Jiménez-Barbero, Jesús
2015-01-01
Structural elucidation of complex carbohydrates in solution is not a trivial task. From the NMR viewpoint, the limited chemical shift dispersion of sugar NMR spectra demands the combination of a variety of NMR techniques as well as the employment of molecular modeling methods. Herein, a general protocol for the assignment of resonances and the determination of inter-proton distances within saccharides by homonuclear and heteronuclear experiments (i.e., ¹H and ¹³C) is described. In addition, several computational tools and procedures for obtaining a final ensemble of geometries that represent the structure in solution are presented.
Protocol biopsies in renal transplantation: prognostic value of structural monitoring.
Serón, D; Moreso, F
2007-09-01
The natural history of renal allograft damage has been characterized in serial protocol biopsies. The prevalence of subclinical rejection (SCR) is maximal during the first months and is associated with the progression of interstitial fibrosis/tubular atrophy (IF/TA) and decreased graft survival. IF/TA progresses rapidly during the first months and constitutes an independent predictor of graft survival. IF/TA associated with transplant vasculopathy, SCR, or transplant glomerulopathy implies a poorer prognosis than IF/TA without additional lesions. These observations suggest that protocol biopsies could be considered a surrogate of graft survival. Preliminary data suggest that the predictive value of protocol biopsies is not inferior to that of acute rejection or renal function. Additionally, protocol biopsies have been employed as a secondary efficacy variable in clinical trials. This strategy has been useful to demonstrate a decrease in the progression of IF/TA in some calcineurin-free regimens. Quantification of renal damage is associated with graft survival, suggesting that quantitative parameters might improve the predictive value of protocol biopsies. Validation of protocol biopsies as a surrogate of graft survival is actively pursued, as classical surrogates of graft outcome, such as acute rejection, have become less useful because of their decreased prevalence under current immunosuppression.
Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2014-01-01
Coarse-grained (CG) modeling is a well-acknowledged simulation approach for getting insight into long-time scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific-requiring user-defined assumptions about the folding scenario-to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for the simulations of long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems-the prediction results agreed well with the experimental results.
Similarity-based modeling in large-scale prediction of drug-drug interactions.
Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P
2014-09-01
Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented.
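The similarity-bridging idea behind the protocol can be sketched minimally: a candidate pair inherits evidence from a known DDI in proportion to how similar its drugs are to the interacting pair. The Tanimoto measure over fingerprint-like feature sets stands in for the paper's several similarity sources; all names below are illustrative.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two feature sets (e.g. fingerprint bits)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def ddi_score(candidate_pair, known_ddis, features):
    """Score candidate pair (d1, d2) by its best similarity-bridged match to a
    known DDI: the pair inherits evidence from known pair (k1, k2) in
    proportion to sim(d1, k1) * sim(d2, k2), maximized over both orientations
    and over all known DDIs."""
    d1, d2 = candidate_pair
    best = 0.0
    for k1, k2 in known_ddis:
        s = max(
            tanimoto(features[d1], features[k1]) * tanimoto(features[d2], features[k2]),
            tanimoto(features[d1], features[k2]) * tanimoto(features[d2], features[k1]),
        )
        best = max(best, s)
    return best
```

In the full protocol this score would be computed for every similarity source and every candidate pair against the reference standard database, then thresholded or ranked.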
Semi-Structured Interview Protocol for Constructing Logic Models
ERIC Educational Resources Information Center
Gugiu, P. Cristian; Rodriguez-Campos, Liliana
2007-01-01
This paper details a semi-structured interview protocol that evaluators can use to develop a logic model of a program's services and outcomes. The protocol presents a series of questions, which evaluators can ask of specific program informants, that are designed to: (1) identify key informants basic background and contextual information, (2)…
Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J
2018-01-01
The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.
A Constrained and Versioned Data Model for TEAM Data
NASA Astrophysics Data System (ADS)
Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.
2009-04-01
The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it consumes and executes spatio-temporal queries, and analytical functions that are performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. bird, butterfly and trees), sampling unit, person, role, protocol, site and the relationship of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. 
Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block, sampling unit and time range. The operation insertSamplingUnit(sampling unit, site, protocol) saves a new sampling unit into the data model and links it with the site and protocol. The operation updateSamplingUnit(sampling_unit_id, attribute, value) changes the attribute (e.g. latitude or longitude) of the sampling unit to the specified value. The operation insertData(observation record, site, protocol, sampling unit, timestamps, data collectors) saves a new observation record into the database and associates it with the specified objects. The operation updateData(protocol, data_id, attribute, value) modifies the attribute of an existing observation record to the specified value. All the insert or update operations require: 1) authorization, to ensure the user has the necessary privileges to perform the operation; 2) timestamp validation, to ensure the observation timestamps are in the designated time range specified in the sampling schedule; 3) data validation, to check that the data records use correct taxonomy terms and data values. No authorization is performed for get operations, but under some specific conditions a username may be required for the purpose of authentication. Along with the validations above, the TEAM data model also supports human-based validation of observed data through the Data Review subsystem to ensure data quality. The data review is implemented by adding two attributes, review_tag and review_comment, to each observation data record. The attribute review_tag is used by a reviewer to specify the quality of data, and the attribute review_comment is for reviewers to give more information when a problem is identified.
The review_tag attribute can be populated either by the system conducting QA/QC tests or by pre-specified scientific experts. The following is the review operation, which is actually a special case of the operation updateData: The operation updateReview(protocol, data_id, judgment, comment) sets the attributes review_tag and review_comment to the specified values. By systematically tracking every step, the TEAM data model can roll back to any previous state. This is achieved by introducing a historical data container for each editable object type. When the operation updateData is applied to an object to modify its attribute, the object is tagged with the current timestamp and the name of the user who performed the operation, the tagged object is then moved into the historical data container, and finally a new object is created with the new value for the specified attribute. The diagram illustrates the architecture of the TEAM data management system. A data collector can use the Data Ingestion subsystem to load new data records into the TEAM data model. The system establishes a first level of review (i.e. meets minimum data standards via QA/QC tests). Further review is done by experts, who can verify and provide their comments on data records through the Data Review subsystem. The data editor can then address data records based on the reviewers' comments. Users can use the Data Query and Download application to find data by sites, protocols and time ranges. The Data Query and Download system packages the selected data with the data license and important metadata information into a single package and delivers it to the user.
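The archive-then-modify versioning described above can be sketched as follows; the class and attribute names are illustrative, not the actual TEAM schema.

```python
import time

class TeamStore:
    """Minimal sketch of the versioned update described above: modifying a
    record first archives a tagged copy in a historical container, so any
    previous state can be restored."""

    def __init__(self):
        self.records = {}   # data_id -> current attribute dict
        self.history = []   # archived (tagged) prior versions

    def insert_data(self, data_id, record):
        self.records[data_id] = dict(record)

    def update_data(self, data_id, attribute, value, user):
        old = self.records[data_id]
        # Tag the outgoing version with timestamp and user, then archive it.
        archived = dict(old, _edited_by=user, _edited_at=time.time())
        self.history.append((data_id, archived))
        # Create the new current version with the modified attribute.
        self.records[data_id] = dict(old, **{attribute: value})

    def rollback(self, data_id):
        """Restore the most recently archived version of a record."""
        for i in range(len(self.history) - 1, -1, -1):
            rid, archived = self.history[i]
            if rid == data_id:
                # Strip the bookkeeping tags when restoring.
                self.records[data_id] = {
                    k: v for k, v in archived.items() if not k.startswith("_")
                }
                del self.history[i]
                return
        raise KeyError(data_id)
```

The review operation would be a thin wrapper over update_data that writes the review_tag and review_comment attributes.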
Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE
NASA Astrophysics Data System (ADS)
Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.
2006-02-01
A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. 
Finally, the image quality validation study revealed a good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of different volume spheres and two different (clinical level based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.
On Robust Key Agreement Based on Public Key Authentication
NASA Astrophysics Data System (ADS)
Hao, Feng
We describe two new attacks on the HMQV protocol. The first attack raises a serious question on the basic definition of "authentication" in HMQV, while the second attack is generally applicable to many other protocols. In addition, we present a new authenticated key agreement protocol called YAK. Our approach is to depend on well-established techniques such as Schnorr's signature. Among all the related protocols, YAK appears to be the simplest so far. We believe simplicity is an important engineering principle.
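As an illustration of the kind of well-established primitive the abstract relies on, here is a toy Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic. The tiny group parameters are for demonstration only and offer no security; this is a generic Schnorr sketch, not the YAK protocol itself.

```python
import hashlib
import secrets

# Toy parameters: p = 23 is a safe prime (p = 2q + 1 with q = 11) and g = 2
# generates the order-q subgroup. Real deployments use large groups.
P, Q, G = 23, 11, 2

def schnorr_prove(x):
    """Prove knowledge of x such that X = G^x mod P, without revealing x."""
    X = pow(G, x, P)
    v = secrets.randbelow(Q)            # ephemeral secret
    V = pow(G, v, P)                    # commitment
    h = hashlib.sha256(f"{G}:{V}:{X}".encode()).digest()
    c = int.from_bytes(h, "big") % Q    # Fiat-Shamir challenge
    r = (v - x * c) % Q                 # response
    return X, V, r

def schnorr_verify(X, V, r):
    """Accept iff V == G^r * X^c mod P, i.e. the prover knew log_G(X)."""
    h = hashlib.sha256(f"{G}:{V}:{X}".encode()).digest()
    c = int.from_bytes(h, "big") % Q
    return V == (pow(G, r, P) * pow(X, c, P)) % P
```

Correctness follows from G^r * X^c = G^(v - xc) * G^(xc) = G^v = V; tampering with any component breaks the equation.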
NASA Astrophysics Data System (ADS)
Gao, Gan; Wang, Li-Ping
2010-11-01
We propose a quantum secret sharing protocol in which Bell states in a high-dimension Hilbert space are employed. The biggest advantage of our protocol is its high source capacity. Compared with previous secret sharing protocols, ours achieves higher controlling efficiency. In addition, as decoy states in the high-dimension Hilbert space are used, we need not destroy quantum entanglement in order to check the channel security.
A new quantum sealed-bid auction protocol with secret order in post-confirmation
NASA Astrophysics Data System (ADS)
Wang, Jing-Tao; Chen, Xiu-Bo; Xu, Gang; Meng, Xiang-Hua; Yang, Yi-Xian
2015-10-01
A new security protocol for quantum sealed-bid auction is proposed to resist the collusion attack from some malicious bidders. The most significant feature of this protocol is that bidders prepare their particles with secret order in post-confirmation for encoding bids. In addition, a new theorem and its proof are given based on the theory of combinatorial mathematics, which can be used as evaluation criteria for the collusion attack. It is shown that the new protocol is immune to the collusion attack and meets the demand for a secure auction. Compared with those previous protocols, the security, efficiency and availability of the proposed protocol are largely improved.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
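The two-component model above (long-term trend plus the dominant 12-h fluctuation) can be sketched without the wavelet machinery: a centered moving average stands in for the MRA smoothing, and the 12-h component is fitted by least squares through its Fourier coefficients. This is an illustrative simplification of the paper's methodology, not its implementation.

```python
import math

def moving_average(x, window):
    """Crude stand-in for the wavelet smoothing: a centered moving average
    (window +/- window//2 samples) extracts the long-term trend."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def fit_12h_component(residual, dt_hours=1.0):
    """Least-squares cosine/sine amplitudes of a 12-hour cycle in the
    residual, computed as its Fourier coefficients at that frequency."""
    w = 2 * math.pi / 12.0
    n = len(residual)
    a = 2 / n * sum(r * math.cos(w * i * dt_hours) for i, r in enumerate(residual))
    b = 2 / n * sum(r * math.sin(w * i * dt_hours) for i, r in enumerate(residual))
    return a, b

def forecast(trend_slope, trend_level, a, b, t_hours):
    """Extrapolate linear trend plus the fitted 12-hour cycle."""
    w = 2 * math.pi / 12.0
    return (trend_level + trend_slope * t_hours
            + a * math.cos(w * t_hours) + b * math.sin(w * t_hours))
```

The paper instead models the weekly approximations of both components with low-order ARIMA processes; the decomposition shown here is only the skeleton of that pipeline.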
75 FR 4323 - Additional Quantitative Fit-testing Protocols for the Respiratory Protection Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-27
... respirators (500 and 1000 for protocols 1 and 2, respectively). However, OSHA could not evaluate the results... the values of these descriptive statistics for revised PortaCount[supreg] QNFT protocols 1 (at RFFs of 100 and 500) and 2 (at RFFs of 200 and 1000). Table 2--Descriptive Statistics for RFFs of 100 and 200...
Code of Federal Regulations, 2012 CFR
2012-07-01
... listed in appendix C to this subpart (Parties to the Montreal Protocol) must agree either to transfer to... permitted under the Montreal Protocol or to receive from the person for the current control period some... by the United States to the Secretariat of the Montreal Protocol for an essential use exemption may...
Code of Federal Regulations, 2013 CFR
2013-07-01
... listed in appendix C to this subpart (Parties to the Montreal Protocol) must agree either to transfer to... permitted under the Montreal Protocol or to receive from the person for the current control period some... by the United States to the Secretariat of the Montreal Protocol for an essential use exemption may...
Code of Federal Regulations, 2011 CFR
2011-07-01
... quantity of production that the nation is permitted under the Montreal Protocol or to receive from the... Article 5 allowances, for a specified control period through trades with another Party to the Protocol as... this subpart that is also listed in Appendix C, Annex 1 of the Protocol as having ratified the Beijing...
Code of Federal Regulations, 2011 CFR
2011-07-01
... listed in appendix C to this subpart (Parties to the Montreal Protocol) must agree either to transfer to... permitted under the Montreal Protocol or to receive from the person for the current control period some... by the United States to the Secretariat of the Montreal Protocol for an essential use exemption may...
Code of Federal Regulations, 2012 CFR
2012-07-01
... quantity of production that the nation is permitted under the Montreal Protocol or to receive from the... Article 5 allowances, for a specified control period through trades with another Party to the Protocol as... this subpart that is also listed in Appendix C, Annex 1 of the Protocol as having ratified the Beijing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... listed in appendix C to this subpart (Parties to the Montreal Protocol) must agree either to transfer to... permitted under the Montreal Protocol or to receive from the person for the current control period some... by the United States to the Secretariat of the Montreal Protocol for an essential use exemption may...
Code of Federal Regulations, 2010 CFR
2010-07-01
... this subpart (Parties to the Montreal Protocol) must agree to transfer to the person for the current control period some amount of production that the nation is permitted under the Montreal Protocol. If the... Protocol for class I, Group I through Group V and Group VII controlled substances until January 1, 1996 and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... this subpart (Parties to the Montreal Protocol) must agree to transfer to the person for the current control period some amount of production that the nation is permitted under the Montreal Protocol. If the... Protocol for class I, Group I through Group V and Group VII controlled substances until January 1, 1996 and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... quantity of production that the nation is permitted under the Montreal Protocol or to receive from the... Article 5 allowances, for a specified control period through trades with another Party to the Protocol as... this subpart that is also listed in Appendix C, Annex 1 of the Protocol as having ratified the Beijing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... this subpart (Parties to the Montreal Protocol) must agree to transfer to the person for the current control period some amount of production that the nation is permitted under the Montreal Protocol. If the... Protocol for class I, Group I through Group V and Group VII controlled substances until January 1, 1996 and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... this subpart (Parties to the Montreal Protocol) must agree to transfer to the person for the current control period some amount of production that the nation is permitted under the Montreal Protocol. If the... Protocol for class I, Group I through Group V and Group VII controlled substances until January 1, 1996 and...
Code of Federal Regulations, 2010 CFR
2010-07-01
... quantity of production that the nation is permitted under the Montreal Protocol or to receive from the... Article 5 allowances, for a specified control period through trades with another Party to the Protocol as... this subpart that is also listed in Appendix C, Annex 1 of the Protocol as having ratified the Beijing...
Code of Federal Regulations, 2012 CFR
2012-07-01
... this subpart (Parties to the Montreal Protocol) must agree to transfer to the person for the current control period some amount of production that the nation is permitted under the Montreal Protocol. If the... Protocol for class I, Group I through Group V and Group VII controlled substances until January 1, 1996 and...
Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaffner, Christian
2010-09-15
We present simple protocols for oblivious transfer and password-based identification which are secure against general attacks in the noisy-quantum-storage model as defined in R. Koenig, S. Wehner, and J. Wullschleger [e-print arXiv:0906.1030]. We argue that a technical tool from Koenig et al. suffices to prove security of the known protocols. Whereas the more involved protocol for oblivious transfer from Koenig et al. requires less noise in storage to achieve security, our "canonical" protocols have the advantage of being simpler to implement, and their security error is easier to control. Therefore, our protocols yield higher OT rates for many realistic noise parameters. Furthermore, a proof of security of a direct protocol for password-based identification against general noisy-quantum-storage attacks is given.
A Scenario-Based Protocol Checker for Public-Key Authentication Scheme
NASA Astrophysics Data System (ADS)
Saito, Takamichi
Security protocols provide communication security for the Internet. One of their important features is authentication with key exchange, whose correctness is a prerequisite for the security of the communication as a whole. In this paper, we introduce three attack models realized as attack scenarios, and provide an authentication-protocol checker that applies the three attack scenarios based on these models. We also use it to check two popular security protocols: Secure Shell (SSH) and Secure Socket Layer/Transport Layer Security (SSL/TLS).
Pfeiffenberger, Erik; Chaleil, Raphael A.G.; Moal, Iain H.
2017-01-01
Reliable identification of near-native poses of docked protein-protein complexes is still an unsolved problem. The intrinsic heterogeneity of protein-protein interactions is challenging for traditional biophysical or knowledge-based potentials, and the identification of many false positive binding sites is not unusual. Often, ranking protocols are based on initial clustering of docked poses followed by the application of an energy function to rank each cluster according to its lowest-energy member. Here, we present an approach to cluster ranking based not only on one molecular descriptor (e.g., an energy function) but on a large number of descriptors that are integrated in a machine learning model, whereby an extremely randomized tree classifier based on 109 molecular descriptors is trained. The protocol first locally enriches clusters with additional poses; the clusters are then characterized using features describing the distribution of molecular descriptors within the cluster, which are combined into a pairwise cluster comparison model to discriminate near-native from incorrect clusters. The results show that our approach is able to identify clusters containing near-native protein-protein complexes. In addition, we present an analysis of the descriptors with respect to their power to discriminate near-native from incorrect clusters, and of how data transformations and recursive feature elimination can improve the ranking performance. Proteins 2017; 85:528-543. © 2016 Wiley Periodicals, Inc. PMID:27935158
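The cluster-characterization and pairwise-comparison stages can be sketched schematically. Here a weighted difference score stands in for the trained extremely randomized tree classifier, descriptors are assumed to be "lower is better", and all names are illustrative.

```python
from statistics import mean, stdev

def cluster_features(descriptor_matrix):
    """Summarize the distribution of each molecular descriptor within a
    cluster: mean, spread, and best (minimum) value per descriptor column."""
    feats = []
    for col in zip(*descriptor_matrix):
        feats += [mean(col), stdev(col) if len(col) > 1 else 0.0, min(col)]
    return feats

def pairwise_better(fa, fb, weights):
    """Stand-in for the trained pairwise comparison model: a weighted score
    over feature differences decides which cluster looks more native
    (negative score favors the first cluster, as lower values are better)."""
    score = sum(w * (a - b) for w, a, b in zip(weights, fa, fb))
    return score < 0

def rank_clusters(clusters, weights):
    """Round-robin tournament: each pairwise win scores a point; clusters
    are returned as indices sorted by number of wins, best first."""
    feats = [cluster_features(c) for c in clusters]
    wins = [0] * len(clusters)
    for i in range(len(clusters)):
        for j in range(len(clusters)):
            if i != j and pairwise_better(feats[i], feats[j], weights):
                wins[i] += 1
    return sorted(range(len(clusters)), key=lambda i: -wins[i])
```

In the actual protocol the pairwise decision comes from the tree ensemble trained on 109 descriptors; this sketch only shows how cluster-level distribution features feed a pairwise comparison and how the comparisons aggregate into a ranking.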
On extremals of the entropy production by ‘Langevin-Kramers’ dynamics
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2014-05-01
We use the term 'Langevin-Kramers' dynamics for a class of stochastic differential systems exhibiting a degenerate 'metriplectic' structure. This means that the drift field can be decomposed into a symplectic and a gradient-like component with respect to a pseudo-metric tensor associated with random fluctuations affecting increments of only a subset of the degrees of freedom. Systems in this class are often encountered in applications as elementary models of Hamiltonian dynamics in a heat bath eventually relaxing to a Boltzmann steady state. Entropy production control in Langevin-Kramers models differs from the now well-understood case of Langevin-Smoluchowski dynamics for two reasons. First, the definition of entropy production stemming from fluctuation theorems specifies a cost functional which does not act coercively on all degrees of freedom of control protocols. Second, the presence of a symplectic structure imposes a non-local constraint on the class of admissible controls. Using Pontryagin control theory and restricting attention to additive noise, we show that smooth protocols attaining extremal values of the entropy production appear generically in continuous parametric families as a consequence of a trade-off between smoothness of the admissible protocols and non-coercivity of the cost functional. Uniqueness is, however, always recovered in the over-damped limit, as the extremal equations reduce at leading order to the Monge-Ampère-Kantorovich optimal mass-transport equations.
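A representative member of this class, assuming a separable Hamiltonian H(q, p) = p²/2 + V(q), friction coefficient γ and inverse temperature β, with noise acting only on the momentum:

```latex
\begin{aligned}
\mathrm{d}q_t &= p_t\,\mathrm{d}t,\\
\mathrm{d}p_t &= \bigl(-\partial_q V(q_t) - \gamma\,p_t\bigr)\,\mathrm{d}t
                 + \sqrt{2\gamma\beta^{-1}}\,\mathrm{d}w_t .
\end{aligned}
```

The drift splits into a symplectic component (p, -∂_q V) and a gradient-like component (0, -γp), while the noise is degenerate in the sense that it enters only through dp; this is the structure the abstract refers to, and the dynamics relaxes to the Boltzmann state ∝ exp(-βH).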
The ODD protocol: A review and first update
Grimm, Volker; Berger, Uta; DeAngelis, Donald L.; Polhill, J. Gary; Giske, Jarl; Railsback, Steve F.
2010-01-01
The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
Kallet, Richard H; Zhuo, Hanjing; Yip, Vivian; Gomez, Antonio; Lipnick, Michael S
2018-01-01
Spontaneous breathing trials (SBTs) and daily sedation interruptions (DSIs) reduce both the duration of mechanical ventilation and ICU length of stay (LOS). The impact of these practices in patients with ARDS has not previously been reported. We examined whether implementation of SBT/DSI protocols reduce duration of mechanical ventilation and ICU LOS in a retrospective group of subjects with ARDS at a large, urban, level 1 trauma center. All ARDS survivors from 2002 to 2016 (N = 1,053) were partitioned into 2 groups: 397 in the pre-SBT/DSI group (June 2002-December 2007) and 656 in the post-SBT/DSI group (January 2009-April 2016). Patients from 2008, during the protocol implementation period, were excluded. An additional SBT protocol database (2008-2010) was used to assess the efficacy of SBT in transitioning subjects with ARDS to unassisted breathing. Comparisons were assessed by either unpaired t tests or Mann-Whitney tests. Multiple comparisons were made using either one-way analysis of variance or Kruskal-Wallis and Dunn's tests. Linear regression modeling was used to determine variables independently associated with mechanical ventilation duration and ICU LOS; differences were considered statistically significant when P < .05. Compared to the pre-protocol group, subjects with ARDS managed with SBT/DSI protocols experienced pronounced reductions both in median (IQR) mechanical ventilation duration (14 [6-29] vs 9 [4-17] d, respectively, P < .001) and median ICU LOS (18 [8-33] vs 13 [7-22] d, respectively P < .001). In the final model, only treatment in the SBT/DSI period and higher baseline respiratory system compliance were independently associated with reduced mechanical ventilation duration and ICU LOS. Among subjects with ARDS in the SBT performance database, most achieved unassisted breathing with a median of 2 SBTs.
Evidenced-based protocols governing weaning and sedation practices were associated with both reduced mechanical ventilation duration and ICU LOS in subjects with ARDS. However, higher respiratory system compliance in the SBT/DSI cohort also contributed to these improved outcomes. Copyright © 2018 by Daedalus Enterprises.
2014-10-01
A.J. Mendez, S.L. Groah, J. Kressler. Fasting plasma glucose values may significantly underestimate prevalence of dysfunctional glycemic regulation in... taken corrective actions without undertaking additional protocol changes. There have been no screening issues since protocol amendments were approved... extending approval for the project through September 9, 2015. Enclosed is the dated/stamped Informed Consent Form, approved for one additional year until...
Real-Time System Verification by k-Induction
NASA Technical Reports Server (NTRS)
Pike, Lee S.
2005-01-01
We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
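The k-induction argument in this abstract (a base case over paths of length k, plus an inductive step over k consecutive property-satisfying states) can be made concrete. Below is a minimal explicit-state sketch; the toy transition system, property, and the unreachable "junk" states are invented for illustration, and a real tool such as SAL works symbolically over infinite-state systems rather than by enumeration.

```python
# Hedged sketch of k-induction over a tiny explicit-state transition system.
def k_induction(init, step, prop, k, space):
    # Base case: every state reachable within k steps of an initial
    # state must satisfy the property.
    frontier = set(init)
    for _ in range(k + 1):
        if any(not prop(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in step(s)}
    # Inductive step: any k consecutive property-satisfying states must
    # force the property in every successor (enumeration is exponential
    # in k, which is why the paper optimizes the model to shrink k).
    paths = [(s,) for s in space]
    for _ in range(k - 1):
        paths = [p + (t,) for p in paths for t in step(p[-1])]
    return all(prop(t)
               for p in paths if all(prop(s) for s in p)
               for t in step(p[-1]))

# Toy system: 0 -> 1 -> 2 -> 0 is the reachable cycle; the unreachable
# transition 3 -> 4 defeats plain (k = 1) induction but not k = 2.
STEP = {0: {1}, 1: {2}, 2: {0}, 3: {4}, 4: {4}}
prop = lambda s: s != 4
print(k_induction({0}, STEP.get, prop, 1, STEP))   # False: 3 -> 4 breaks induction
print(k_induction({0}, STEP.get, prop, 2, STEP))   # True: no prop-path reaches 3
```

Increasing k strengthens the induction hypothesis enough to rule out spurious counterexamples that start in unreachable states, at the cost of the exponential path enumeration the abstract mentions.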
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
An approach to verification and validation of a reliable multicasting protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
Connolly, Niamh M C; Theurey, Pierre; Adam-Vizi, Vera; Bazan, Nicolas G; Bernardi, Paolo; Bolaños, Juan P; Culmsee, Carsten; Dawson, Valina L; Deshmukh, Mohanish; Duchen, Michael R; Düssmann, Heiko; Fiskum, Gary; Galindo, Maria F; Hardingham, Giles E; Hardwick, J Marie; Jekabsons, Mika B; Jonas, Elizabeth A; Jordán, Joaquin; Lipton, Stuart A; Manfredi, Giovanni; Mattson, Mark P; McLaughlin, BethAnn; Methner, Axel; Murphy, Anne N; Murphy, Michael P; Nicholls, David G; Polster, Brian M; Pozzan, Tullio; Rizzuto, Rosario; Satrústegui, Jorgina; Slack, Ruth S; Swanson, Raymond A; Swerdlow, Russell H; Will, Yvonne; Ying, Zheng; Joselin, Alvin; Gioran, Anna; Moreira Pinho, Catarina; Watters, Orla; Salvucci, Manuela; Llorente-Folch, Irene; Park, David S; Bano, Daniele; Ankarcrona, Maria; Pizzo, Paola; Prehn, Jochen H M
2018-03-01
Neurodegenerative diseases are a spectrum of chronic, debilitating disorders characterised by the progressive degeneration and death of neurons. Mitochondrial dysfunction has been implicated in most neurodegenerative diseases, but in many instances it is unclear whether such dysfunction is a cause or an effect of the underlying pathology, and whether it represents a viable therapeutic target. It is therefore imperative to utilise and optimise cellular models and experimental techniques appropriate to determine the contribution of mitochondrial dysfunction to neurodegenerative disease phenotypes. In this consensus article, we collate details on and discuss pitfalls of existing experimental approaches to assess mitochondrial function in in vitro cellular models of neurodegenerative diseases, including specific protocols for the measurement of oxygen consumption rate in primary neuron cultures, and single-neuron, time-lapse fluorescence imaging of the mitochondrial membrane potential and mitochondrial NAD(P)H. As part of the Cellular Bioenergetics of Neurodegenerative Diseases (CeBioND) consortium ( www.cebiond.org ), we are performing cross-disease analyses to identify common and distinct molecular mechanisms involved in mitochondrial bioenergetic dysfunction in cellular models of Alzheimer's, Parkinson's, and Huntington's diseases. Here we provide detailed guidelines and protocols as standardised across the five collaborating laboratories of the CeBioND consortium, with additional contributions from other experts in the field.
Janssens, Barbara; De Visschere, Luc; van der Putten, Gert-Jan; de Lugt-Lustig, Kersti; Schols, Jos M G A; Vanobbergen, Jacques
2016-06-01
To explore the impact of a supervised implementation of an oral healthcare protocol, in addition to education, on nurses' and nurses' aides' oral health-related knowledge and attitude. A random sample of 12 nursing homes, accommodating a total of 120-150 residents, was obtained using stratified cluster sampling with replacement. The intervention included the implementation of an oral healthcare protocol and three different educational stages. One of the investigators supervised the implementation process, supported by a dental hygienist. A 34-item questionnaire was developed and validated to evaluate the knowledge and attitude of nurses and nurses' aides at baseline and 6 months after the start of the intervention. Linear mixed-model analyses were performed to explore differences in knowledge and attitude at 6 months after implementation. At baseline, no significant differences were observed between the intervention and the control group for both knowledge (p = 0.42) and attitude (p = 0.37). Six months after the start of the intervention, significant differences were found between the intervention and the control group for the variable knowledge in favour of the intervention group (p < 0.0001) but not for the variable attitude (p = 0.78). From the mixed model with attitude as the dependent variable, it can be concluded that age (p = 0.031), educational level (p = 0.009) and ward type (p = 0.014) have a significant effect. The mixed model with knowledge as the dependent variable resulted in a significant effect of the intervention (p = 0.001) and the educational level (p = 0.009). The supervised implementation of an oral healthcare protocol significantly increased the knowledge of nurses and nurses' aides. In contrast, no significant improvements could be demonstrated in attitude. © 2014 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
48 CFR 1352.235-70 - Protection of human subjects.
Code of Federal Regulations, 2012 CFR
2012-10-01
... investigation, including research development, testing and evaluation, designed to develop or contribute to... subjects research protocol, all questionnaires, surveys, advertisements, and informed consent forms... addition, if the contractor modifies a human subjects research protocol, questionnaire, survey...
48 CFR 1352.235-70 - Protection of human subjects.
Code of Federal Regulations, 2011 CFR
2011-10-01
... investigation, including research development, testing and evaluation, designed to develop or contribute to... subjects research protocol, all questionnaires, surveys, advertisements, and informed consent forms... addition, if the contractor modifies a human subjects research protocol, questionnaire, survey...
48 CFR 1352.235-70 - Protection of human subjects.
Code of Federal Regulations, 2013 CFR
2013-10-01
... investigation, including research development, testing and evaluation, designed to develop or contribute to... subjects research protocol, all questionnaires, surveys, advertisements, and informed consent forms... addition, if the contractor modifies a human subjects research protocol, questionnaire, survey...
48 CFR 1352.235-70 - Protection of human subjects.
Code of Federal Regulations, 2014 CFR
2014-10-01
... investigation, including research development, testing and evaluation, designed to develop or contribute to... subjects research protocol, all questionnaires, surveys, advertisements, and informed consent forms... addition, if the contractor modifies a human subjects research protocol, questionnaire, survey...
2011-01-01
Purpose Eddy current induced velocity offsets are of concern for accuracy in cardiovascular magnetic resonance (CMR) volume flow quantification. However, currently known theoretical aspects of eddy current behavior have not led to effective guidelines for the optimization of flow quantification sequences. This study is aimed at identifying correlations between protocol parameters and the resulting velocity error in clinical CMR flow measurements in a multi-vendor study. Methods Nine 1.5T scanners of three different types/vendors were studied. Measurements were performed on a large stationary phantom. Starting from a clinical breath-hold flow protocol, several protocol parameters were varied. Acquisitions were made in three clinically relevant orientations. Additionally, a time delay between the bipolar gradient and read-out, asymmetric versus symmetric velocity encoding, and gradient amplitude and slew rate were studied in adapted sequences as exploratory measurements beyond the protocol. Image analysis determined the worst-case offset for a typical great-vessel flow measurement. Results The results showed a great variation in offset behavior among scanners (standard deviation among samples of 0.3, 0.4, and 0.9 cm/s for the three different scanner types), even for small changes in the protocol. Considering the absolute values, none of the tested protocol settings consistently reduced the velocity offsets below the critical level of 0.6 cm/s, either for all three orientations or for all three scanner types. Using multilevel linear model analysis, oblique aortic and pulmonary slices showed systematically higher offsets than the transverse aortic slices (oblique aortic 0.6 cm/s, and pulmonary 1.8 cm/s higher than transverse aortic).
The exploratory measurements beyond the protocol yielded some new leads for further sequence development towards reduction of velocity offsets; however, those protocols were not always compatible with the time-constraints of breath-hold imaging and flow-related artefacts. Conclusions This study showed that with current systems there was no generic protocol which resulted in acceptable flow offset values. Protocol optimization would have to be performed on a per scanner and per protocol basis. Proper optimization might make accurate (transverse) aortic flow quantification possible for most scanners. Pulmonary flow quantification would still need further (offline) correction. PMID:21388521
NASA Technical Reports Server (NTRS)
Moore, J. Strother
1992-01-01
In this paper we present a formal model of asynchronous communication as a function in the Boyer-Moore logic. The function transforms the signal stream generated by one processor into the signal stream consumed by an independently clocked processor. This transformation 'blurs' edges and 'dilates' time due to differences in the phases and rates of the two clocks and the communications delay. The model can be used quantitatively to derive concrete performance bounds on asynchronous communications at ISO protocol level 1 (physical level). We develop part of the reusable formal theory that permits the convenient application of the model. We use the theory to show that a biphase mark protocol can be used to send messages of arbitrary length between two asynchronous processors. We study two versions of the protocol, a conventional one which uses cells of size 32 cycles and an unconventional one which uses cells of size 18. We conjecture that the protocol can be proved to work under our model for smaller cell sizes and more divergent clock rates but the proofs would be harder.
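As a concrete illustration of the coding scheme under study, the following is a minimal sketch of biphase mark encoding and decoding under ideal, drift-free sampling. The hard part the paper actually verifies, namely correct decoding despite edge blurring and divergent clock phases and rates, is deliberately not modeled here; the cell size is just a parameter, so both the conventional 32-cycle and the unconventional 18-cycle cells can be exercised.

```python
# Biphase mark coding with ideal (drift-free) sampling. The clock-skew
# analysis that is the paper's contribution is not modeled.
def bmc_encode(bits, cell=32, level=0):
    # Toggle at every cell boundary ("mark"); a 1-bit adds a mid-cell toggle.
    signal = []
    for b in bits:
        level ^= 1
        half = cell // 2
        signal += [level] * half
        if b:
            level ^= 1
        signal += [level] * (cell - half)
    return signal

def bmc_decode(signal, cell=32):
    # With perfect sampling, a cell whose two halves differ carries a 1.
    return [1 if signal[i] != signal[i + cell // 2] else 0
            for i in range(0, len(signal), cell)]

msg = [1, 0, 1, 1, 0, 0, 1]
assert bmc_decode(bmc_encode(msg, cell=32), cell=32) == msg  # conventional cell
assert bmc_decode(bmc_encode(msg, cell=18), cell=18) == msg  # 18-cycle cell
```

The guaranteed edge at every cell boundary is what lets a real receiver resynchronize its sampling clock once per cell, which is why the protocol tolerates bounded clock divergence at all.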
Improving practices in nanomedicine through near real-time pharmacokinetic analysis
NASA Astrophysics Data System (ADS)
Magaña, Isidro B.
More than a decade into the development of gold nanoparticles, with multiple clinical trials underway, ongoing pre-clinical research continues towards better understanding in vivo interactions. The goal is treatment optimization through improved best practices. In an effort to collect information for healthcare providers enabling informed decisions in a relevant time frame, instrumentation for real-time plasma concentration (multi-wavelength photoplethysmography) and protocols for rapid elemental analysis (energy dispersive X-Ray fluorescence) of biopsied tumor tissue have been developed in a murine model. An initial analysis, designed to demonstrate the robust nature and utility of the techniques, revealed that area under the bioavailability curve (AUC) alone does not currently inform tumor accumulation with a high degree of accuracy (R2=0.56), marginally better than injected dose (R2=0.46). This finding suggests that the control of additional experimental and physiological variables (chosen through modeling efforts) may yield more predictable tumor accumulation. Subject core temperature, blood pressure, and tumor perfusion are evaluated relative to particle uptake in a murine tumor model. New research efforts are also focused on adjuvant therapies that are employed to modify circulation parameters, including the AUC, of nanorods and gold nanoshells. Preliminary studies demonstrated a greater than 300% increase in average AUC using a reticuloendothelial blockade agent versus control groups. Given a better understanding of the relative importance of the physiological factors that influence rates of tumor accumulation, a set of experimental best practices is presented. This dissertation outlines the experimental protocols conducted, and discusses the real-world needs discovered and how these needs became specifications of developed protocols.
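The AUC this abstract discusses is the area under the plasma concentration-time curve; from discretely sampled data it is commonly estimated with the trapezoidal rule. A minimal sketch with invented sample values (not data from this dissertation):

```python
# Trapezoidal AUC estimate from sampled plasma concentrations.
# The times and concentrations below are invented for illustration.
t = [0, 5, 15, 30, 60, 120]             # minutes post-injection
c = [0.0, 8.1, 6.4, 4.0, 2.2, 0.9]      # plasma concentration (arbitrary units)

auc = sum((c[i] + c[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))
print(auc)
```

The abstract's point is that this single summary number, however it is estimated, explains tumor accumulation only weakly (R2=0.56), motivating the additional physiological covariates.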
A toxicity cost function approach to optimal CPA equilibration in tissues.
Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali
2018-02-01
There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10 -6 cm 2 /s, 1.2 × 10 -6 cm 2 /s and 1.3 × 10 -6 cm 2 /s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
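The kind of tissue mass-transport computation described can be sketched with a one-dimensional explicit finite-difference solution of Fick's second law, accumulating a simple concentration-based toxicity integral alongside it. The slab geometry, boundary conditions, toxicity exponent, and exposure time below are illustrative assumptions rather than the authors' fitted model; the diffusivity is merely of the order they report for skin.

```python
# 1-D CPA diffusion into a tissue slab (Fick's second law, explicit scheme)
# with an illustrative accumulated-toxicity integral. All parameters are
# assumptions for the sketch, not the paper's model.
def equilibrate(D=0.6e-6, L=0.1, nx=51, c_bath=1.0, t_end=3600.0):
    """Slab of thickness L (cm); both faces held at the bath concentration."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D                # explicit stability: dt <= dx^2/(2D)
    c = [0.0] * nx                        # normalized CPA concentration profile
    toxicity, t = 0.0, 0.0
    while t < t_end:
        c[0] = c[-1] = c_bath             # Dirichlet boundaries at both faces
        lap = [c[i - 1] - 2 * c[i] + c[i + 1] for i in range(1, nx - 1)]
        for i in range(1, nx - 1):
            c[i] += D * dt / (dx * dx) * lap[i - 1]
        toxicity += dt * sum(ci ** 1.6 for ci in c) / nx  # illustrative cost
        t += dt
    return c, toxicity

c, tox = equilibrate()   # 1 h exposure at the skin-like diffusivity
print(c[len(c) // 2], tox)
```

Minimizing the accumulated toxicity integral rather than the total exposure time is exactly the trade-off the abstract highlights: the two objectives favor different multistep equilibration schedules.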
A Study of Shared-Memory Mutual Exclusion Protocols Using CADP
NASA Astrophysics Data System (ADS)
Mateescu, Radu; Serwe, Wendelin
Mutual exclusion protocols are an essential building block of concurrent systems: indeed, such a protocol is required whenever a shared resource has to be protected against concurrent non-atomic accesses. Hence, many variants of mutual exclusion protocols exist in the shared-memory setting, such as Peterson's or Dekker's well-known protocols. Although the functional correctness of these protocols has been studied extensively, relatively little attention has been paid to their non-functional aspects, such as their performance in the long run. In this paper, we report on experiments with the performance evaluation of mutual exclusion protocols using Interactive Markov Chains. Steady-state analysis provides an additional criterion for comparing protocols, which complements the verification of their functional properties. We also carefully re-examined the functional properties, whose accurate formulation as temporal logic formulas in the action-based setting turns out to be quite involved.
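Peterson's protocol, the simplest member of the family this paper studies, is short enough to run directly. The sketch below shows only the functional behavior (mutual exclusion over a shared counter); the paper's steady-state performance analysis with Interactive Markov Chains is well beyond a few lines of Python, and the busy-wait here relies on CPython's effectively sequentially consistent scheduling.

```python
# Runnable sketch of Peterson's two-process mutual-exclusion protocol.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # tie-breaker: which process must wait on conflict
count = 0               # shared resource the protocol protects

def process(i, iterations=500):
    global turn, count
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                        # announce intent
        turn = other                          # give priority away
        while flag[other] and turn == other:
            pass                              # busy-wait until safe to enter
        count += 1                            # critical section
        flag[i] = False                       # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert count == 1000    # no increment lost: mutual exclusion held
```

The long-run metric the paper evaluates, roughly how often a process gets productive access versus spinning, is precisely what the busy-wait loop above makes visible.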
Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Crawford, Aladsair J.; Viswanathan, Vilayanur V.
2014-06-01
The Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems (PNNL-22010) was first issued in November 2012 as a first step toward providing a foundational basis for developing an initial standard for the uniform measurement and expression of energy storage system (ESS) performance. Its subsequent use in the field and review by the protocol working group and most importantly the users' subgroup and the thermal subgroup has led to the fundamental modifications reflected in this update of the 2012 Protocol. As an update of the 2012 Protocol, this document (the June 2014 Protocol) is intended to supersede its predecessor and be used as the basis for measuring and expressing ESS performance. The foreword provides general and specific details about what additions, revisions, and enhancements have been made to the 2012 Protocol and the rationale for them in arriving at the June 2014 Protocol.
Chen, Min; Sun, Jianwei
2017-09-18
The use of additives for organic synthesis has become a common tactic to improve the outcome of organic reactions. Herein, by using an organocatalytic process for the synthesis of chiral diarylmethyl alkynes as a platform, we describe how an additive is involved in the improvement of the process. The evolution of an excellent synthetic protocol has been achieved in three stages, from 1) initially no catalyst turnover, to 2) good conversion and enantioselectivity with a superior additive, and eventually 3) even better efficiency and selectivity without an additive. This study is an important and rare demonstration that understanding the role of an additive can be so beneficial as to obviate the need for the additive. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Central and peripheral hemodynamic responses to passive limb movement: the role of arousal
Venturelli, Massimo; Amann, M.; McDaniel, J.; Trinity, J. D.; Fjeldstad, A. S.
2012-01-01
The exact role of arousal in central and peripheral hemodynamic responses to passive limb movement in humans is unclear but has been proposed as a potential contributor. Thus, we used a human model with no lower limb afferent feedback to determine the role of arousal on the hemodynamic response to passive leg movement. In nine people with a spinal cord injury, we compared central and peripheral hemodynamic and ventilatory responses to one-leg passive knee extension with and without visual feedback (M+VF and M-VF, respectively) as well as in a third trial with no movement or visual feedback but the perception of movement (F). Ventilation (V̇e), heart rate, stroke volume, cardiac output, mean arterial pressure, and leg blood flow (LBF) were evaluated during the three protocols. V̇e increased rapidly from baseline in M+VF (55 ± 11%), M-VF (63 ± 13%), and F (48 ± 12%) trials. Central hemodynamics (heart rate, stroke volume, cardiac output, and mean arterial pressure) were unchanged in all trials. LBF increased from baseline by 126 ± 18 ml/min in the M+VF protocol and 109 ± 23 ml/min in the M-VF protocol but was unchanged in the F protocol. Therefore, with the use of a model that is devoid of afferent feedback from the legs, the results of this study reveal that, although arousal is invoked by passive movement or the thought of passive movement, as evidenced by the increase in V̇e, there is no central or peripheral hemodynamic impact of this increased neural activity. Additionally, this study revealed that a central hemodynamic response is not an obligatory component of movement-induced LBF. PMID:22003056
NASA Astrophysics Data System (ADS)
Sun, Xiaole; Djordjevic, Ivan B.; Neifeld, Mark A.
2016-03-01
Free-space optical (FSO) channels can be characterized by random power fluctuations due to atmospheric turbulence, which is known as scintillation. Weak coherent source based FSO quantum key distribution (QKD) systems suffer from the scintillation effect because during deep channel fading the expected detection rate drops, which gives an eavesdropper the opportunity to gain additional information about the protocol by performing a photon number splitting (PNS) attack and blocking single-photon pulses without changing the QBER. To overcome this problem, in this paper, we study a large-alphabet QKD protocol, which is achieved by using a pulse-position modulation (PPM)-like approach that utilizes the time-frequency uncertainty relation of the weak coherent photon state, called here the TF-PPM-QKD protocol. We first complete a finite-size analysis for the TF-PPM-QKD protocol to give practical bounds against non-negligible statistical fluctuation due to finite resources in practical implementations. The impact of scintillation in the strong atmospheric turbulence regime is then studied. To overcome the secret key rate degradation of TF-PPM-QKD caused by scintillation, we propose an adaptation method that compensates for the scintillation impact. By changing the source intensity according to the channel state information (CSI), obtained over a classical channel, the adaptation method improves the performance of the QKD system with respect to the secret key rate. The CSI of a time-varying channel can be predicted using stochastic models, such as autoregressive (AR) models. Based on the channel state predictions, we change the source intensity to the optimal value to achieve a higher secret key rate. We demonstrate that the improvement of the adaptation method is dependent on the prediction accuracy.
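The adaptation idea lends itself to a small numerical sketch: fit an AR(1) model to past log-transmittance samples, predict the next channel state, and scale the source intensity toward a fixed target at the receiver. The correlated log-normal fading model, the AR order, and the target value below are illustrative assumptions, not the paper's parameters.

```python
# AR(1)-predicted channel state used to adapt source intensity (sketch).
import math
import random

random.seed(1)

def channel(n, rho=0.9, sigma=0.3):
    # Correlated log-normal fading as a stand-in for scintillation.
    x, h = 0.0, []
    for _ in range(n):
        x = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, sigma)
        h.append(math.exp(x))
    return h

def fit_ar1(series):
    # Least-squares AR(1) coefficient (regression through the origin).
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(v * v for v in series[:-1])
    return num / den

h = channel(2000)
log_h = [math.log(v) for v in h]
rho_hat = fit_ar1(log_h[:1000])                          # train on first half

target = 0.5                  # receiver-side mean photon number (assumed)
mu_fixed = target / math.exp(sum(log_h[:1000]) / 1000)   # static design point

errors_fixed, errors_adapted = [], []
for t in range(1000, 1999):
    h_pred = math.exp(rho_hat * log_h[t])   # one-step channel prediction
    mu = target / h_pred                    # adapt the source intensity
    errors_adapted.append(abs(mu * h[t + 1] - target))
    errors_fixed.append(abs(mu_fixed * h[t + 1] - target))

print(sum(errors_adapted) < sum(errors_fixed))
```

Because the AR(1) prediction removes the correlated part of the fading, the adapted intensity tracks the target with only the innovation variance left over, which is the mechanism behind the paper's claim that improvement depends on prediction accuracy.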
A software defined RTU multi-protocol automatic adaptation data transmission method
NASA Astrophysics Data System (ADS)
Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng
2018-02-01
The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different application-layer communication protocols, which complicates information analysis and communication networking. We therefore introduced the idea of software-defined hardware, abstracted the common features of mainstream RTU application-layer communication protocols, and proposed a unified common protocol model. Various application-layer protocol algorithms are then modularized according to this model; their executable code is labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. Driven by the configuration commands that initialize the RTU communication system, the RTU can dynamically assemble and load the various application-layer protocols and efficiently transport sensor data from the RTU to the central station, while the sensors' data acquisition protocols and the external communication terminals remain unchanged.
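The software-defined approach described here can be sketched as a registry of interchangeable application-layer handlers bound at run time by a configuration command. The protocol names, frame layouts, and command syntax below are invented examples, and a real RTU would load compiled modules from flash rather than Python functions.

```python
# Sketch of runtime protocol selection behind a uniform data model.
import struct

PROTOCOLS = {}

def protocol(name):
    # Decorator registering an application-layer frame parser under a name.
    def register(fn):
        PROTOCOLS[name] = fn
        return fn
    return register

@protocol("hydro-ascii")      # hypothetical comma-separated text frame
def parse_ascii(frame: bytes):
    station, level, flow = frame.decode().split(",")
    return {"station": station, "level_m": float(level), "flow_m3s": float(flow)}

@protocol("hydro-bin")        # hypothetical big-endian binary frame
def parse_bin(frame: bytes):
    station, level, flow = struct.unpack(">H2f", frame)
    return {"station": str(station), "level_m": level, "flow_m3s": flow}

class RTU:
    def __init__(self, config_command: str):
        # e.g. "SET PROTO hydro-ascii" binds a handler at initialization.
        self.parse = PROTOCOLS[config_command.split()[-1]]

    def forward(self, frame: bytes):
        # Acquisition and forwarding stay unchanged across wire formats.
        return self.parse(frame)

rtu = RTU("SET PROTO hydro-ascii")
print(rtu.forward(b"ST07,3.20,15.5"))
```

Because every handler returns the same normalized record, the central station never needs to know which wire format a given field device speaks, which is the point of the unified common protocol model.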
Nguyen, An Thi-Binh; Nigen, Michaël; Jimenez, Luciana; Ait-Abderrahim, Hassina; Marchesseau, Sylvie; Picart-Palmade, Laetitia
2018-01-15
Dextran or xanthan were used as model exocellular polysaccharides (EPS) to compare the extraction efficiency of EPS from skim milk acid gels using three different protocols. Extraction yields, residual protein concentrations and the macromolecular properties of extracted EPS were determined. For both model EPS, the highest extraction yield (∼80%) was obtained when samples were heated in acidic conditions at the first step of extraction (Protocol 1). Protocols that contained steps of acid/ethanol precipitation without heating (Protocols 2 and 3) showed lower extraction yields (∼55%) but allowed better preservation of the EPS macromolecular properties. Changing the pH of acid gels up to 7 before extraction (Protocol 3) improved the extraction yield of anionic EPS without effect on the macromolecular properties of EPS. Protocol 1 was then applied for the quantification of EPS produced during the yogurt fermentation, while Protocol 3 was dedicated to their macromolecular characterization. Copyright © 2017 Elsevier Ltd. All rights reserved.
W. Sutton; E.M. Hansen; P. Reeser; A. Kanaskie
2008-01-01
Oregon was a participant in the pilot test of the national stream monitoring protocol for SOD. We routinely and continuously monitor about 50 streams in and near the SOD quarantine area in southwest Oregon using foliage baits. For the national protocol, we added six additional streams beyond the area of known infestation, and compared results from different diagnostic...
Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young
2016-01-01
Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. A careful investigation in this paper shows that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.
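The mutual-authentication goal that this paper formalizes with BAN logic can be illustrated, though not with the paper's ECC/smartcard construction, by a minimal nonce-based challenge-response exchange: each side proves knowledge of a shared key by MACing the peer's fresh nonce. Everything below (the key setup, message layout, and role tags) is an invented simplification.

```python
# Minimal two-way challenge-response sketch of mutual authentication.
import hashlib
import hmac
import os

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

key = os.urandom(32)   # secret shared by the user's smartcard and the server

# 1. user -> server: fresh nonce Nu
nu = os.urandom(16)
# 2. server -> user: fresh nonce Ns plus a proof binding Nu
ns = os.urandom(16)
server_proof = mac(key, b"S", nu, ns)
# 3. user checks the server's proof, then answers with its own over Ns
assert hmac.compare_digest(server_proof, mac(key, b"S", nu, ns))
user_proof = mac(key, b"U", ns, nu)
# 4. server checks the user's proof: both sides are now authenticated
assert hmac.compare_digest(user_proof, mac(key, b"U", ns, nu))
print("mutual authentication complete")
```

The fresh nonces are what defeat replay, and the distinct role tags prevent a reflection attack; properties of this general shape are exactly what BAN logic and AVISPA are used to check in the paper's far richer protocol.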
A Survey on Underwater Acoustic Sensor Network Routing Protocols.
Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina
2016-03-22
Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research.
A Survey on Underwater Acoustic Sensor Network Routing Protocols
Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina
2016-01-01
Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research. PMID:27011193
2013-01-01
Background Dual sensory loss (DSL) has a negative impact on health and wellbeing, and its prevalence is expected to increase due to demographic aging. However, specialized care or rehabilitation programs for DSL are scarce. To date, low vision rehabilitation has not sufficiently targeted concurrent impairments in vision and hearing. This study aims to 1) develop a DSL protocol (for occupational therapists working in low vision rehabilitation) which focuses on optimal use of the senses and teaches DSL patients and their communication partners to use effective communication strategies, and 2) describe the multicenter parallel randomized controlled trial (RCT) designed to test the effectiveness and cost-effectiveness of the DSL protocol. Methods/design To develop a DSL protocol, literature was reviewed and content was discussed with professionals in eye/ear care (interviews/focus groups) and DSL patients (interviews). A pilot study was conducted to test and confirm the DSL protocol. In addition, a two-armed international multi-center RCT will evaluate the effectiveness and cost-effectiveness of the DSL protocol compared to waiting list controls, in 124 patients in low vision rehabilitation centers in the Netherlands and Belgium. Discussion This study provides a treatment protocol for rehabilitation of DSL within low vision rehabilitation, which aims to be a valuable addition to general low vision rehabilitation care. Trial registration Netherlands Trial Register (NTR) identifier: NTR2843 PMID:23941667
Report on July 2015 Additional Protocol Coordinators Best Practices Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gitau, Ernest T.N.; Burbank, Roberta L.; Finch, Valerie A.
After 10 years of implementation experience, the Office of Nonproliferation and Arms Control (NPAC) within the Department of Energy/National Nuclear Security Administration (DOE/NNSA) conducted the Additional Protocol (AP) Coordinators Best Practices Workshop at Oak Ridge National Laboratory from July 29-30, 2015. The goal of this workshop was to identify implementation best practices, lessons learned, and compliance challenges from the various Additional Protocol Coordinators (APCs) at each laboratory in the DOE/NNSA complex and associated sites. The workshop provided the opportunity for participants to share their insights and establish networks that APCs can utilize to continue to discuss challenges (new and old), identify best practices, and enhance communication and coordination for reporting multi-lab research projects during review activities. Workshop participants included DOE/NNSA HQ, laboratory and site APCs, seasoned experts, members of the original implementation outreach team, and Field Element and site security representatives.
A Surgical Procedure for Resecting the Mouse Rib: A Model for Large-Scale Long Bone Repair
Funnell, John W.; Thein, Thu Zan Tun; Mariani, Francesca V.
2015-01-01
This protocol introduces researchers to a new model for large-scale bone repair utilizing the mouse rib. The procedure details the following: preparation of the animal for surgery, opening the thoracic body wall, exposing the desired rib from the surrounding intercostal muscles, excising the desired section of rib without inducing a pneumothorax, and closing the incisions. Compared to the bones of the appendicular skeleton, the ribs are highly accessible. In addition, no internal or external fixator is necessary since the adjacent ribs provide a natural fixation. The surgery uses commercially available supplies, is straightforward to learn, and well-tolerated by the animal. The procedure can be carried out with or without removing the surrounding periosteum, and therefore the contribution of the periosteum to repair can be assessed. Results indicate that if the periosteum is retained, robust repair occurs in 1 - 2 months. We expect that use of this protocol will stimulate research into rib repair and that the findings will facilitate the development of new ways to stimulate bone repair in other locations around the body. PMID:25651082
Upadhyay, Ushma D; Johns, Nicole E; Combellick, Sarah L; Kohn, Julia E; Keder, Lisa M; Roberts, Sarah C M
2016-08-01
In February 2011, an Ohio law took effect mandating use of the United States Food and Drug Administration (FDA)-approved protocol for mifepristone, which is used with misoprostol for medication abortion. Other state legislatures have passed or enacted similar laws requiring use of the FDA-approved protocol for medication abortion. The objective of this study is to examine the association of this legal change with medication abortion outcomes and utilization. We used a retrospective cohort design, comparing outcomes of medication abortion patients in the prelaw period to those in the postlaw period. Sociodemographic and clinical chart data were abstracted from all medication abortion patients from 1 y prior to the law's implementation (January 2010-January 2011) to 3 y post implementation (February 2011-October 2014) at four abortion-providing health care facilities in Ohio. Outcome data were analyzed for all women undergoing abortion at ≤49 d gestation during the study period. The main outcomes were as follows: need for additional intervention following medication abortion (such as aspiration, repeat misoprostol, and blood transfusion), frequency of continuing pregnancy, reports of side effects, and the proportion of abortions that were medication abortions (versus other abortion procedures). Among the 2,783 medication abortions ≤49 d gestation, 4.9% (95% CI: 3.7%-6.2%) in the prelaw and 14.3% (95% CI: 12.6%-16.0%) in the postlaw period required one or more additional interventions. Women obtaining a medication abortion in the postlaw period had three times the odds of requiring an additional intervention as women in the prelaw period (adjusted odds ratio [AOR] = 3.11, 95% CI: 2.27-4.27). In a mixed effects multivariable model that uses facility-months as the unit of analysis to account for lack of independence by site, we found that the law change was associated with a 9.4% (95% CI: 4.0%-18.4%) absolute increase in the rate of requiring an additional intervention. 
The most common subsequent intervention in both periods was an additional misoprostol dose, most commonly administered to treat incomplete abortion. The percentage of women requiring two or more follow-up visits increased from 4.2% (95% CI: 3.0%-5.3%) in the prelaw period to 6.2% (95% CI: 5.5%-8.0%) in the postlaw period (p = 0.003). Continuing pregnancy was rare (0.3%). Overall, 12.6% of women reported at least one side effect during their medication abortion: 8.4% (95% CI: 6.8%-10.0%) in the prelaw period and 15.6% (95% CI: 13.8%-17.3%) in the postlaw period (p < 0.001). Medication abortions fell from 22% (95% CI: 20.8%-22.3%) of all abortions the year before the law went into effect (2010) to 5% (95% CI: 4.8%-5.6%) 3 y after (2014) (p < 0.001). The average patient charge increased from US$426 in 2010 to US$551 in 2014, representing a 16% increase after adjusting for inflation in medical prices. The primary limitation of the study is that it was a pre/post-observational study with no control group that was not exposed to the law. Ohio law required use of a medication abortion protocol that is associated with a greater need for additional intervention, more visits, more side effects, and higher costs for women relative to the evidence-based protocol. There is no evidence that the change in law led to improved abortion outcomes. Indeed, our findings suggest the opposite. In March 2016, the FDA protocol was updated, so Ohio providers may now legally provide current evidence-based protocols. However, this law is still in place and bans physicians from using mifepristone based on any new developments in clinical research as best practices continue to be updated.
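The headline effect size above can be reproduced from simple 2x2 counts. A minimal sketch of an unadjusted odds ratio with a Wald 95% confidence interval; the counts below are invented for illustration, not the study's chart data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI.
    a/b: events/non-events in the exposed (postlaw) group;
    c/d: events/non-events in the reference (prelaw) group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 20/80 needed an intervention postlaw, 10/90 prelaw.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The study's adjusted odds ratio additionally controls for covariates via multivariable regression, which this sketch does not attempt.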
Thomas, Christoph; Krauss, Bernhard; Ketelsen, Dominik; Tsiflikas, Ilias; Reimann, Anja; Werner, Matthias; Schilling, David; Hennenlotter, Jörg; Claussen, Claus D; Schlemmer, Heinz-Peter; Heuschmid, Martin
2010-07-01
In dual energy (DE) computed tomography (CT), spectral shaping by additional filtration of the high-energy spectrum can theoretically improve dual energy contrast. The aim of this in vitro study was to examine the influence of an additional tin filter on the differentiation of human urinary calculi by dual energy CT. A total of 36 pure human urinary calculi (uric acid, cystine, calcium oxalate monohydrate, calcium oxalate dihydrate, carbonate apatite, brushite; average diameter 10.5 mm) were placed in a phantom and imaged with two dual-source CT scanners. One scanner was equipped with an additional tin (Sn) filter. Different combinations of tube voltages (140/80 kV, 140/100 kV, Sn140/100 kV, Sn140/80 kV, with Sn140 referring to 140 kV with the tin filter) were applied. Tube currents were adapted to yield comparable dose indices. Low- and high-energy images were reconstructed. The calculi were segmented semiautomatically in the datasets, and DE ratios (attenuation@low_kV/attenuation@high_kV) were calculated for each calculus. DE contrasts (DE-ratio_material1/DE-ratio_material2) were computed for uric acid, cystine, and calcified calculi and compared between the combinations of tube voltages. Using exclusively DE ratios, all uric acid, cystine, and calcified calculi (as a group) could be differentiated in all protocols; the calcified calculi could not be differentiated among each other in any examination protocol. The highest DE ratios and DE contrasts were measured for the Sn140/80 protocol (53%-62% higher DE contrast than in the 140/80 kV protocol without additional filtration). The DE ratios and DE contrasts of the 80/140 kV and 100/Sn140 kV protocols were comparable. Uric acid, cystine, and calcified calculi could be reliably differentiated by any of the protocols.
A dose-neutral gain of DE contrast was found in the Sn-filter protocols, which might improve the differentiation of smaller calculi (Sn140/80 kV) and improve image quality and calculi differentiation in larger patients (Sn140/100 kV). However, even with the improved spectral separation of the Sn-filter protocols, the DE ratios of calcified calculi are not sufficiently distinct to allow a differentiation within this group.
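The dual-energy quantities the study compares are simple ratios. A small sketch of the definitions given in the abstract; the attenuation values are illustrative placeholders, not measurements from the study:

```python
def de_ratio(hu_low_kv, hu_high_kv):
    """DE ratio = attenuation at the low-kV spectrum / attenuation at the high-kV spectrum."""
    return hu_low_kv / hu_high_kv

def de_contrast(ratio_material1, ratio_material2):
    """DE contrast = DE ratio of material 1 / DE ratio of material 2."""
    return ratio_material1 / ratio_material2

# Hypothetical HU values: uric acid changes little between spectra,
# calcified stones attenuate markedly more at low kV.
uric_acid = de_ratio(400.0, 450.0)
calcified = de_ratio(900.0, 600.0)
print(f"DE contrast calcified vs. uric acid: {de_contrast(calcified, uric_acid):.4f}")
```

A larger DE contrast between two materials makes the ratio threshold separating them more robust, which is why the tin-filtered Sn140/80 protocol's 53%-62% contrast gain matters.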
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Steady-State Groundwater Flow Model for Great Neck, Long Island, New York
NASA Astrophysics Data System (ADS)
Chowdhury, S. H.; Klinger, D.; Sallemi, B. M.
2001-12-01
This paper describes a comprehensive groundwater flow model for the Great Neck section of Long Island, New York. The hydrogeology of this section of Long Island is dominated by a buried erosional valley consisting of sediments comparable to the North Shore Confining Unit. This formation cross-cuts, and is thus in direct hydraulic connection with, the Upper Glacial, North Shore Confining Unit, Raritan Clay, and Lloyd aquifers. The Magothy aquifer is present only in remote southern sections of the model area. In addition, various lenses of coarser material from the overlying Upper Glacial aquifer are dispersed throughout the area. Data collection consisted of gathering various parameter values from existing USGS reports. Hydraulic conductivity, porosity, estimated recharge values, evapotranspiration, well locations, and water level data have all been gathered from the USGS office located in Coram, New York. Appropriate modeling protocol was followed throughout the modeling process. The computer code utilized for solving this numerical model is Visual MODFLOW, developed by Waterloo Hydrogeologic. Calibration and a complete sensitivity analysis were conducted. Modeled results indicate that the groundwater flow direction is consistent with what is observed onsite. In addition, the model is consistent in returning favorable parameter results relative to historical data.
Draft Plan to Develop Non-Intrusive Load Monitoring Test Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.
2015-09-29
This document presents a Draft Plan proposed to develop a common test protocol that can be used to evaluate the performance requirements of Non-Intrusive Load Monitoring (NILM). Development of the test protocol will focus on providing a consistent method that can be used to quantify and compare the performance characteristics of NILM products. Elements of the protocols include specifications for appliances to be used, metrics, instrumentation, and a procedure to simulate appliance behavior during tests. In addition, three priority use cases for NILM will be identified and their performance requirements will be specified.
Guedes-da-Silva, F. H.; Batista, D. G. J.; da Silva, C. F.; Meuser, M. B.; Simões-Silva, M. R.; de Araújo, J. S.; Ferreira, C. G.; Moreira, O. C.; Britto, C.; Lepesheva, G. I.
2015-01-01
The lack of translation between preclinical assays and clinical trials for novel therapies for Chagas disease (CD) indicates a need for more feasible and standardized protocols and experimental models. Here, we investigated the effects of treatment with benznidazole (Bz) and with the potent experimental T. cruzi CYP51 inhibitor VNI in mouse models of Chagas disease by using different animal genders and parasite strains and employing distinct types of therapeutic schemes. Our findings confirm that female mice are less vulnerable to the infection than males, show that male models are less susceptible to treatment with both Bz and VNI, and thus suggest that male models are much more suitable for selection of the most promising antichagasic agents. Additionally, we have found that preventive protocols (compound given at 1 dpi) result in higher treatment success rates and thus should be avoided during advanced steps of in vivo trials of novel anti-T. cruzi drug candidates. Other considerations are the relevance of immunosuppression methods for verifying the therapeutic profile of novel compounds and the usefulness of molecular diagnostic tools (quantitative PCR) for ascertaining compound efficacy in experimental animals. Our study aims to contribute to the development of more reliable methods and decision gates for in vivo assays of novel antiparasitic compounds in order to move them from preclinical to clinical trials for CD. PMID:26416857
Toward Synthesis, Analysis, and Certification of Security Protocols
NASA Technical Reports Server (NTRS)
Schumann, Johann
2004-01-01
Implemented security protocols are basically pieces of software which are used to (a) authenticate the other communication partners, (b) establish a secure communication channel between them (using insecure communication media), and (c) transfer data between the communication partners in such a way that these data are available only to the desired receiver, and not to anyone else. Such an implementation usually consists of the following components: the protocol engine, which controls the sequence in which the messages of the protocol are sent over the network and which controls the assembly/disassembly and processing (e.g., decryption) of the data; the cryptographic routines to actually encrypt or decrypt the data (using given keys); and the interface to the operating system and to the application. For the correct working of such a security protocol, all of these components must work flawlessly. Many formal-methods-based techniques for the analysis of security protocols have been developed. They range from specific logics (e.g., BAN logic [4] or higher-order logics [12]) to model-checking approaches [2]. In each approach, the analysis tries to prove that nobody (or at least no modeled intruder) can get access to secret data; otherwise, a scenario illustrating the attack may be produced. Despite the seeming simplicity of security protocols ("only" a few messages are sent between the protocol partners in order to ensure a secure communication), many flaws have been detected. Unfortunately, even a perfect protocol engine does not guarantee flawless working of a security protocol, as incidents show. Many break-ins and security vulnerabilities are caused by exploiting errors in the implementation of the protocol engine or the underlying operating system. Attacks using buffer overflows are a very common class of such attacks. Errors in the implementation of exception or error handling can open up additional vulnerabilities.
For example, on a website with a log-in screen, multiple tries with invalid passwords produced the expected error message (too many retries) but nevertheless let the user pass. Finally, security can be compromised by silly implementation bugs or design decisions. In one commercial VPN software, all calls to the encryption routines were accidentally replaced by stubs, probably during factory testing. The product worked nicely, and the error (an open VPN) would have gone undetected if a team member had not inspected the low-level traffic out of curiosity. Also, the use of secret proprietary encryption routines can backfire, because such algorithms often exhibit weaknesses which can be exploited easily (see, e.g., DVD encoding). Summarizing, there is a large number of possibilities to make errors which can compromise the security of a protocol. In today's world, with short time-to-market and the use of security protocols in open and hostile networks for safety-critical applications (e.g., power or air-traffic control), such slips could lead to catastrophic situations. Thus, formal methods and automatic reasoning techniques should not be used just for the formal proof of the absence of an attack; they ought to be used to provide an end-to-end tool-supported framework for security software. With such an approach, all required artifacts (code, documentation, test cases), formal analyses, and reliable certification will be generated automatically, given a single, high-level specification. By a combination of program synthesis, formal protocol analysis, certification, and proof-carrying code, this goal is within practical reach, since all the important technologies for such an approach actually exist and only need to be assembled in the right way.
F-RAG: Generating Atomic Coordinates from RNA Graphs by Fragment Assembly.
Jain, Swati; Schlick, Tamar
2017-11-24
Coarse-grained models represent attractive approaches to analyze and simulate ribonucleic acid (RNA) molecules, for example, for structure prediction and design, as they simplify the RNA structure to reduce the conformational search space. Our structure prediction protocol RAGTOP (RNA-As-Graphs Topology Prediction) represents RNA structures as tree graphs and samples graph topologies to produce candidate graphs. However, for more detailed study and analysis, construction of atomic from coarse-grained models is required. Here we present our graph-based fragment assembly algorithm (F-RAG) to convert candidate three-dimensional (3D) tree graph models produced by RAGTOP into atomic structures. We use our related RAG-3D utilities to partition graphs into subgraphs and search for structurally similar atomic fragments in a data set of RNA 3D structures. The fragments are edited and superimposed using common residues, full atomic models are scored using RAGTOP's knowledge-based potential, and the geometries of top-scoring models are optimized. To evaluate our models, we assess all-atom RMSDs and Interaction Network Fidelity (a measure of residue interactions) with respect to experimentally solved structures and compare our results to other fragment assembly programs. For a set of 50 RNA structures, we obtain atomic models with reasonable geometries and interactions, particularly good for RNAs containing junctions. Additional improvements to our protocol and databases are outlined. These results provide a good foundation for further work on RNA structure prediction and design applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
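Model evaluation above relies on all-atom RMSD after optimal superposition. A minimal, self-contained sketch of that metric using the Kabsch algorithm; the coordinates are toy values, and a real pipeline would parse PDB files rather than hard-code arrays:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two Nx3 coordinate arrays after optimal rigid superposition."""
    P = P - P.mean(axis=0)          # center both point sets on their centroids
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])      # guard against an improper rotation (reflection)
    R = Vt.T @ D @ U.T              # optimal rotation mapping P onto Q
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))
```

A rigidly rotated copy of a structure should give an RMSD near zero, while structural differences that no rotation can remove show up as a positive residual.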
Advertisement-Based Energy Efficient Medium Access Protocols for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Ray, Surjya Sarathi
One of the main challenges that prevents the large-scale deployment of Wireless Sensor Networks (WSNs) is providing the applications with the required quality of service (QoS) given the sensor nodes' limited energy supplies. WSNs are an important tool in supporting applications ranging from environmental and industrial monitoring, to battlefield surveillance and traffic control, among others. Most of these applications require sensors to function for long periods of time without human intervention and without battery replacement. Therefore, energy conservation is one of the main goals for protocols for WSNs. Energy conservation can be performed in different layers of the protocol stack. In particular, as the medium access control (MAC) layer can access and control the radio directly, large energy savings are possible through intelligent MAC protocol design. To maximize the network lifetime, MAC protocols for WSNs aim to minimize idle listening of the sensor nodes, packet collisions, and overhearing. Several approaches such as duty cycling and low power listening have been proposed at the MAC layer to achieve energy efficiency. In this thesis, I explore the possibility of further energy savings through the advertisement of data packets in the MAC layer. In the first part of my research, I propose Advertisement-MAC or ADV-MAC, a new MAC protocol for WSNs that utilizes the concept of advertising for data contention. This technique lets nodes listen dynamically to any desired transmission and sleep during transmissions not of interest. This minimizes the energy lost in idle listening and overhearing while maintaining an adaptive duty cycle to handle variable loads. Additionally, ADV-MAC enables energy efficient MAC-level multicasting. An analytical model for the packet delivery ratio and the energy consumption of the protocol is also proposed. The analytical model is verified with simulations and is used to choose an optimal value of the advertisement period.
Simulations show that the optimized ADV-MAC provides substantial energy gains (50% to 70% less than other MAC protocols for WSNs such as T-MAC and S-MAC for the scenarios investigated) while faring as well as T-MAC in terms of packet delivery ratio and latency. Although ADV-MAC provides substantial energy gains over S-MAC and T-MAC, it is not optimal in terms of energy savings because contention is done twice -- once in the Advertisement Period and once in the Data Period. In the next part of my research, the second contention in the Data Period is eliminated and the advantages of contention-based and TDMA-based protocols are combined to form Advertisement based Time-division Multiple Access (ATMA), a distributed TDMA-based MAC protocol for WSNs. ATMA utilizes the bursty nature of the traffic to prevent energy waste through advertisements and reservations for data slots. Extensive simulations and qualitative analysis show that with bursty traffic, ATMA outperforms contention-based protocols (S-MAC, T-MAC and ADV-MAC), a TDMA based protocol (TRAMA) and hybrid protocols (Z-MAC and IEEE 802.15.4). ATMA provides energy reductions of up to 80%, while providing the best packet delivery ratio (close to 100%) and latency among all the investigated protocols. Simulations alone cannot reflect many of the challenges faced by real implementations of MAC protocols, such as clock-drift, synchronization, imperfect physical layers, and irregular interference from other transmissions. Such issues may cripple a protocol that otherwise performs very well in software simulations. Hence, to validate my research, I conclude with a hardware implementation of the ATMA protocol on SORA (Software Radio), developed by Microsoft Research Asia. SORA is a reprogrammable Software Defined Radio (SDR) platform that satisfies the throughput and timing requirements of modern wireless protocols while utilizing the rich general purpose PC development environment. 
Experimental results obtained from the hardware implementation of ATMA closely mirror the simulation results obtained for a single hop network with 4 nodes.
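The energy argument behind advertisement-based MACs can be illustrated with a back-of-the-envelope calculation: a node that hears a short advertisement and then sleeps through transmissions not addressed to it avoids the overhearing cost that a node listening for the whole cycle pays. The power figures and traffic fraction below are hypothetical, not taken from the thesis:

```python
RX_POWER_MW = 60.0      # hypothetical radio power in receive/idle-listen mode
SLEEP_POWER_MW = 0.1    # hypothetical radio power while sleeping

def energy_mj(listen_s, sleep_s):
    """Energy (mJ) for a given split of one cycle between listening and sleeping."""
    return RX_POWER_MW * listen_s + SLEEP_POWER_MW * sleep_s

cycle = 1.0               # seconds per cycle
frac_of_interest = 0.2    # fraction of traffic actually addressed to this node
adv_window = 0.05         # short advertisement period every node stays awake for

# Node without advertisements: listens (and overhears) the whole cycle.
plain = energy_mj(listen_s=cycle, sleep_s=0.0)

# Advertisement-based node: hears the advertisement window, then stays awake
# only for the fraction of the data period addressed to it.
adv = energy_mj(
    listen_s=adv_window + frac_of_interest * (cycle - adv_window),
    sleep_s=(1 - frac_of_interest) * (cycle - adv_window),
)
print(f"plain listen: {plain:.1f} mJ/cycle, advertisement-based: {adv:.1f} mJ/cycle")
```

Under these toy numbers the advertisement-based node spends a small fixed cost per cycle (the advertisement window) to avoid a much larger overhearing cost, which is the same trade the thesis quantifies with its analytical model and simulations.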
[Analyzing consumer preference by using the latest semantic model for verbal protocol].
Tamari, Yuki; Takemura, Kazuhisa
2012-02-01
This paper examines consumers' preferences for competing brands by using a preference model of verbal protocols. Participants were 150 university students, who reported their opinions and feelings about McDonald's and Mos Burger (competing hamburger restaurant chains in Japan). Their verbal protocols were analyzed using the singular value decomposition method, and latent decision frames were estimated. Verbal protocols with large values in the decision frames can be interpreted as attributes that consumers emphasize. Based on the estimated decision frames, we predicted consumers' preferences using logistic regression. The results indicate that the decision frames projected from the verbal protocol data explained consumers' preferences effectively.
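The first stage of the analysis described above, extracting latent decision frames from verbal protocol data via singular value decomposition, can be sketched as follows. The term-by-respondent matrix is toy data, not the study's:

```python
import numpy as np

# Rows = verbal-protocol terms, columns = respondents
# (hypothetical counts of how often each respondent mentioned each term).
X = np.array([[3, 0, 2, 1],
              [0, 2, 0, 3],
              [1, 1, 2, 0],
              [2, 3, 0, 2]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The leading right-singular vector gives each respondent's loading on the
# first latent "decision frame"; the corresponding left-singular vector
# shows which terms define that frame. These loadings could then feed a
# logistic regression on brand choice, as in the study.
frame_scores = Vt[0]
print("singular values:", np.round(s, 2))
print("first-frame scores per respondent:", np.round(frame_scores, 2))
```

Singular values come back in descending order, so the first frame always captures the largest share of variation in the protocol matrix.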
Thermal-vacuum response of polymer matrix composites in space
NASA Technical Reports Server (NTRS)
Tennyson, R. C.; Matthews, R.
1993-01-01
This report describes a thermal-vacuum outgassing model and test protocol for predicting outgassing times and dimensional changes for polymer matrix composites. Experimental results derived from 'control' samples are used to provide the basis for analytical predictions to compare with the outgassing response of Long Duration Exposure Facility (LDEF) flight samples. Coefficient of thermal expansion (CTE) data are also presented. In addition, an example is given illustrating the dimensional change of a 'zero' CTE laminate due to moisture outgassing.
Downing, Amanda; Mortimer, Molly; Hiers, Jill
2016-03-01
Warfarin is a high alert medication and a challenge to dose and monitor. Pharmacist-driven warfarin management has been shown to decrease the time the international normalized ratio (INR) is out of range, which may reduce undesired outcomes. The purpose of this study is to assess the effect of the implementation of a pharmacist-driven warfarin management protocol on the achievement of therapeutic INRs. A warfarin management protocol was developed using evidence-based literature and similar protocols from other institutions. Pharmacists utilized the protocol to provide patient-specific warfarin dosing upon provider referral. To evaluate the protocol's impact, a retrospective chart review pre- and post-implementation was completed for admitted patients receiving warfarin. Three hundred twenty-seven charts were reviewed for pre- and post-implementation data. INRs within therapeutic range increased from 27.8% before protocol implementation to 38.5% after implementation. There was also a reduction in subtherapeutic INRs (55.3% pre to 39% post) and supratherapeutic INRs 5 or above (3.7% pre to 2.6% post). Supratherapeutic INRs between 3 and 5 did increase, from 13.2% before protocol implementation to 19.9% in the pharmacist-managed group. In addition to reducing the time to achievement of therapeutic INRs by 0.5 days, implementation of the protocol resulted in an increase in the number of patients with at least one therapeutic INR during admission (35% pre to 40% post). The implementation of a pharmacist-driven warfarin dosing protocol increased therapeutic INRs and decreased the time to therapeutic range, as well as the proportion of subtherapeutic INRs and supratherapeutic INRs 5 or greater. Additional benefits of the protocol include documentation of Joint Commission National Patient Safety Goal compliance, promotion of interdisciplinary collaboration, and increased continuity of care. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc.
All rights reserved.
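The outcome metric in the study above, the share of INR values falling below, inside, or above the therapeutic window, is straightforward to compute. A sketch assuming a common 2.0-3.0 target range; the study's indication-specific ranges may differ, and the INR values here are invented:

```python
def inr_distribution(inrs, low=2.0, high=3.0):
    """Return the fractions of INR values that are subtherapeutic,
    therapeutic, and supratherapeutic relative to [low, high]."""
    n = len(inrs)
    sub = sum(1 for x in inrs if x < low) / n
    therapeutic = sum(1 for x in inrs if low <= x <= high) / n
    supra = sum(1 for x in inrs if x > high) / n
    return sub, therapeutic, supra

# Hypothetical INR measurements from a chart review.
sub, ther, supra = inr_distribution([1.4, 2.1, 2.5, 3.4, 2.8, 1.8, 2.2, 5.1])
print(f"subtherapeutic {sub:.0%}, therapeutic {ther:.0%}, supratherapeutic {supra:.0%}")
```

The study's pre/post comparison amounts to computing these fractions separately for the two periods and comparing them.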
Benson, Charles T.; Critser, John K.
2014-01-01
Optimization of cryopreservation protocols for cells and tissues requires accurate models of heat and mass transport. Model selection often depends on the configuration of the tissue. Here, a mathematical and conceptual model of water and solute transport for whole hamster pancreatic islets has been developed and experimentally validated incorporating fundamental biophysical data from previous studies on individual hamster islet cells while retaining whole-islet structural information. It describes coupled transport of water and solutes through the islet by three methods: intracellularly, intercellularly, and in combination. In particular we use domain decomposition techniques to couple a transmembrane flux model with an interstitial mass transfer model. The only significant undetermined variable is the cellular surface area which is in contact with the intercellularly transported solutes, Ais. The model was validated and Ais determined using a 3 × 3 factorial experimental design blocked for experimental day. Whole islet physical experiments were compared with model predictions at three temperatures, three perfusing solutions, and three islet size groups. A mean of 4.4 islets were compared at each of the 27 experimental conditions and found to correlate with a coefficient of determination of 0.87 ± 0.06 (mean ± S.D.). Only the treatment variable of perfusing solution was found to be significant (p < 0.05). We have devised a model that retains much of the intrinsic geometric configuration of the system, and thus fewer laboratory experiments are needed to determine model parameters and thus to develop new optimized cryopreservation protocols. Additionally, extensions to ovarian follicles and other concentric tissue structures may be made. PMID:24950195
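The fit statistic reported above, a coefficient of determination between model predictions and whole-islet measurements, can be computed as follows. The observed/predicted series are invented for illustration, not the study's perfusion data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical normalized islet volume over time: measured vs. model-predicted.
obs = [1.00, 0.92, 0.85, 0.80, 0.78]
pred = [1.00, 0.90, 0.86, 0.79, 0.77]
print(round(r_squared(obs, pred), 3))
```

The study's 0.87 ± 0.06 figure is the mean of such per-islet coefficients across the 27 experimental conditions.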
Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass.
Paris, Michael T; Lafleur, Benoit; Dubin, Joel A; Mourtzakis, Marina
2017-10-01
Ultrasound is a non-invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four-site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole-body reference methods. Our primary objectives were to (i) compare the four-site protocol's ability to predict appendicular lean tissue mass from dual-energy X-ray absorptiometry; (ii) optimize the predictability of the four-site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. This observational cross-sectional study recruited 96 university and community dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole-body dual-energy X-ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine-site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four-site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. The four-site protocol was strongly associated (R 2 = 0.72) with appendicular lean tissue mass, but Bland-Altman analysis displayed wide limits of agreement (-5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four-site protocol, improved the association (R 2 = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (-3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89). The four-site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass. 
This optimized protocol can accurately identify low lean tissue mass, while still being easily applied at the bedside. © 2017 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass
Paris, Michael T.; Lafleur, Benoit; Dubin, Joel A.
2017-01-01
PMID:28722298
Research Supporting Satellite Communications Technology
NASA Technical Reports Server (NTRS)
Horan, Stephen; Lyman, Raphael
2005-01-01
This report describes the second year of research effort under the grant Research Supporting Satellite Communications Technology. The research program consists of two major projects: Fault Tolerant Link Establishment and the design of an Auto-Configurable Receiver. The Fault Tolerant Link Establishment protocol is being developed to assist the designers of satellite clusters to manage the inter-satellite communications. During this second year, the basic protocol design was validated with an extensive testing program. After this testing was completed, a channel error model was added to the protocol to permit the effects of channel errors to be measured. This error generation was used to test the effects of channel errors on Heartbeat and Token message passing. The C-language source code for the protocol modules was delivered to Goddard Space Flight Center for integration with the GSFC testbed. The need for a receiver autoconfiguration capability arises because, when a satellite-to-ground transmission is interrupted by an unexpected event, the satellite transponder may reset to an unknown state and begin transmitting in a new mode. During Year 2, we completed testing of these algorithms when noise-induced bit errors were introduced. We also developed and tested an algorithm for estimating the data rate, assuming an NRZ-formatted signal corrupted with additive white Gaussian noise, and we took initial steps in integrating both algorithms into the SDR test bed at GSFC.
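The report does not reproduce the data-rate estimation algorithm in detail; a simplified sketch of the underlying idea (timing the shortest interval between sign transitions of an NRZ waveform) might look like the following, with the sample rate, noise level, and bit period all invented for illustration:

```python
import random

random.seed(7)
FS = 8000            # sample rate (Hz) -- assumed for illustration
SAMPLES_PER_BIT = 8  # true bit period: FS / 8 = 1000 bit/s

# Synthesize an NRZ waveform (+1/-1 levels) with mild additive Gaussian noise.
bits = [random.choice([-1, 1]) for _ in range(200)]
signal = [b + random.gauss(0, 0.1) for b in bits for _ in range(SAMPLES_PER_BIT)]

# Locate sign transitions; in a long random stream, some pair of adjacent
# opposite bits occurs, so the smallest transition gap is one bit period.
transitions = [i for i in range(1, len(signal))
               if (signal[i] > 0) != (signal[i - 1] > 0)]
gaps = [b - a for a, b in zip(transitions, transitions[1:])]
est_samples_per_bit = min(gaps)
print("estimated data rate:", FS / est_samples_per_bit, "bit/s")
```

A robust implementation would average many transition intervals or use the signal's spectrum rather than a single minimum.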
Seismic Parameters of Mining-Induced Aftershock Sequences for Re-entry Protocol Development
NASA Astrophysics Data System (ADS)
Vallejos, Javier A.; Estay, Rodrigo A.
2018-03-01
A common characteristic of deep mines in hard rock is induced seismicity. This results from stress changes and rock failure around mining excavations. Following large seismic events, there is an increase in the levels of seismicity, which gradually decay with time. Restricting access to areas of a mine for enough time to allow this decay of seismic events is the main approach in re-entry strategies. The statistical properties of aftershock sequences can be studied with three scaling relations: (1) the Gutenberg-Richter frequency-magnitude relation, (2) the modified Omori law (MOL) for the temporal decay, and (3) Båth's law for the magnitude of the largest aftershock. In this paper, these three scaling relations, in addition to the stochastic Reasenberg-Jones model, are applied to study the characteristic parameters of 11 large-magnitude mining-induced aftershock sequences in four mines in Ontario, Canada. To provide guidelines for re-entry protocol development, the dependence of the scaling relation parameters on the magnitude of the main event is studied. Some relations between the parameters and the magnitude of the main event are found. Using these relationships and the scaling relations, a space-time-magnitude re-entry protocol is developed. These findings provide a first approximation to concise and well-justified guidelines for re-entry protocol development applicable to the range of mining conditions found in Ontario, Canada.
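The first two scaling relations are simple enough to sketch numerically. The snippet below uses Aki's maximum-likelihood b-value estimator and the modified Omori rate form; the magnitudes and parameter values are hypothetical, and the estimator omits the magnitude-binning correction:

```python
import math

# Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value for
# magnitudes above a completeness magnitude mc (no bin-width correction).
def b_value(magnitudes, mc):
    mean_m = sum(magnitudes) / len(magnitudes)
    return math.log10(math.e) / (mean_m - mc)

# Modified Omori law: aftershock rate n(t) = K / (c + t)**p.
def omori_rate(t, k=100.0, c=0.1, p=1.1):
    return k / (c + t) ** p

mags = [1.0, 1.2, 1.5, 2.0, 1.1]   # hypothetical aftershock magnitudes
print("b =", round(b_value(mags, mc=1.0), 2))
print("rate at t = 1:", round(omori_rate(1.0), 1), "events per unit time")
```

A re-entry rule of the kind the paper develops would integrate n(t) forward in time until the expected rate falls below a tolerable threshold.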
Heavy vehicle driver workload assessment. Task 1, task analysis data and protocols review
DOT National Transportation Integrated Search
This report contains a review of available task analytic data and protocols pertinent to heavy vehicle operation and determination of the availability and relevance of such data to heavy vehicle driver workload assessment. Additionally, a preliminary...
Efficacy of Multimodal Pain Control Protocol in the Setting of Total Hip Arthroplasty
Lee, Kyung-Jae; Bae, Ki-Cheor; Cho, Chul-Hyun; Kwon, Doo-Hyun
2009-01-01
Background This study evaluated the benefits and safety of a multimodal pain control protocol, which included a periarticular injection of local anesthetics, in patients undergoing total hip arthroplasty. Methods Between March 2006 and March 2007, 60 patients undergoing unilateral total hip arthroplasty were randomized to undergo either a multimodal pain control protocol or a conventional pain control protocol. The following parameters were compared: the preoperative and postoperative visual analogue scales (VAS), hospital stay, operative time, postoperative rehabilitation, additional painkiller consumption, and complication rates. Results There was no difference between the groups in terms of diagnosis, age, gender, and BMI. Although both groups had similar VAS scores in the preoperative period and on the fifth postoperative day, there was a significant difference between the groups over the four-day period after surgery. There were no differences in the hospital stay, operative time, additional painkiller consumption, or complication rate between the groups. The average time for comfortable crutch ambulation was 2.8 days in the multimodal pain control protocol group and 5.3 days in the control group. Conclusions The multimodal pain control protocol can significantly reduce the level of postoperative pain and improve patients' satisfaction, with no apparent risks, after total hip arthroplasty. PMID:19885051
An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.
ERIC Educational Resources Information Center
Chen, I-Min A.; Markowitz, Victor M.
1995-01-01
Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…
Protocol for a Delay-Tolerant Data-Communication Network
NASA Technical Reports Server (NTRS)
Torgerson, Jordan; Hooke, Adrian; Burleigh, Scott; Fall, Kevin
2004-01-01
As its name partly indicates, the Delay-Tolerant Networking (DTN) Bundle Protocol is a protocol for delay-tolerant transmission of data via communication networks. This protocol was conceived as a result of studies of how to adapt Internet protocols so that Internet-like services could be provided across interplanetary distances in support of deep-space exploration. The protocol, and software to implement the protocol, is being developed in collaboration among experts at NASA's Jet Propulsion Laboratory and other institutions. No current Internet protocols can accommodate long transmission delay times or intermittent link connectivity. The DTN Bundle Protocol represents a departure from the standard Internet assumption that a continuous path is available from a host computer to a client computer: It provides for routing of data through networks that may be disjointed and may be characterized by long transmission delays. In addition to networks that include deep-space communication links, examples of such networks include terrestrial ones within which branches are temporarily disconnected. The protocol is based partly on the definition of a message-based overlay above the transport layers of the networks on which it is hosted.
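The core departure from Internet protocols, store-and-forward custody of bundles across intermittent links, can be caricatured in a few lines. This is a conceptual sketch, not the actual Bundle Protocol (RFC 5050) API; the class and bundle names are invented:

```python
from collections import deque

# Minimal sketch of the bundle-protocol idea: a node stores bundles while
# its outbound link is down and forwards them when connectivity returns.
class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # simplified stand-in for persistent custody storage
        self.delivered = []

    def receive(self, bundle):
        self.queue.append(bundle)

    def forward(self, neighbor, link_up):
        # Unlike TCP/IP, no end-to-end path is assumed: bundles simply
        # wait in storage until the next hop becomes reachable.
        while link_up and self.queue:
            neighbor.delivered.append(self.queue.popleft())

a, b = Node("lander"), Node("orbiter")
a.receive("science-data-1")
a.receive("science-data-2")
a.forward(b, link_up=False)   # contact not available: bundles are retained
a.forward(b, link_up=True)    # contact window opens: bundles flow
print(b.delivered)            # -> ['science-data-1', 'science-data-2']
```

The real protocol adds custody transfer acknowledgments, expiry times, and routing over the overlay, but the retain-until-reachable behavior is the essential contrast with end-to-end Internet transport.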
Brown, Andrew D; Marotta, Thomas R
2017-02-01
Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm (random forest, support vector machine, or k-nearest neighbor) to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
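As a rough illustration of the k-nearest-neighbor variant of such a pipeline (not the authors' implementation; the clinical indications and protocol labels below are invented), a bag-of-words classifier over narrative indications can be sketched with the standard library:

```python
from collections import Counter
import math

# Invented training examples: (clinical indication text, protocol label).
train = [
    ("new onset seizures rule out mass", "tumor protocol"),
    ("acute stroke symptoms left weakness", "stroke protocol"),
    ("chronic headache worsening", "headache protocol"),
    ("sudden aphasia and facial droop", "stroke protocol"),
]

def vec(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text, k=1):
    # Rank training examples by similarity and vote among the top k.
    scored = sorted(train, key=lambda ex: cosine(vec(text), vec(ex[0])),
                    reverse=True)
    labels = [label for _, label in scored[:k]]
    return Counter(labels).most_common(1)[0][0]

print(predict("acute left sided weakness suspect stroke"))  # -> stroke protocol
```

A production system would add TF-IDF weighting, the demographic features the paper mentions, and proper train/test evaluation.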
Lin, Yang; Gil, Chang-Hyun; Yoder, Mervin C
2017-11-01
The emergence of induced pluripotent stem cell (iPSC) technology paves the way to generate large numbers of patient-specific endothelial cells (ECs) that can be potentially delivered for regenerative medicine in patients with cardiovascular disease. In the last decade, numerous protocols that differentiate EC from iPSC have been developed by many groups. In this review, we will discuss several common strategies that have been optimized for human iPSC-EC differentiation and subsequent studies that have evaluated the potential of human iPSC-EC as a cell therapy or as a tool in disease modeling. In addition, we will emphasize the importance of using in vivo vessel-forming ability and in vitro clonogenic colony-forming potential as a gold standard with which to evaluate the quality of human iPSC-EC derived from various protocols. © 2017 American Heart Association, Inc.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is resolved by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
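For orientation, the routine (non-predictive) discrete-time consensus iteration that the paper accelerates can be sketched as follows; the topology, gain, and initial states are arbitrary examples, and the multi-step predictive feedback itself is not reproduced here:

```python
# Routine first-order discrete-time consensus on a directed ring
# (which contains a spanning tree, as the paper's topology requires).
def consensus_step(x, neighbors, eps=0.3):
    # x[i] <- x[i] + eps * sum_j (x[j] - x[i]) over in-neighbors j of i.
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

neighbors = {0: [3], 1: [0], 2: [1], 3: [2]}   # directed ring, 4 agents
x = [1.0, 5.0, 3.0, 7.0]
for _ in range(200):
    x = consensus_step(x, neighbors)
print([round(v, 3) for v in x])   # -> [4.0, 4.0, 4.0, 4.0]
```

Because the ring is balanced, the states converge to the initial average (4.0); the paper's contribution is to shrink the per-step convergence factor by predicting outputs q steps ahead.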
Phase space dynamics and control of the quantum particles associated to hypergraph states
NASA Astrophysics Data System (ADS)
Berec, Vesna
2015-05-01
As today's nanotechnology becomes primarily oriented toward producing and manipulating materials at the subatomic level, with interconnect density exceeding hundreds of devices on a single chip, the manipulation of semiconductor nanostructures at the subatomic level sets as its prime tasks the preservation and adequate transmission of information encoded in specified (quantum) states. The presented study employs a quantum communication protocol based on the hypergraph network model, in which numerical solutions of the equations of motion of quantum particles are associated with vertices (assembled with the device chip), which follow specific controllable paths in phase space. We address these findings toward the ultimate quest of predicting and selectively controlling quantum particle trajectories. In addition, the presented protocols could represent a valuable tool for reducing background noise and uncertainty in low-dimensional, operationally meaningful, scalable complex systems.
High-throughput mouse genotyping using robotics automation.
Linask, Kaari L; Lo, Cecilia W
2005-02-01
The use of mouse models is rapidly expanding in biomedical research. This has dictated the need for the rapid genotyping of mutant mouse colonies for more efficient utilization of animal holding space. We have established a high-throughput protocol for mouse genotyping using two robotics workstations: a liquid-handling robot to assemble PCR and a microfluidics electrophoresis robot for PCR product analysis. This dual-robotics setup incurs lower start-up costs than a fully automated system while still minimizing human intervention. Essential to this automation scheme is the construction of a database containing customized scripts for programming the robotics workstations. Using these scripts and the robotics systems, multiple combinations of genotyping reactions can be assembled simultaneously, allowing even complex genotyping data to be generated rapidly with consistency and accuracy. A detailed protocol, database, scripts, and additional background information are available at http://dir.nhlbi.nih.gov/labs/ldb-chd/autogene/.
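The database scripts themselves are at the URL above; as a purely hypothetical illustration of the kind of bookkeeping such automation requires (not the authors' code), a helper that lays out every (sample, assay) PCR reaction across a 96-well plate could look like:

```python
from itertools import product

# Hypothetical helper: assign each (sample, assay) PCR reaction to a well
# on a standard 96-well plate, row-major from A1 to H12.
def plate_layout(samples, assays):
    wells = [f"{row}{col}" for row, col in product("ABCDEFGH", range(1, 13))]
    reactions = list(product(samples, assays))
    if len(reactions) > len(wells):
        raise ValueError("more reactions than wells on one plate")
    return {well: rx for well, rx in zip(wells, reactions)}

layout = plate_layout(["mouse-01", "mouse-02"], ["wt-allele", "mut-allele"])
print(layout["A1"])   # -> ('mouse-01', 'wt-allele')
```

A liquid-handling robot script would then be generated from this mapping, which is where consistency between the PCR-assembly and electrophoresis steps comes from.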
Evaluating anesthetic protocols for functional blood flow imaging in the rat eye
NASA Astrophysics Data System (ADS)
Moult, Eric M.; Choi, WooJhon; Boas, David A.; Baumann, Bernhard; Clermont, Allen C.; Feener, Edward P.; Fujimoto, James G.
2017-01-01
The purpose of this study is to evaluate the suitability of five different anesthetic protocols (isoflurane, isoflurane-xylazine, pentobarbital, ketamine-xylazine, and ketamine-xylazine-vecuronium) for functional blood flow imaging in the rat eye. Total retinal blood flow was measured at a series of time points using an ultrahigh-speed Doppler OCT system. Additionally, each anesthetic protocol was qualitatively evaluated according to the following criteria: (1) time-stability of blood flow, (2) overall rate of blood flow, (3) ocular immobilization, and (4) simplicity. We observed that different anesthetic protocols produced markedly different blood flows. Different anesthetic protocols also varied with respect to the four evaluated criteria. These findings suggest that the choice of anesthetic protocol should be carefully considered when designing and interpreting functional blood flow studies in the rat eye.
A Weak Value Based QKD Protocol Robust Against Detector Attacks
NASA Astrophysics Data System (ADS)
Troupe, James
2015-03-01
We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control and should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.
Microwave Protocols for Paraffin Microtechnique and In Situ Localization in Plants
NASA Astrophysics Data System (ADS)
Schichnes, Denise; Nemson, Jeff; Sohlberg, Lorraine; Ruzin, Steven E.
1998-10-01
We have developed a microwave protocol for a paraffin-embedding microtechnique of the shoot apical meristem of Zea mays and have successfully applied this protocol to other plant tissues. This protocol decreases the time required for all aspects of microtechnique tissue processing, including fixation (24 hr to 15 min), dehydration (73 hr to 10 min), and infiltration (96 hr to 3 hr). Additionally, the time required to adhere paraffin ribbons to gelatin-coated slides and for the Johanson's safranin O, fast green FCF staining protocol has been significantly decreased. Using this technique, the quality of tissue preservation and the subsequent in situ localization of KNOTTED mRNA were improved.
Predicting the behavior of microfluidic circuits made from discrete elements
Bhargava, Krisna C.; Thompson, Bryant; Iqbal, Danish; Malmstadt, Noah
2015-01-01
Microfluidic devices can be used to execute a variety of continuous flow analytical and synthetic chemistry protocols with a great degree of precision. The growing availability of additive manufacturing has enabled the design of microfluidic devices with new functionality and complexity. However, these devices are prone to larger manufacturing variation than is typical of those made with micromachining or soft lithography. In this report, we demonstrate a design-for-manufacturing workflow that addresses performance variation at the microfluidic element and circuit level, in the context of mass manufacturing and additive manufacturing. Our approach relies on discrete microfluidic elements that are characterized by their terminal hydraulic resistance and associated tolerance. Network analysis is employed to construct simple analytical design rules for model microfluidic circuits. Monte Carlo analysis is employed at both the individual element and circuit level to establish expected performance metrics for several specific circuit configurations. A protocol based on osmometry is used to experimentally probe mixing behavior in circuits in order to validate these approaches. The overall workflow is applied to two application circuits with immediate use on the benchtop: series and parallel mixing circuits that are modularly programmable, virtually predictable, highly precise, and operable by hand. PMID:26516059
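The element-level resistance algebra and Monte Carlo tolerance analysis described above can be sketched briefly; the nominal resistances and the 5% tolerance below are invented, and real elements would be characterized empirically:

```python
import random

# Hydraulic-resistor analogy for discrete microfluidic elements:
# series resistances add, parallel resistances combine reciprocally.
def series(resistances):
    return sum(resistances)

def parallel(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

# Monte Carlo tolerance analysis: perturb each nominal element resistance
# by a manufacturing tolerance and observe the spread in circuit resistance.
random.seed(1)
nominal, tol, trials = [10.0, 20.0, 30.0], 0.05, 10_000
samples = [series([r * random.gauss(1.0, tol) for r in nominal])
           for _ in range(trials)]
mean = sum(samples) / trials
sd = (sum((s - mean) ** 2 for s in samples) / trials) ** 0.5
print(f"series R: nominal={series(nominal):.1f}, mc mean={mean:.1f}, sd={sd:.2f}")
```

The circuit-level spread (here roughly sqrt(0.5² + 1.0² + 1.5²) in nominal units) is what sets the expected performance tolerance of a mixing ratio in the paper's design rules.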
Quantum protocols within Spekkens' toy model
NASA Astrophysics Data System (ADS)
Disilvestro, Leonardo; Markham, Damian
2017-05-01
Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k-threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Dodd-McCue, Diane; Tartaglia, Alexander
2005-01-01
The Family Communication Coordinator (FCC) Protocol was implemented to provide early family intervention and to facilitate effective communications during potential organ donation cases. Previous studies found the Protocol associated with improved donor outcome measures and with reduced role stress for ICU nurses caring for potential donors. The present study examines the impact of the Protocol on the perceived role stress of hospital chaplains serving as FCCs. All hospital chaplains serving as FCCs at an academic teaching hospital were surveyed. Their perceptions of job dimensions, role stress, job satisfaction, and commitment were measured; interviews and secondary data supplemented the surveys. The findings demonstrate that the FCC Protocol is associated with improved role stress, specifically role ambiguity and role conflict, among hospital chaplains serving as FCCs. Additionally, the findings suggest that satisfaction with the Protocol may be associated with experience with the Protocol.
Code of Federal Regulations, 2014 CFR
2014-07-01
... permitted under the Montreal Protocol or to receive from the person for the current control period some... production quantities: (A) The maximum production that the nation is allowed under the Protocol minus the...
A shorter and more specific oral sensitization-based experimental model of food allergy in mice.
Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica
2012-07-31
Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks, followed by a challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.
Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young
2016-01-01
Biometric-based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric-based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks, and clock synchronization problems. In addition, this paper proposes an enhanced biometric-based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols. PMID:27163786
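Lu et al.'s ECC/smart-card construction cannot be reproduced in a few lines, but the mutual-authentication property under discussion can be illustrated with a minimal pre-shared-key challenge-response sketch using only the Python standard library (all names and the key are hypothetical, and this is not the paper's protocol):

```python
import hmac, hashlib, secrets

# Illustrative mutual challenge-response over a pre-shared key. A real
# multi-server protocol would derive per-session keys via ECC and bind
# in biometric and smart-card secrets; none of that is modeled here.
KEY = b"pre-shared-secret"   # hypothetical long-term key

def respond(key, challenge):
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Each side challenges the other with a fresh nonce; each verifies the
# peer's response with hmac.compare_digest to avoid timing side channels.
user_nonce = secrets.token_bytes(16)
server_nonce = secrets.token_bytes(16)

server_proof = respond(KEY, user_nonce)     # server answers user's challenge
user_proof = respond(KEY, server_nonce)     # user answers server's challenge

assert hmac.compare_digest(server_proof, respond(KEY, user_nonce))
assert hmac.compare_digest(user_proof, respond(KEY, server_nonce))
print("mutual authentication succeeded")
```

The fresh nonces are what prevent replay: an attacker who recorded an old proof cannot answer a new challenge without the key.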
The Interlibrary Loan Protocol: An OSI Solution to ILL Messaging.
ERIC Educational Resources Information Center
Turner, Fay
1990-01-01
Discusses the interlibrary loan (ILL) protocol, a standard based on the principles of the Open Systems Interconnection (OSI) Reference Model. Benefits derived from protocol use are described, the status of the protocol as an international standard is reviewed, and steps taken by the National Library of Canada to facilitate migration to an ILL…
Cirrus: Inducing Subject Models from Protocol Data
1988-08-16
Protocol analysis is used routinely by psychologists and other behavioral scientists, and more recently by knowledge engineers who wish to embed the...knowledge of human experts in an expert system. However, protocol analysis is notoriously difficult and time consuming. Several systems have been developed to...formal trace of it (a problem behavior graph). The system, however, did not produce an abstract model of the subject. Bhaskar and Simon (1977) avoided the
The relationship between Q gamma and Ca release from the sarcoplasmic reticulum in skeletal muscle
1991-01-01
Asymmetric membrane currents and fluxes of Ca2+ release were determined in skeletal muscle fibers voltage clamped in a Vaseline-gap chamber. The conditioning pulse protocol 1 for suppressing Ca2+ release and the "hump" component of charge movement current (I gamma), described in the first paper of this series, was applied at different test pulse voltages. The amplitude of the current suppressed during the ON transient reached a maximum at slightly suprathreshold test voltages (- 50 to -40 mV) and decayed at higher voltages. The component of charge movement current suppressed by 20 microM tetracaine also went through a maximum at low pulse voltages. This anomalous voltage dependence is thus a property of I gamma, defined by either the conditioning protocol or the tetracaine effect. A negative (inward-going) phase was often observed in the asymmetric current during the ON of depolarizing pulses. This inward phase was shown to be an intramembranous charge movement based on (a) its presence in the records of total membrane current, (b) its voltage dependence, with a maximum at slightly suprathreshold voltages, (c) its association with a "hump" in the asymmetric current, (d) its inhibition by interventions that reduce the "hump", (e) equality of ON and OFF areas in the records of asymmetric current presenting this inward phase, and (f) its kinetic relationship with the time derivative of Ca release flux. The nonmonotonic voltage dependence of the amplitude of the hump and the possibility of an inward phase of intramembranous charge movement are used as the main criteria in the quantitative testing of a specific model. According to this model, released Ca2+ binds to negatively charged sites on the myoplasmic face of the voltage sensor and increases the local transmembrane potential, thus driving additional charge movement (the hump). 
This model successfully predicts the anomalous voltage dependence and all the kinetic properties of I gamma described in the previous papers. It also accounts for the inward phase in total asymmetric current and in the current suppressed by protocol 1. According to this model, I gamma accompanies activating transitions at the same set of voltage sensors as I beta. Therefore it should open additional release channels, which in turn should cause more I gamma, providing a positive feedback mechanism in the regulation of calcium release. PMID:1650812
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers verification and protocol validation for distributed computer and communication systems using a computer-aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, inclusive of the formulation of new qualitative and quantitative measures and time-dependent behavior; and (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
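As a toy illustration of conformance testing against a formal model, an implementation conforms when its outputs match the specification on every test sequence. The three-transition FSM and names below (SPEC, run_fsm, conforms) are hypothetical, not from the paper:

```python
# Specification FSM: (state, input) -> (next_state, output)
SPEC = {
    ("idle", "connect"): ("open", "ack"),
    ("open", "data"):    ("open", "ack"),
    ("open", "close"):   ("idle", "fin"),
}

def run_fsm(fsm, inputs, start="idle"):
    """Drive an FSM through an input sequence, collecting its outputs."""
    state, outputs = start, []
    for sym in inputs:
        state, out = fsm[(state, sym)]
        outputs.append(out)
    return outputs

def conforms(iut, test_suite):
    """Implementation under test conforms if it matches SPEC on every sequence."""
    return all(run_fsm(iut, seq) == run_fsm(SPEC, seq) for seq in test_suite)

suite = [["connect", "data", "close"], ["connect", "close"]]
faulty = dict(SPEC)
faulty[("open", "close")] = ("idle", "ack")   # wrong output on close
```

A faulty implementation that emits the wrong output on one transition is caught by any test sequence exercising that transition.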
Resource Tracking Model Updates and Trade Studies
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model (RTM) has been updated to capture system manager and project manager inputs. Both the Trick/GUNNS RTM simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycling of vehicle oxygen. Additionally, simulation of EVAs conducted from the exploration module was added. Since the focus of the exploration module is to provide a habitat during deep-space operations, the EVA simulation is based on ISS EVA protocols and processes. Case studies have been run to show the relative effect of performance changes on vehicle resources.
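The PPA's contribution to the mass balance follows from standard stoichiometry (Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O; plasma pyrolysis: 2 CH4 -> C2H2 + 3 H2). The mole-level sketch below is illustrative only and is not the RTM spreadsheet itself:

```python
def sabatier(mol_co2, mol_h2):
    """Moles of CH4 and H2O produced; extent limited by the scarcer
    reagent at the 4:1 H2:CO2 stoichiometry."""
    reacted = min(mol_co2, mol_h2 / 4.0)
    return reacted, 2.0 * reacted            # CH4, H2O

def ppa(mol_ch4):
    """H2 recovered by plasma pyrolysis of methane that would
    otherwise be vented: 3 mol H2 per 2 mol CH4 (~75% of the H2)."""
    return 1.5 * mol_ch4

ch4, h2o = sabatier(1.0, 4.0)    # 1 mol CO2 fully reduced
h2_back = ppa(ch4)               # 1.5 mol H2 returned to the loop
net_h2_loss = 4.0 - h2_back      # vs. losing all 4.0 mol with CH4 vented
```

This shows why adding the PPA increases oxygen recycle: less make-up hydrogen (and hence less water electrolysis feed) is lost overboard per mole of CO2 reduced.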
Dynamics of neural cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
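The bidirectional synchronization described above can be reproduced with a minimal tree parity machine simulation. K = 3 hidden units matches the abstract; N and L are small illustrative values:

```python
import random

K, N, L = 3, 10, 3   # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """Hidden-unit signs and the common output tau (their product)."""
    sigma = [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) > 0 else -1
             for k in range(K)]
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian(w, x, tau, sigma):
    """Update only the hidden units that agree with the common output,
    clipping weights to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = [max(-L, min(L, w[k][i] + x[k][i] * tau))
                    for i in range(N)]

random.seed(1)
wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
steps = 0
while wa != wb and steps < 20000:
    x = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    ta, sa = tpm_output(wa, x)
    tb, sb = tpm_output(wb, x)
    if ta == tb:          # attractive step: both parties move together
        hebbian(wa, x, ta, sa)
        hebbian(wb, x, tb, sb)
    steps += 1
```

Because both parties move on agreeing outputs while an attacker can only learn unidirectionally, the two weight sets reach full overlap far faster than an eavesdropper can, which is the security argument quantified in the paper.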
UPM: unified policy-based network management
NASA Astrophysics Data System (ADS)
Law, Eddie; Saxena, Achint
2001-07-01
Besides providing network management to the Internet, it has become essential to offer different Quality of Service (QoS) levels to users. Policy-based management provides control over network routers to achieve this goal. The Internet Engineering Task Force (IETF) has proposed a two-tier architecture whose implementation is based on the Common Open Policy Service (COPS) protocol and the Lightweight Directory Access Protocol (LDAP). However, there are several limitations to this design, such as scalability and cross-vendor hardware compatibility. To address these issues, we present a functionally enhanced multi-tier policy management architecture design in this paper. Several extensions are introduced, thereby adding flexibility and scalability. In particular, an intermediate entity between the policy server and the policy rule database, called the Policy Enforcement Agent (PEA), is introduced. By keeping internal data in a common format, using a standard protocol, and by interpreting and translating request and decision messages from multi-vendor hardware, this agent allows a dynamic Unified Information Model throughout the architecture. We have tailor-made this information system to save policy rules in the directory server and to allow execution of policy rules with dynamic addition of new equipment at run-time.
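The PEA's translation role can be sketched as a normalization layer in front of the policy server; the vendor message field names below are hypothetical, not from the paper:

```python
# Per-vendor parsers mapping device-specific messages into one
# common internal format (the "Unified Information Model" idea).
VENDOR_PARSERS = {
    "vendorA": lambda m: {"device": m["dev"],  "action": m["act"]},
    "vendorB": lambda m: {"device": m["node"], "action": m["operation"]},
}

def to_unified(vendor, message):
    """Normalize a vendor-specific request before policy evaluation."""
    return VENDOR_PARSERS[vendor](message)

req_a = to_unified("vendorA", {"dev": "rtr1", "act": "set-qos"})
req_b = to_unified("vendorB", {"node": "rtr2", "operation": "set-qos"})
```

Adding hardware from a new vendor then means registering one more parser, without touching the policy server, which is the scalability point the architecture makes.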
Prigoff, Jake G; Swain, Gary W; Divino, Celia M
2016-05-01
Predicting the presence of a persistent common bile duct (CBD) stone is a difficult and expensive task. The aim of this study is to determine whether a previously described protocol-based scoring system is a cost-effective strategy. The protocol includes all patients with gallstone pancreatitis and stratifies them, based on laboratory values and imaging, into high, medium, and low likelihood of persistent stones. The patient's stratification then dictates the next course of management. A decision analytic model was developed to compare the costs for patients who followed the protocol versus those who did not. Clinical data model inputs were obtained from a prospective study conducted at The Mount Sinai Medical Center to validate the protocol from Oct 2009 to May 2013. The study included all patients presenting with gallstone pancreatitis regardless of disease severity. Seventy-three patients followed the proposed protocol and 32 did not. The protocol group cost an average of $14,962/patient and the non-protocol group $17,138/patient in procedural costs. Mean length of stay for protocol and non-protocol patients was 5.6 and 7.7 days, respectively. The proposed protocol is a cost-effective way to determine the course for patients with gallstone pancreatitis, reducing total procedural costs by over 12%.
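The reported averages imply the headline saving directly. A minimal decision-analytic sketch follows; the per-day cost hook is illustrative and left at zero here, since the abstract reports procedural costs only:

```python
def expected_cost(procedural_cost, los_days, cost_per_day=0.0):
    """Arm cost = procedural cost plus (optionally) length-of-stay cost."""
    return procedural_cost + los_days * cost_per_day

protocol     = expected_cost(14962, 5.6)   # protocol group, 5.6-day stay
non_protocol = expected_cost(17138, 7.7)   # non-protocol group, 7.7-day stay
savings_pct = 100.0 * (non_protocol - protocol) / non_protocol
```

The procedural-cost difference alone yields roughly a 12.7% saving, consistent with the "over 12%" figure; folding in a per-day hospitalization cost would widen the gap given the shorter stay.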
Internet-Protocol-Based Satellite Bus Architecture Designed
NASA Technical Reports Server (NTRS)
Slywczak, Richard A.
2004-01-01
NASA is designing future complex satellite missions ranging from single satellites and constellations to space networks and sensor webs. These missions require more interoperability, autonomy, and coordination than previous missions; in addition, a desire exists to have scientists retrieve data directly from the satellite rather than from a central distribution source. To meet these goals, NASA has been studying the possibility of extending the Transmission Control Protocol/Internet Protocol (TCP/IP) suite for space-based applications.
Broadening and Simplifying the First SETI Protocol
NASA Astrophysics Data System (ADS)
Michaud, M. A. G.
The Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence, known informally as the First SETI Protocol, is the primary existing international guidance on this subject. During the fifteen years since the document was issued, several people have suggested revisions or additional protocols. This article proposes a broadened and simplified text that would apply to the detection of alien technology in our solar system as well as to electromagnetic signals from more remote sources.
Vignion-Dewalle, Anne-Sophie; Baert, Gregory; Devos, Laura; Thecua, Elise; Vicentini, Claire; Mortier, Laurent; Mordon, Serge
2017-09-01
Photodynamic therapy (PDT) is an emerging treatment modality for various diseases, especially for dermatological conditions. Although the standard PDT protocol for the treatment of actinic keratoses in Europe has been shown to be effective, treatment-associated pain is often observed in patients. Different modifications of this protocol aimed at decreasing pain have been investigated; a decrease in fluence rate seems to be a promising solution. Moreover, it has been suggested that light fractionation significantly increases the efficacy of PDT. Based on a flexible light-emitting textile, the FLEXITHERALIGHT device specifically provides fractionated illumination at a fluence rate more than six times lower than that of the standard protocol. In a recently completed clinical trial of PDT for the treatment of actinic keratosis, the non-inferiority of a protocol involving illumination with the FLEXITHERALIGHT device after a short incubation time, referred to as the FLEXITHERALIGHT protocol, has been assessed compared to the standard protocol. In this paper, we propose a comparison, through mathematical modeling, of the two above-mentioned 635 nm red light protocols with 37 J/cm2 in the PDT treatment of actinic keratosis: the standard protocol and the FLEXITHERALIGHT one. This mathematical modeling, which slightly differs from the one we have already published, enables the local damage induced by the therapy to be estimated. The comparison, performed in terms of the local damage induced by the therapy, demonstrates that the FLEXITHERALIGHT protocol with lower fluence rate, light fractionation and shorter incubation time is somewhat less efficient than the standard protocol. Nevertheless, from the clinical trial results, the FLEXITHERALIGHT protocol results in non-inferior response rates compared to the standard protocol.
This finding raises the question of whether the PDT local damage achieved by the FLEXITHERALIGHT protocol (respectively, the standard protocol) is sufficient (respectively, excessive) to destroy actinic keratosis cells. Lasers Surg. Med. 49:686-697, 2017. © 2017 Wiley Periodicals, Inc.
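Since both protocols deliver the same 37 J/cm2, the trade-off is entirely in rate and timing. A back-of-the-envelope comparison of lamp-on time, assuming a typical 75 mW/cm2 standard fluence rate (an assumption for illustration, not a figure from the paper; the paper's damage estimate uses a full photochemical model, not this arithmetic):

```python
TARGET_FLUENCE = 37.0                         # J/cm^2, shared by both protocols

def lamp_on_minutes(fluence_rate_mw_cm2):
    """Illumination time needed to reach the target fluence."""
    return TARGET_FLUENCE / (fluence_rate_mw_cm2 / 1000.0) / 60.0

standard_min = lamp_on_minutes(75.0)          # assumed standard rate
flexi_min = lamp_on_minutes(75.0 / 6.1)       # "more than six times lower"
```

The same dose stretched over a more-than-sixfold longer illumination is what allows lower peak rates (less pain) at the cost of the somewhat lower modeled damage reported above.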
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol.
The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
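Residual-error statistics such as those above translate into setup margins via a standard population recipe; the widely used van Herk formula (2.5 Sigma + 0.7 sigma) is shown here as an example, though the abstract does not state which recipe the authors applied:

```python
def van_herk_margin(systematic_mm, random_mm):
    """Population setup margin (mm) from systematic (Sigma) and
    random (sigma) setup errors: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * systematic_mm + 0.7 * random_mm

# Anteroposterior residuals under the fixed vertical couch position
# protocol, as reported in the abstract (1.8 mm systematic, 2.2 mm random):
ap_margin = van_herk_margin(1.8, 2.2)
```

Because the systematic component is weighted 2.5x, protocols that shrink systematic errors (NAL, fixed-action-level) reduce margins far more effectively than ones that only tame random errors.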
Modelling and regulating of cardio-respiratory response for the enhancement of interval training
2014-01-01
Background The interval training method is a well-known exercise protocol that helps strengthen and improve cardiovascular fitness. Purpose To develop an effective training protocol to improve cardiovascular fitness based on modelling and analysis of Heart Rate (HR) and Oxygen Uptake (VO2) dynamics. Methods In order to model the cardiorespiratory response to the onset and offset of exercise, a gas analyzer (K4b2, Cosmed) was used to monitor and record the heart rate and oxygen uptake of ten healthy male subjects. An interval training protocol was developed for young healthy users and was simulated using a proposed RC switching model, which was presented to accommodate the variations of the cardiorespiratory dynamics to running exercises. A hybrid system model was presented to describe the adaptation process and a multi-loop PI control scheme was designed for the tuning of the interval training regime. Results By observing the original data for each subject, we can clearly identify that all subjects have similar HR and VO2 profiles. The proposed model is capable of simulating the exercise responses during onset and offset exercises; it ensures the continuity of the outputs within the interval training protocol. Under some mild assumptions, a hybrid system model can describe the adaptation process and accordingly a multi-loop PI controller can be designed for the tuning of the interval training protocol. The self-adaptation feature of the proposed controller gives the exerciser the opportunity to reach the desired setpoints after a certain number of training sessions. Conclusions The established interval training protocol targets a range of 70-80% of HRmax which is mainly a training zone for the purpose of cardiovascular system development and improvement. Furthermore, the proposed multi-loop feedback controller has the potential to tune the interval training protocol according to the feedback from an individual exerciser. PMID:24499131
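One loop of the scheme can be sketched as PI control of a first-order ("RC"-type) heart-rate response to treadmill speed, driven toward a target in the 70-80% HRmax zone. The gains, time constant and plant gain below are illustrative assumptions, not identified parameters from the study:

```python
def simulate(hr_target, steps=600, dt=1.0):
    """PI control of a first-order HR model: resting HR 70 bpm,
    time constant tau (s), HR gain per unit treadmill speed."""
    hr, integ = 70.0, 0.0
    kp, ki, tau, gain = 0.05, 0.005, 40.0, 6.0
    for _ in range(steps):
        err = hr_target - hr
        integ += err * dt
        speed = max(0.0, kp * err + ki * integ)         # PI control law
        hr += dt * (-(hr - 70.0) + gain * speed) / tau  # RC-type dynamics
    return hr

final_hr = simulate(150.0)   # e.g. ~75% HRmax for a young adult
```

The integral term removes the steady-state error, so the controlled heart rate settles at the setpoint; in the paper's multi-loop version, VO2 dynamics and per-session adaptation close additional loops around this one.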
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ana J. Molinari; Emiliano C. C. Pozzi; Andrea Monti Hughes
In the present study we evaluated the therapeutic effect and/or potential radiotoxicity of the novel “Tandem” Boron Neutron Capture Therapy (T-BNCT) for the treatment of oral cancer in the hamster cheek pouch model at the RA-3 Nuclear Reactor. Two groups of animals were treated with “Tandem BNCT”, i.e. BNCT mediated by boronophenylalanine (BPA) followed by BNCT mediated by sodium decahydrodecaborate (GB-10) either 24 h (T-24h-BNCT) or 48 h (T-48h-BNCT) later. A total tumor dose-matched single application of BNCT mediated by BPA and GB-10 administered jointly [(BPA + GB-10)-BNCT] was administered to an additional group of animals. At 28 days post-treatment, T-24h-BNCT and T-48h-BNCT induced, respectively, overall tumor control (OTC) of 95% and 91%, with no statistically significant differences between protocols. Tumor response for the single application of (BPA + GB-10)-BNCT was 75%, significantly lower than for T-BNCT. The T-BNCT protocols and (BPA + GB-10)-BNCT induced reversible mucositis in dose-limiting precancerous tissue around treated tumors, reaching Grade 3/4 mucositis in 47% and 60% of the animals, respectively. No normal tissue radiotoxicity was associated with tumor control for any of the protocols. “Tandem” BNCT enhances tumor control in oral cancer and reduces or, at worst, does not increase, mucositis in dose-limiting precancerous tissue.
Study of the development of fetal baboon brain using magnetic resonance imaging at 3 Tesla
Liu, Feng; Garland, Marianne; Duan, Yunsuo; Stark, Raymond I.; Xu, Dongrong; Dong, Zhengchao; Bansal, Ravi; Peterson, Bradley S.; Kangarlu, Alayar
2008-01-01
Direct observational data on the development of the brains of human and nonhuman primates are remarkably scant, and most of our understanding of primate brain development is extrapolated from findings in rodent models. Magnetic resonance imaging (MRI) is a promising tool for the noninvasive, longitudinal study of the developing primate brain. We devised a protocol to scan pregnant baboons serially at 3 T for up to 3 h per session. Seven baboons were scanned 1–6 times, beginning as early as 56 days post-conceptional age, and as late as 185 days (term ~185 days). Successful scanning of the fetal baboon required careful animal preparation and anesthesia, in addition to optimization of the scanning protocol. We successfully acquired maps of relaxation times (T1 and T2) and high-resolution anatomical images of the brains of fetal baboons at multiple time points during the course of gestation. These images demonstrated the convergence of gray and white matter contrast near term, and furthermore demonstrated that the loss of contrast at that age is a consequence of the continuous change in relaxation times during fetal brain development. These data also demonstrate that maps of relaxation times have clear advantages over relaxation-time-weighted images for tracking changes in brain structure during fetal development. This protocol for in utero MRI of fetal baboon brains will help to advance the use of nonhuman primate models to study fetal brain development longitudinally. PMID:18155925
IVOA Credential Delegation Protocol Version 1.0
NASA Astrophysics Data System (ADS)
Plante, Raymond; Graham, Matthew; Rixon, Guy; Taffoni, Giuliano
2010-02-01
The credential delegation protocol allows a client program to delegate a user's credentials to a service so that the service may make requests of other services in the name of that user. The protocol defines a REST service that works alongside other IVO services to enable such delegation in a secure manner. In addition to defining the specifics of the service protocol, this document describes how a delegation service is registered in an IVOA registry along with the services it supports. The specification also explains how one can determine, from a service registration, that it requires the use of a supporting delegation service.
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
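The template idea can be sketched as a common interface between models and the experiment engine, so a model can be swapped without touching the engine. Class and method names here are assumptions for illustration, not the CW's actual API:

```python
class BMIModelTemplate:
    """Common interface every BMI computational model implements."""
    def initialize(self, config): ...
    def process(self, neural_sample):       # -> motor command
        raise NotImplementedError

class MeanRateModel(BMIModelTemplate):
    """Toy decoder: motor command proportional to mean firing rate."""
    def initialize(self, config):
        self.gain = config.get("gain", 1.0)
    def process(self, neural_sample):
        return self.gain * sum(neural_sample) / len(neural_sample)

class ExperimentEngine:
    """Coordinates capture and processing; knows only the template."""
    def __init__(self, model, config):
        self.model = model
        model.initialize(config)
    def step(self, sample):
        return self.model.process(sample)

engine = ExperimentEngine(MeanRateModel(), {"gain": 2.0})
cmd = engine.step([10.0, 20.0, 30.0])
```

Replacing MeanRateModel with any other subclass leaves the engine code unchanged, which is the "easy addition, removal or replacement of models" property described above.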
Running key mapping in a quantum stream cipher by the Yuen 2000 protocol
NASA Astrophysics Data System (ADS)
Shimizu, Tetsuya; Hirota, Osamu; Nagasako, Yuki
2008-03-01
A quantum stream cipher by the Yuen 2000 protocol (so-called Y00 protocol or αη scheme), consisting of a linear feedback shift register with a short key, is very attractive for implementing secure 40 Gbit/s optical data transmission, which is expected for next-generation networks. However, a basic model of the Y00 protocol with a very short key needs a careful design against fast correlation attacks, as pointed out by Donnet. This Brief Report clarifies the effectiveness of irregular mapping between the running key and physical signals in the driver for selection of the M-ary basis in the transmitter, and gives a design method. Consequently, a quantum stream cipher by the Y00 protocol with our mapping has immunity against the proposed fast correlation attacks on a basic model of the Y00 protocol even if the key is very short.
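The essence of irregular mapping can be illustrated with a secret permutation from running-key symbols to basis indices: a monotone mapping leaves the key/basis correlation that fast correlation attacks exploit, while a permutation scrambles it. M and the permutation below are illustrative, not the paper's design:

```python
import random

M = 16                                   # number of signal bases
rng = random.Random(42)                  # stands in for shared secret state
PERM = list(range(M))
rng.shuffle(PERM)                        # fixed, secret, irregular mapping

def basis(running_key_symbol):
    """Select the M-ary basis via the irregular (permuted) mapping."""
    return PERM[running_key_symbol % M]

regular = [k % M for k in range(M)]      # monotone mapping, easily correlated
irregular = [basis(k) for k in range(M)] # same symbols, scrambled structure
```

Both mappings are bijective (every basis is still used), but the irregular one removes the simple ordering between running-key value and transmitted basis.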
Cabrera-Aguilera, Ignacio; Rizo-Roca, David; Marques, Elisa A; Santocildes, Garoa; Pagès, Teresa; Viscor, Gines; Ascensão, António A; Magalhães, José; Torrella, Joan Ramon
2018-06-29
Cabrera-Aguilera, Ignacio, David Rizo-Roca, Elisa A. Marques, Garoa Santocildes, Teresa Pagès, Gines Viscor, António A. Ascensão, José Magalhães, and Joan Ramon Torrella. Additive effects of intermittent hypobaric hypoxia and endurance training on bodyweight, food intake, and oxygen consumption in rats. High Alt Med Biol 00:000-000, 2018. We used an animal model to elucidate the effects of an intermittent hypobaric hypoxia (IHH) and endurance exercise training (EET) protocol on bodyweight (BW), food and water intake, and oxygen consumption. Twenty-eight young adult male rats were divided into four groups: normoxic sedentary (NS), normoxic exercised (NE), hypoxic sedentary (HS), and hypoxic exercised (HE). Normoxic groups were maintained at an atmospheric pressure equivalent to sea level, whereas the IHH protocol consisted of 5 hours per day for 33 days at a simulated altitude of 6000 m. Exercised groups ran in normobaric conditions on a treadmill for 1 hour/day for 5 weeks at a speed of 25 m/min. At the end of the protocol, both hypoxic groups showed significant decreases in BW from the ninth day of exposure, reaching final 10% (HS) to 14.5% (HE) differences when compared with NS. NE rats also showed a significant weight reduction after the 19th day, with a decrease of 7.4%. The BW of hypoxic animals was related to significant hypophagia elicited by IHH exposure (from 8% to 12%). In contrast, EET had no effect on food ingestion. Total water intake was not affected by hypoxia but was significantly increased by exercise. An analysis of oxygen consumption at rest (mL O2/[kg·min]) revealed two findings: a significant decrease in both hypoxic groups after the protocol (HS, 21.7 ± 0.70 vs. 19.1 ± 0.78 and HE, 22.8 ± 0.80 vs. 17.1 ± 0.90) and a significant difference at the end of the protocol between NE (21.3 ± 0.77) and HE (17.1 ± 0.90).
These results demonstrate that IHH and EET had an additive effect on BW loss, providing evidence that rats underwent a metabolic adaptation through a reduction in oxygen consumption measured under normoxic conditions. These data suggest that the combination of IHH and EET could serve as an alternative treatment for the management of overweight and obesity.
Swartz, Elliot W; Baek, Jaeyun; Pribadi, Mochtar; Wojta, Kevin J; Almeida, Sandra; Karydas, Anna; Gao, Fen-Biao; Miller, Bruce L; Coppola, Giovanni
2016-11-01
Induced pluripotent stem cells (iPSCs) offer an unlimited resource of cells to be used for the study of the underlying molecular biology of disease, therapeutic drug screening, and transplant-based regenerative medicine. However, methods for the directed differentiation of skeletal muscle for these purposes remain scarce and incomplete. Here, we present a novel, small molecule-based protocol for the generation of multinucleated skeletal myotubes using eight independent iPSC lines. Through combinatorial inhibition of phosphoinositide 3-kinase (PI3K) and glycogen synthase kinase 3β (GSK3β) with addition of bone morphogenic protein 4 (BMP4) and fibroblast growth factor 2 (FGF2), we report up to 64% conversion of iPSCs into the myogenic program by day 36, as indicated by MYOG+ cell populations. These cells began to exhibit spontaneous contractions as early as 34 days in vitro in the presence of a serum-free medium formulation. We used this protocol to obtain iPSC-derived muscle cells from frontotemporal dementia (FTD) patients harboring C9orf72 hexanucleotide repeat expansions (rGGGGCC), sporadic FTD, and unaffected controls. iPSCs derived from rGGGGCC carriers contained RNA foci but did not vary in differentiation efficiency when compared to unaffected controls, nor did they display mislocalized TDP-43 after as many as 120 days in vitro. This study presents a rapid, efficient, and transgene-free method for generating multinucleated skeletal myotubes from iPSCs and a resource for further modeling the role of skeletal muscle in amyotrophic lateral sclerosis and other motor neuron diseases. Protocols to produce skeletal myotubes for disease modeling or therapy are scarce and incomplete. The present study efficiently generates functional skeletal myotubes from human induced pluripotent stem cells using a small molecule-based approach. Using this strategy, terminal myogenic induction of up to 64% in 36 days and spontaneously contractile myotubes within 34 days were achieved.
Myotubes derived from patients carrying the C9orf72 repeat expansion show no change in differentiation efficiency and normal TDP-43 localization after as many as 120 days in vitro when compared to unaffected controls. This study provides an efficient, novel protocol for the generation of skeletal myotubes from human induced pluripotent stem cells that may serve as a valuable tool in drug discovery and modeling of musculoskeletal and neuromuscular diseases. ©AlphaMed Press.
Template-based protein-protein docking exploiting pairwise interfacial residue restraints.
Xue, Li C; Rodrigues, João P G L M; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J
2017-05-01
Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, provided a reliable template is available. However, there are many different ways to exploit template information in the modeling process. Here, we systematically evaluate and benchmark a TBM method that uses conserved interfacial residue pairs as docking distance restraints [referred to as alpha carbon-alpha carbon (CA-CA)-guided docking]. We compare it with two other template-based protein-protein modeling approaches: a conserved non-pairwise interfacial residue restrained docking approach [referred to as the ambiguous interaction restraint (AIR)-guided docking] and a simple superposition-based modeling approach. Our results show that, for most cases, the CA-CA-guided docking method outperforms both superposition with refinement and the AIR-guided docking method. We emphasize the superiority of the CA-CA-guided docking on cases with medium to large conformational changes, and interactions mediated through loops, tails or disordered regions. Our results also underscore the importance of a proper refinement of superimposition models to reduce steric clashes. In summary, we provide a benchmarked TBM protocol that uses conserved pairwise interface distances as restraints in generating realistic 3D protein-protein interaction models, when reliable templates are available. The described CA-CA-guided docking protocol is based on the HADDOCK platform, which allows users to incorporate additional prior knowledge of the target system to further improve the quality of the resulting models. © The Author 2016. Published by Oxford University Press.
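The restraint-extraction step can be sketched as follows: interfacial residue pairs are those whose template CA-CA distance falls under a cutoff, and the measured distances become the docking restraints. Coordinates, residue numbers and the 8 Å cutoff below are toy values for illustration:

```python
import math

def interface_pairs(template_ca, cutoff=8.0):
    """Residue pairs (one per chain) whose template CA-CA distance is
    below the cutoff, returned with the distance to use as a restraint."""
    pairs = []
    for ra, xyz_a in template_ca["A"].items():
        for rb, xyz_b in template_ca["B"].items():
            d = math.dist(xyz_a, xyz_b)
            if d < cutoff:
                pairs.append((ra, rb, round(d, 2)))
    return pairs

# Toy template: two CA atoms per chain, orthogonal coordinates in Angstrom.
template = {
    "A": {10: (0.0, 0.0, 0.0), 11: (3.8, 0.0, 0.0)},
    "B": {55: (5.0, 0.0, 0.0), 90: (30.0, 0.0, 0.0)},
}
restraints = interface_pairs(template)   # fed to docking as distance restraints
```

Only conserved pairs near the template interface survive the cutoff; in the benchmarked protocol these pairwise distances then restrain the HADDOCK docking run.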
Addition of senna improves quality of colonoscopy preparation with magnesium citrate.
Vradelis, Stergios; Kalaitzakis, Evangelos; Sharifi, Yalda; Buchel, Otto; Keshav, Satish; Chapman, Roger W; Braden, Barbara
2009-04-14
To prospectively investigate the effectiveness and patient's tolerance of two low-cost bowel cleansing preparation protocols based on magnesium citrate only or the combination of magnesium citrate and senna. A total of 342 patients who were referred for colonoscopy underwent a colon cleansing protocol with magnesium citrate alone (n = 160) or magnesium citrate and senna granules (n = 182). The colonoscopist rated the overall efficacy of colon cleansing using an established score on a 4-point scale. Patients were questioned before undergoing colonoscopy for side effects and symptoms during bowel preparation. The percentage of procedures rescheduled because of insufficient colon cleansing was 7% in the magnesium citrate group and 4% in the magnesium citrate/senna group (P = 0.44). Adequate visualization of the colonic mucosa was rated superior under the citramag/senna regimen (P = 0.004). Both regimens were well tolerated, and did not significantly differ in the occurrence of nausea, bloating or headache. However, abdominal cramps were observed more often under the senna protocol (29.2%) compared to the magnesium citrate only protocol (9.9%, P < 0.0003). The addition of senna to the bowel preparation protocol with magnesium citrate significantly improves the cleansing outcome.
Rodrigues, Daniele Bobrowski; Mariutti, Lilian Regina Barros; Mercadante, Adriana Zerlotti
2016-12-07
In vitro digestion methods are a useful approach to predict the bioaccessibility of food components and overcome some limitations or disadvantages associated with in vivo methodologies. Recently, the INFOGEST network published a static method of in vitro digestion with a proposal for assay standardization. The INFOGEST method is not specific for any food component; therefore, we aimed to adapt this method to assess the in vitro bioaccessibility of carotenoids and carotenoid esters in a model fruit (Byrsonima crassifolia). Two additional steps were coupled to the in vitro digestion procedure: centrifugation at 20 000g for the separation of the aqueous phase containing mixed micelles, and exhaustive carotenoid extraction with an organic solvent. The effect of electrolytes, enzymes and bile acids on carotenoid micellarization and stability was also tested. The results were compared with those found with a simpler method that has already been used for carotenoid bioaccessibility analysis. These values were in the expected range for free carotenoids (5-29%), monoesters (9-26%) and diesters (4-28%). In general, the in vitro bioaccessibility of carotenoids assessed by the adapted INFOGEST method was significantly higher (p < 0.05) than that assessed by the simpler protocol, with or without the addition of simulated fluids. Although no trend was observed, differences in bioaccessibility values depended on the carotenoid form (free, monoester or diester), isomerization (Z/E) and the in vitro digestion protocol. To the best of our knowledge, this is the first time that a systematic identification of carotenoid esters by HPLC-DAD-MS/MS after in vitro digestion using the INFOGEST protocol has been carried out.
A new rapid kindling variant for induction of cortical epileptogenesis in freely moving rats
Morales, Juan Carlos; Álvarez-Ferradas, Carla; Roncagliolo, Manuel; Fuenzalida, Marco; Wellmann, Mario; Nualart, Francisco Javier; Bonansco, Christian
2014-01-01
Kindling, one of the most widely used models of experimental epilepsy, is based on daily electrical stimulation of several brain structures. Unlike the classic or slow kindling protocols (SK), the rapid kindling variants (RK) described until now require continuous stimulation at suprathreshold intensities applied directly to the same brain structure used for subsequent electrophysiological and immunohistochemical studies, usually the hippocampus. However, the cellular changes observed in these rapid protocols, such as astrogliosis and neuronal loss, could be due to experimental manipulation rather than to epileptogenesis-related alterations. Here, we developed a new RK protocol in order to generate an improved model of temporal lobe epilepsy (TLE) that allows gradual progression of the epilepsy as well as obtaining an epileptic hippocampus, thus avoiding direct surgical manipulation and electrical stimulation of this structure. The new protocol consists of basolateral amygdala (BLA) stimulation with 10 trains of biphasic pulses (10 s; 50 Hz) per day at 20-min intervals, over 3 consecutive days, using a subconvulsive and subthreshold intensity, which guarantees tissue integrity. The progression of epileptic activity was evaluated in freely moving rats through electroencephalographic (EEG) recordings from the cortex and amygdala, accompanied by synchronized video recordings. Moreover, we assessed the effectiveness of the RK protocol and the establishment of epilepsy by evaluating cellular alterations in hippocampal slices from kindled rats. The RK protocol induced convulsive states similar to those of SK protocols but in 3 days, with a persistently lowered threshold to seizure induction and epileptogenesis-dependent cellular changes in amygdala projection areas.
We concluded that this novel RK protocol introduces a new variant of the chronic epileptogenesis models in freely moving rats, which is faster, highly reproducible and causes minimal cell damage compared with that observed in other experimental models of epilepsy. PMID:25100948
Razaque, Abdul; Elleithy, Khaled
2015-01-01
Robust paradigms are a necessity, particularly for emerging wireless sensor network (WSN) applications. The lack of robust and efficient paradigms causes a reduction in the provision of quality of service (QoS) and additional energy consumption. In this paper, we introduce modular energy-efficient and robust paradigms that involve two archetypes: (1) the operational medium access control (O-MAC) hybrid protocol and (2) the pheromone termite (PT) model. The O-MAC protocol controls overhearing and congestion, increases the throughput, reduces the latency and extends the network lifetime. O-MAC uses an optimized data frame format that reduces the channel access time and provides faster data delivery over the medium. Furthermore, O-MAC uses a novel randomization function that avoids channel collisions. The PT model provides robust routing for single and multiple links and includes two new significant features: (1) determining the packet generation rate to avoid congestion and (2) pheromone sensitivity to determine the link capacity prior to sending the packets on each link. This work advances the state of the art by improving both QoS and energy efficiency. To determine the strength of O-MAC with the PT model, we have generated and simulated a disaster recovery scenario using a network simulator (ns-3.10) that monitors the activities of disaster recovery staff, hospital staff and disaster victims brought into the hospital. Moreover, the proposed paradigm can be used for general-purpose applications. Finally, the QoS metrics of the O-MAC and PT paradigms are evaluated and compared with those of other known hybrid protocols involving MAC and routing features. The simulation results indicate that O-MAC with PT produced better outcomes. PMID:26153768
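The pheromone-sensitive link selection described above can be illustrated with a generic ant/termite-style routing rule: the probability of forwarding on a link grows with both its pheromone level and its estimated capacity. The weighting exponents and function name below are illustrative assumptions, not the paper's calibrated PT model.

```python
import random

def choose_link(links, alpha=1.0, beta=2.0, rng=random):
    """Pick a next-hop link with probability proportional to
    pheromone**alpha * capacity**beta (generic termite/ant-style rule;
    the exponents are illustrative, not the paper's PT parameters).

    links: list of (name, pheromone, capacity) tuples.
    """
    weights = [p ** alpha * c ** beta for _, p, c in links]
    total = sum(weights)
    r = rng.random() * total        # roulette-wheel selection over weights
    for (name, _, _), w in zip(links, weights):
        r -= w
        if r <= 0:
            return name
    return links[-1][0]             # numerical fallback
```

A link with zero pheromone is never chosen while a reinforced alternative exists, which captures the model's bias toward links whose capacity has been sensed before packets are committed to them.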
Jha, Ramesh K; Chakraborti, Subhendu; Kern, Theresa L; Fox, David T; Strauss, Charlie E M
2015-07-01
Structure-based rational mutagenesis for engineering protein functionality has been limited by the scarcity and difficulty of obtaining crystal structures of desired proteins. On the other hand, when high-throughput selection is possible, directed evolution-based approaches for gaining protein functionalities have been random and fortuitous, with limited rationalization. We combine comparative modeling of dimer structures, ab initio loop reconstruction, and ligand docking to select positions for mutagenesis to create a library focused on the ligand-contacting residues. The rationally reduced library requirement enabled conservative control of the substitutions by oligonucleotide synthesis and bounding its size within practical transformation efficiencies (∼10^7 variants). This rational approach was successfully applied to an inducer-binding domain of an Acinetobacter transcription factor (TF), pobR, which shows high specificity for the natural effector molecule, 4-hydroxy benzoate (4HB), but no native response to 3,4-dihydroxy benzoate (34DHB). Selection for mutants with high transcriptional induction by 34DHB was carried out at the single-cell level under flow cytometry (via green fluorescent protein expression under the control of the pobR promoter). Critically, this selection protocol allows both selection for induction and rejection of constitutively active mutants. In addition to gain-of-function for 34DHB induction, the selected mutants also showed enhanced sensitivity and response for 4HB (the native inducer), while no sensitivity was observed for a non-targeted but chemically similar molecule, 2-hydroxy benzoate (2HB). This is a unique application of the Rosetta modeling protocols for library design to engineer a TF. Our approach extends the applicability of the Rosetta redesign protocol into regimes without precise a priori structural information. © 2015 Wiley Periodicals, Inc.
Effect of Mouse Strain in a Model of Chemical-induced Respiratory Allergy
Nishino, Risako; Fukuyama, Tomoki; Watanabe, Yuko; Kurosawa, Yoshimi; Ueda, Hideo; Kosaka, Tadashi
2014-01-01
The inhalation of many types of chemicals is a leading cause of allergic respiratory diseases, and effective protocols are needed for the detection of environmental chemical–related respiratory allergies. In our previous studies, we developed a method for detecting environmental chemical–related respiratory allergens by using a long-term sensitization–challenge protocol involving BALB/c mice. In the current study, we sought to improve our model by characterizing strain-associated differences in respiratory allergic reactions to the well-known chemical respiratory allergen glutaraldehyde (GA). According to our protocol, BALB/c, NC/Nga, C3H/HeN, C57BL/6N, and CBA/J mice were sensitized dermally with GA for 3 weeks and then challenged with intratracheal or inhaled GA at 2 weeks after the last sensitization. The day after the final challenge, all mice were euthanized, and total serum IgE levels were assayed. In addition, immunocyte counts, cytokine production, and chemokine levels in the hilar lymph nodes (LNs) and bronchoalveolar lavage fluids (BALF) were also assessed. In conclusion, BALB/c and NC/Nga mice demonstrated markedly increased IgE reactions. Inflammatory cell counts in BALF were increased in the treated groups of all strains, especially BALB/c, NC/Nga, and CBA/J strains. Cytokine levels in LNs were increased in all treated groups except for C3H/HeN and were particularly high in BALB/c and NC/Nga mice. According to our results, we suggest that BALB/c and NC/Nga are highly susceptible to respiratory allergic responses and therefore are good candidates for use in our model for detecting environmental chemical respiratory allergens. PMID:25048268
Lever, Teresa E.; Braun, Sabrina M.; Brooks, Ryan T.; Harris, Rebecca A.; Littrell, Loren L.; Neff, Ryan M.; Hinkel, Cameron J.; Allen, Mitchell J.; Ulsas, Mollie A.
2015-01-01
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy level capability of the fluoroscopy system will determine which swallow parameters can be investigated. Most research centers have high energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, which is nearly double the number obtained using conventional (i.e., high energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high energy fluoroscopes to investigate murine disease models. PMID:25866882
3D printing of versatile reactionware for chemical synthesis.
Kitson, Philip J; Glatzel, Stefan; Chen, Wei; Lin, Chang-Gen; Song, Yu-Fei; Cronin, Leroy
2016-05-01
In recent decades, 3D printing (also known as additive manufacturing) techniques have moved beyond their traditional applications in the fields of industrial manufacturing and prototyping to increasingly find roles in scientific research contexts, such as synthetic chemistry. We present a general approach for the production of bespoke chemical reactors, termed reactionware, using two different approaches to extrusion-based 3D printing. This protocol describes the printing of an inert polypropylene (PP) architecture with the concurrent printing of soft material catalyst composites, using two different 3D printer setups. The steps of the PROCEDURE describe the design and preparation of a 3D digital model of the desired reactionware device and the preparation of this model for use with fused deposition modeling (FDM) type 3D printers. The protocol then further describes the preparation of composite catalyst-silicone materials for incorporation into the 3D-printed device and the steps required to fabricate a reactionware device. This combined approach allows versatility in the design and use of reactionware based on the specific needs of the experimental user. To illustrate this, we present a detailed procedure for the production of one such reactionware device that will result in the production of a sealed reactor capable of effecting a multistep organic synthesis. Depending on the design time of the 3D model, and including time for curing and drying of materials, this procedure can be completed in ∼3 d.
NASA Astrophysics Data System (ADS)
Koran, John J., Jr.; Koran, Mary Lou
In a study designed to explore the effects of teacher anxiety and modeling on acquisition of a science teaching skill and concomitant student performance, 69 preservice secondary teachers and 295 eighth grade students were randomly assigned to microteaching sessions. Prior to microteaching, teachers were given an anxiety test, then randomly assigned to one of three treatments: a transcript model, a protocol model, or a control condition. Subsequently, both teacher and student performance were assessed using written and behavioral measures. Analysis of variance indicated that subjects in the two modeling treatments significantly exceeded performance of control group subjects on all measures of the dependent variable, with the protocol model being generally superior to the transcript model. The differential effects of the modeling treatments were further reflected in student performance. Regression analysis of aptitude-treatment interactions indicated that teacher anxiety scores interacted significantly with instructional treatments, with high-anxiety teachers performing best in the protocol modeling treatment. Again, this interaction was reflected in student performance, where students taught by highly anxious teachers performed significantly better when their teachers had received the protocol model. These results were discussed in terms of teacher concerns and a memory model of the effects of anxiety on performance.
Variability among electronic cigarettes in the pressure drop, airflow rate, and aerosol production.
Williams, Monique; Talbot, Prue
2011-12-01
This study investigated the performance of electronic cigarettes (e-cigarettes), compared different models within a brand, compared identical copies of the same model within a brand, and examined performance using different protocols. Airflow rate required to generate aerosol, pressure drop across e-cigarettes, and aerosol density were examined using three different protocols. First 10 puff protocol: the airflow rate required to produce aerosol and aerosol density varied among brands, while pressure drop varied among brands and between the same model within a brand. Total air hole area correlated with pressure drop for some brands. Smoke-out protocol: e-cigarettes within a brand generally performed similarly when puffed to exhaustion; however, there was considerable variation between brands in pressure drop, airflow rate required to produce aerosol, and the total number of puffs produced. With this protocol, aerosol density varied significantly between puffs and gradually declined. Consecutive trial protocol: two copies of one model were subjected to 11 puffs in three consecutive trials with breaks between trials. One copy performed similarly in each trial, while the second copy of the same model produced little aerosol during the third trial. The different performance properties of the two units were attributed to the atomizers. There was significant variability between and within brands in the airflow rate required to produce aerosol, pressure drop, length of time cartridges lasted, and production of aerosol. Variation in performance properties within brands suggests a need for better quality control during e-cigarette manufacture.
Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P
2018-05-10
This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole-of-site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days-since-admission factor will be used to examine non-linear growth trajectories to determine whether they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals. The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547).
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Bravo, Rafael; Axelrod, David E
2013-11-18
Normal colon crypts consist of stem cells, proliferating cells, and differentiated cells. Abnormal rates of proliferation and differentiation can initiate colon cancer. We have measured the variation in the number of each of these cell types in multiple crypts in normal human biopsy specimens. This has provided the opportunity to produce a calibrated computational model that simulates cell dynamics in normal human crypts, and, by changing model parameter values, to simulate the initiation and treatment of colon cancer. An agent-based model of stochastic cell dynamics in human colon crypts was developed in the multi-platform open-source application NetLogo. It was assumed that each cell's probability of proliferation and probability of death is determined by its position in two gradients along the crypt axis: a divide gradient and a die gradient. A cell's type is not intrinsic, but rather is determined by its position in the divide gradient. Cell types are dynamic, plastic, and inter-convertible. Parameter values were determined for the shape of each of the gradients, and for a cell's response to the gradients. This was done by parameter sweeps that indicated the values that reproduced the measured number and variation of each cell type, and produced quasi-stationary stochastic dynamics. The behavior of the model was verified by its ability to reproduce the experimentally observed monoclonal conversion by neutral drift, the formation of adenomas resulting from mutations either at the top or bottom of the crypt, and by the robust ability of crypts to recover from perturbation by cytotoxic agents. One use of the virtual crypt model was demonstrated by evaluating different cancer chemotherapy and radiation scheduling protocols. A virtual crypt has been developed that simulates the quasi-stationary stochastic cell dynamics of normal human colon crypts.
It is unique in that it has been calibrated with measurements of human biopsy specimens, and it can simulate the variation of cell types in addition to the average number of each cell type. The utility of the model was demonstrated with in silico experiments that evaluated cancer therapy protocols. The model is available for others to conduct additional experiments.
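The gradient idea at the heart of such an agent-based crypt model can be sketched in a few lines: each cell's divide and die probabilities are functions of its position along the crypt axis. The linear gradient shapes and scaling below are illustrative assumptions, not the calibrated values of the published NetLogo model.

```python
import random

def step(cells, crypt_height=20, rng=random):
    """One update of a toy crypt: each cell divides, dies, or persists
    with probabilities set by its position along the crypt axis (the
    'divide' and 'die' gradients; linear shapes here are illustrative).

    cells: list of integer positions in [0, crypt_height].
    """
    new_cells = []
    for pos in cells:
        p_divide = 0.5 * (1.0 - pos / crypt_height)  # high near the crypt base
        p_die = 0.5 * pos / crypt_height             # high near the crypt top
        r = rng.random()
        if r < p_divide:
            # daughter placed one step up the axis, original retained
            new_cells += [pos, min(pos + 1, crypt_height)]
        elif r < p_divide + p_die:
            continue                                 # cell dies / is shed
        else:
            new_cells.append(pos)                    # cell persists in place
    return new_cells
```

Iterating `step` from an initial population and sweeping the gradient parameters is the kind of experiment the calibrated model performs against measured cell-type counts.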
Overview of UN/ECE LRTAP protocols on POPs and heavy metals
The purpose of this workshop was to review the current state-of-the-science for persistent organic pollutants and heavy metal compounds, especially additional developments since the conclusion of the negotiations of the Protocols on these compounds under the Convention on Long Ra...
21 CFR 660.46 - Samples; protocols; official release.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Samples; protocols; official release. 660.46 Section 660.46 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface...
Routing Protocols in Wireless Sensor Networks
Villalba, Luis Javier García; Orozco, Ana Lucila Sandoval; Cabrera, Alicia Triviño; Abbas, Cláudia Jacy Barenco
2009-01-01
The applications of wireless sensor networks comprise a wide variety of scenarios. In most of them, the network is composed of a significant number of nodes deployed in an extensive area in which not all nodes are directly connected. Then, the data exchange is supported by multihop communications. Routing protocols are in charge of discovering and maintaining the routes in the network. However, the appropriateness of a particular routing protocol mainly depends on the capabilities of the nodes and on the application requirements. This paper presents a review of the main routing protocols proposed for wireless sensor networks. Additionally, the paper includes the efforts carried out by Spanish universities on developing optimization techniques in the area of routing protocols for wireless sensor networks. PMID:22291515
Scarani, Valerio; Renner, Renato
2008-05-23
We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as Bennett-Brassard 1984 and the six-state protocol. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N ≈ 10^5 signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.
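The finite-size effect behind the N ≈ 10^5 threshold can be illustrated numerically: statistical fluctuations in parameter estimation and privacy-amplification overheads shrink as the block size grows, so the extractable rate turns positive only beyond some N. The formula below is a rough Devetak-Winter-style sketch with generic correction terms, not the paper's exact bound; the sifting factor and epsilon choices are assumptions.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def finite_key_rate(N, qber, f_ec=1.1, eps=1e-10):
    """Rough finite-size secret-key rate per signal for a BB84-like
    protocol (illustrative correction terms, not the paper's bound).

    N: total signals exchanged; half are assumed kept after sifting.
    """
    n = N // 2                                    # sifted signals
    mu = math.sqrt(math.log(1 / eps) / (2 * n))   # parameter-estimation fluctuation
    delta = 7 * math.sqrt(math.log2(2 / eps) / n) + 2 * math.log2(1 / eps) / n
    rate = 1 - h2(qber + mu) - f_ec * h2(qber) - delta
    return (n / N) * rate
```

With a 1% error rate, this toy bound is comfortably positive at N = 10^6 and negative at N = 10^3, reproducing the qualitative behavior described in the abstract.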
Reliable WDM multicast in optical burst-switched networks
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun
2000-09-01
In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol for optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols that have been developed for IP networks at the transport (or application) layer may incur heavy overheads such as a large number of duplicate retransmissions. In addition, it may take a longer time for an end host to detect and then recover from burst dropping (loss) occurring at the WDM layer. For efficiency reasons, we propose burst-loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller when the WDM switch has more than one downstream node on the WDM multicast tree: subcasting and maintaining burst states. We show that these additional functions are simple to implement and that the overhead associated with them is manageable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, M Analisa; Uribe, Eva C; Sandoval, Marisa N
2009-01-01
In 2008 a joint team from Los Alamos National Laboratory (LANL) and Brookhaven National Laboratory (BNL), consisting of specialists in training of IAEA inspectors in the use of complementary access activities, formulated a training program to prepare the U.S. DOE laboratories for the entry into force of the Additional Protocol. As a major part of the support of the activity, LANL summer interns provided open source information analysis to the LANL-BNL mock inspection team. They were a part of the Next Generation Safeguards Initiative's (NGSI) summer intern program aimed at producing the next generation of safeguards specialists. This paper describes how they used open source information to 'backstop' the LANL-BNL team's effort to construct meaningful Additional Protocol Complementary Access training scenarios for each of the three DOE laboratories: Lawrence Livermore National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory.
Design and Evaluation of Complex Moving HIFU Treatment Protocols
NASA Astrophysics Data System (ADS)
Kargl, Steven G.; Andrew, Marilee A.; Kaczkowski, Peter J.; Brayman, Andrew A.; Crum, Lawrence A.
2005-03-01
The use of moving high-intensity focused ultrasound (HIFU) treatment protocols is of interest in achieving efficient formation of large-volume thermal lesions in tissue. Judicious protocol design is critical in order to avoid collateral damage to healthy tissues outside the treatment zone. A KZK-BHTE model, extended to simulate multiple, moving scans in tissue, is used to investigate protocol design considerations. Predictions and experimental observations are presented which (1) validate the model, (2) illustrate how to assess the effects of acoustic nonlinearity, and (3) demonstrate how to assess and control collateral damage such as prefocal lesion formation and lesion formation resulting from thermal conduction without direct HIFU exposure. Experimental data consist of linear and circular scan protocols delivered over a range of exposure regimes in ex vivo bovine liver.
Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse
2012-05-01
Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model, with the help of a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedures enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.
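The average percentage error used above to summarize the calibrated model's predictive accuracy can be computed as in this minimal sketch. The abstract does not give the study's exact formula, so the per-point normalization by the observed value is an assumption, and the numbers are purely illustrative:

```python
def average_percentage_error(observed, predicted):
    """Average percentage error (APE) across paired measurements and
    model predictions, expressed as a percentage."""
    if not observed or len(observed) != len(predicted):
        raise ValueError("need equal-length, non-empty sequences")
    return 100.0 * sum(abs(o - p) / abs(o)
                       for o, p in zip(observed, predicted)) / len(observed)

# Illustrative numbers only: two effluent parameters, measured vs. simulated.
ape = average_percentage_error([100.0, 80.0], [90.0, 88.0])  # 10.0
```

An APE in the 0-20% band, as reported in the study, would thus mean predictions deviate from measurements by at most one fifth on average.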
Modeling Structure and Dynamics of Protein Complexes with SAXS Profiles
Schneidman-Duhovny, Dina; Hammel, Michal
2018-01-01
Small-angle X-ray scattering (SAXS) is an increasingly common and useful technique for structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can be utilized in a variety of molecular modeling applications, such as comparing solution and crystal structures, structural characterization of flexible proteins, assembly of multi-protein complexes, and modeling of missing regions in the high-resolution structure. Here, we describe protocols for modeling atomic structures based on SAXS profiles. The first protocol is for comparing solution and crystal structures including modeling of missing regions and determination of the oligomeric state. The second protocol performs multi-state modeling by finding a set of conformations and their weights that fit the SAXS profile starting from a single-input structure. The third protocol is for protein-protein docking based on the SAXS profile of the complex. We describe the underlying software, followed by demonstrating their application on interleukin 33 (IL33) with its primary receptor ST2 and DNA ligase IV-XRCC4 complex. PMID:29605933
Protocol for Communication Networking for Formation Flying
NASA Technical Reports Server (NTRS)
Jennings, Esther; Okino, Clayton; Gao, Jay; Clare, Loren
2009-01-01
An application-layer protocol and a network architecture have been proposed for data communications among multiple autonomous spacecraft that are required to fly in a precise formation in order to perform scientific observations. The protocol could also be applied to other autonomous vehicles operating in formation, including robotic aircraft, robotic land vehicles, and robotic underwater vehicles. A group of spacecraft or other vehicles to which the protocol applies could be characterized as a precision-formation-flying (PFF) network, and each vehicle could be characterized as a node in the PFF network. In order to support precise formation flying, it would be necessary to establish a corresponding communication network, through which the vehicles could exchange position and orientation data and formation-control commands. The communication network must enable communication during early phases of a mission, when little positional knowledge is available. Particularly during early mission phases, the distances among vehicles may be so large that communication could be achieved only by relaying across multiple links. The large distances and need for omnidirectional coverage would limit communication links to operation at low bandwidth during these mission phases. Once the vehicles were in formation and distances were shorter, the communication network would be required to provide high-bandwidth, low-jitter service to support tight formation-control loops. The proposed protocol and architecture, intended to satisfy the aforementioned and other requirements, are based on a standard layered-reference-model concept. The proposed application protocol would be used in conjunction with conventional network, data-link, and physical-layer protocols. The proposed protocol includes the ubiquitous Institute of Electrical and Electronics Engineers (IEEE) 802.11 medium access control (MAC) protocol to be used in the data-link layer.
In addition to its widespread and proven use in diverse local-area networks, this protocol offers both (1) a random-access mode needed for the early PFF deployment phase and (2) a time-bounded-services mode needed during PFF-maintenance operations. Switching between these two modes could be controlled by upper-layer entities using standard link-management mechanisms. Because the early deployment phase of a PFF mission can be expected to involve multihop relaying to achieve network connectivity (see figure), the proposed protocol includes the open shortest path first (OSPF) network protocol that is commonly used in the Internet. Each spacecraft in a PFF network would be in one of seven distinct states as the mission evolved from initial deployment, through coarse formation, and into precise formation. Reconfiguration of the formation to perform different scientific observations would also cause state changes among the network nodes. The application protocol provides for recognition and tracking of the seven states for each node and for protocol changes under specified conditions to adapt the network and satisfy communication requirements associated with the current PFF mission phase. Except during early deployment, when peer-to-peer random-access discovery methods would be used, the application protocol provides for operation in a centralized manner.
Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol
NASA Technical Reports Server (NTRS)
Huang, Xiaowan; Singh, Anu; Smolka, Scott A.
2010-01-01
We use the UPPAAL model checker for Timed Automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN). The TPSN protocol seeks to provide network-wide synchronization of the distributed clocks in a sensor network. Clock-synchronization algorithms for sensor networks such as TPSN must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of Timed Automata. To overcome this formal-modeling obstacle, we augment the UPPAAL specification language with the integer clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN protocol. With this integer-clock-based model of TPSN in hand, we use UPPAAL to verify that the protocol achieves network-wide time synchronization and is devoid of deadlock. We also use the UPPAAL Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.
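The clock arithmetic that TPSN performs on integer clock values rests on a standard two-way timestamp exchange between a node and its parent in the synchronization hierarchy. A minimal sketch of that pair-wise step (function name and tick values are illustrative; the UPPAAL encoding itself is not reproduced here):

```python
def tpsn_offset_and_delay(t1, t2, t3, t4):
    """Pair-wise synchronization step of the kind TPSN uses: node A
    timestamps a request at t1, node B receives it at t2 and replies
    at t3, and A receives the reply at t4 (t2, t3 read from B's clock;
    t1, t4 from A's). Returns (offset of B's clock relative to A's,
    one-way propagation delay), assuming symmetric link delays."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: B's clock runs 5 ticks ahead of A's, one-way delay is 2 ticks.
# A sends at 100 (arrives at A-time 102 = B-time 107); B replies at B-time
# 110 (= A-time 105), which reaches A at 107.
offset, delay = tpsn_offset_and_delay(t1=100, t2=107, t3=110, t4=107)
# offset == 5.0, delay == 2.0
```

In the integer-clock formulation, t1 through t4 are exactly the periodically incremented integer variables the paper introduces, which is what makes this arithmetic expressible in UPPAAL.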
MR efficiency using automated MRI-desktop eProtocol
NASA Astrophysics Data System (ADS)
Gao, Fei; Xu, Yanzhe; Panda, Anshuman; Zhang, Min; Hanson, James; Su, Congzhe; Wu, Teresa; Pavlicek, William; James, Judy R.
2017-03-01
MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters etc.). In Mayo Clinic Arizona (MCA), there are over 900 MR protocols (ranging across neuro, body, cardiac, breast etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given different vendors (Siemens, GE etc.). This is a universal problem faced by all hospitals and/or medical research institutions. To increase the efficiency of the MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM compliant images and provides a user-friendly interface for the technologists to create, edit and update the protocols. Advanced operations such as protocol migration from scanner to scanner and the capability to upload multimedia content were also implemented. To the best of our knowledge, eProtocol is the first MR protocol automated management tool used clinically. It is expected that this platform will significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations and fewer acquisition errors. These protocol instructions will be readily available to the technologists during scans. In addition, this web-based platform can be extended to other imaging modalities such as CT, Mammography, and Interventional Radiology and to different vendors for imaging protocol management.
Field validation of protocols developed to evaluate in-line mastitis detection systems.
Kamphuis, C; Dela Rue, B T; Eastwood, C R
2016-02-01
This paper reports on a field validation of previously developed protocols for evaluating the performance of in-line mastitis-detection systems. The protocols outlined 2 requirements of these systems: (1) to detect cows with clinical mastitis (CM) promptly and accurately to enable timely and appropriate treatment and (2) to identify cows with high somatic cell count (SCC) to manage bulk milk SCC levels. Gold standard measures, evaluation tests, performance measures, and performance targets were proposed. The current study validated the protocols on commercial dairy farms with automated in-line mastitis-detection systems using both electrical conductivity (EC) and SCC sensor systems that both monitor at whole-udder level. The protocol for requirement 1 was applied on 3 commercial farms. For requirement 2, the protocol was applied on 6 farms; 3 of them had low bulk milk SCC (128×10^3 cells/mL) and were the same farms as used for field evaluation of requirement 1. Three farms with high bulk milk SCC (270×10^3 cells/mL) were additionally enrolled. The field evaluation methodology and results were presented at a workshop including representation from 7 international suppliers of in-line mastitis-detection systems. Feedback was sought on the acceptance of standardized performance evaluation protocols and recommended refinements to the protocols. Although the methodology for requirement 1 was relatively labor intensive and required organizational skills over an extended period, no major issues were encountered during the field validation of both protocols. The validation, thus, proved the protocols to be practical. Also, no changes to the data collection process were recommended by the technology supplier representatives. 
However, 4 recommendations were made to refine the protocols: inclusion of an additional analysis that ignores small (low-density) clot observations in the definition of CM, extension of the time window from 4 to 5 milkings for timely alerts for CM, setting a maximum number of 10 milkings for the time window to detect a CM episode, and presentation of sensitivity for a larger range of false alerts per 1,000 milkings replacing minimum performance targets. The recommended refinements are discussed with suggested changes to the original protocols. The information presented is intended to inform further debate toward achieving international agreement on standard protocols to evaluate performance of in-line mastitis-detection systems. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
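The two performance measures at the heart of these evaluation protocols, sensitivity for clinical-mastitis episodes and false alerts per 1,000 milkings, can be computed as in this sketch. The function name and the example counts are hypothetical; only the measures themselves come from the text above:

```python
def detection_performance(true_positive_alerts, total_cm_episodes,
                          false_alerts, total_milkings):
    """Performance measures of the kind used in the evaluation
    protocols: sensitivity (% of clinical-mastitis episodes that
    triggered a timely alert) and the false-alert rate expressed
    per 1,000 milkings."""
    sensitivity = 100.0 * true_positive_alerts / total_cm_episodes
    false_alert_rate = 1000.0 * false_alerts / total_milkings
    return sensitivity, false_alert_rate

# Hypothetical season on one farm: 16 of 20 CM episodes alerted,
# 45 false alerts over 30,000 milkings.
sens, far = detection_performance(16, 20, 45, 30000)
# sens == 80.0 (%), far == 1.5 (false alerts per 1,000 milkings)
```

Reporting sensitivity across a range of false-alert rates, as the fourth recommendation suggests, amounts to tabulating this pair of numbers at several alert-threshold settings rather than at a single operating point.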
Isolated cleft palate requires different surgical protocols depending on cleft type.
Elander, Anna; Persson, Christina; Lilja, Jan; Mark, Hans
2017-08-01
A staged protocol for isolated cleft palate (CPO), comprising the early repair of the soft palate at 6 months and delayed repair of the eventual cleft in the hard palate until 4 years, designed to improve maxillary growth, was introduced. CPO is frequently associated with additional congenital conditions. The study evaluates this surgical protocol for clefts in the soft palate (CPS) and for clefts in the hard and soft palate (CPH), with or without additional malformation, regarding primary and secondary surgical interventions needed for cleft closure and for correction of velopharyngeal insufficiency until 10 years of age. Of 94 consecutive children with CPO, divided into four groups with (+) or without (-) additional malformations (CPS+, CPS-, CPH+, CPH-), hard palate repair was required in 53%, performed with small local flaps in 21% and with bilateral mucoperiosteal flaps in 32%. The total incidence of soft palate re-repair was 2% and that of fistula repair of the hard palate was 5%. The total incidence of secondary velopharyngeal surgery was 17% until 10 years, varying from 0% for CPS- and 15% for CPH- to 28% for CPS+ and 30% for CPH+. The described staged protocol for repair of CPO is found to be safe in terms of perioperative surgical results, with a comparatively low need for secondary interventions. Furthermore, the study indicates that the presence of a cleft in the hard palate and/or additional conditions has a negative impact on the development of the velopharyngeal function.
U.S. Additional Protocol Outreach Program-Tabletop Exercises to Implement the AP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langner, D. C.; Thomas, K. E.; Smith, M. K.
2005-01-01
The Office of International Regimes and Agreement (NA-243) is the lead office in the Department of Energy (DOE) to assist DOE and National Nuclear Security Administration (NNSA) sites in preparing declarations on relevant civilian, nuclear fuel cycle-related research and development activities for the International Atomic Energy Agency (IAEA). This is in accordance with the implementation of the "Protocol Additional to the Agreement between the United States and the International Atomic Energy Agency for the Application of Safeguards in the United States." In preparation for entry-into-force, NA-243 conducted two tabletop exercises under the Additional Protocol Outreach Program. The first one, held in May 2004 at Los Alamos National Laboratory, focused on the factors important to protect national security assets and intellectual property. The other, held in August 2004 at the Idaho National Laboratory, explored the level of detail or granularity for reporting declarable activities. Both tabletops invited participants from the national laboratories and DOE/NNSA organizations. Discussions were based around the process to identify potential declarable activities relating to the nuclear fuel cycle-related R and D projects from the Advanced Fuel Cycle Initiative program. The two tabletop exercises provided recommendations and conclusions that would be helpful to other DOE/NNSA locations preparing for and reporting relevant and concise information to the IAEA under the Additional Protocol. This paper provides details on the events, discussions, observations, and lessons learned from both the LANL and INL tabletop exercises.
A comprehensive constitutive law for waxy crude oil: a thixotropic yield stress fluid.
Dimitriou, Christopher J; McKinley, Gareth H
2014-09-21
Guided by a series of discriminating rheometric tests, we develop a new constitutive model that can quantitatively predict the key rheological features of waxy crude oils. We first develop a series of model crude oils, which are characterized by a complex thixotropic and yielding behavior that strongly depends on the shear history of the sample. We then outline the development of an appropriate preparation protocol for carrying out rheological measurements, to ensure consistent and reproducible initial conditions. We use RheoPIV measurements of the local kinematics within the fluid under imposed deformations in order to validate the selection of a particular protocol. Velocimetric measurements are also used to document the presence of material instabilities within the model crude oil under conditions of imposed steady shearing. These instabilities are a result of the underlying non-monotonic steady flow curve of the material. Three distinct deformation histories are then used to probe the material's constitutive response. These deformations are steady shear, transient response to startup of steady shear with different aging times, and large amplitude oscillatory shear (LAOS). The material response to these three different flows is used to motivate the development of an appropriate constitutive model. This model (termed the IKH model) is based on a framework adopted from plasticity theory and implements an additive strain decomposition into characteristic reversible (elastic) and irreversible (plastic) contributions, coupled with the physical processes of isotropic and kinematic hardening. Comparisons of experimental to simulated response for all three flows show good quantitative agreement, validating the chosen approach for developing constitutive models for this class of materials.
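The additive strain decomposition at the core of the IKH framework can be sketched in generic form. This is an illustrative outline only: the symbols G and q and the particular saturating evolution law below are assumptions of this sketch, not necessarily the paper's exact equations:

```latex
% Total shear strain splits into a recoverable (elastic) and an
% irreversible (plastic) contribution, with stress carried elastically:
\gamma = \gamma_e + \gamma_p, \qquad \sigma = G\,\gamma_e
% Kinematic hardening: a back strain A grows with plastic flow but
% saturates under sustained shearing (Armstrong--Frederick-type form):
\dot{A} = \dot{\gamma}_p - q\,\lvert\dot{\gamma}_p\rvert\,A
```

Coupling such a back-strain variable with an isotropic (thixotropic structure) variable is what lets a model of this type capture both the aging-time-dependent startup overshoot and the yielding behavior described above.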
ERIC Educational Resources Information Center
Aagaard, James S.; And Others
This two-volume document specifies a protocol that was developed using the Reference Model for Open Systems Interconnection (OSI), which provides a framework for communications within a heterogeneous network environment. The protocol implements the features necessary for bibliographic searching, record maintenance, and mail transfer between…
Benson, James D; Benson, Charles T; Critser, John K
2014-08-01
Optimization of cryopreservation protocols for cells and tissues requires accurate models of heat and mass transport. Model selection often depends on the configuration of the tissue. Here, a mathematical and conceptual model of water and solute transport for whole hamster pancreatic islets has been developed and experimentally validated incorporating fundamental biophysical data from previous studies on individual hamster islet cells while retaining whole-islet structural information. It describes coupled transport of water and solutes through the islet by three methods: intracellularly, intercellularly, and in combination. In particular we use domain decomposition techniques to couple a transmembrane flux model with an interstitial mass transfer model. The only significant undetermined variable is the cellular surface area which is in contact with the intercellularly transported solutes, Ais. The model was validated and Ais determined using a 3×3 factorial experimental design blocked for experimental day. Whole islet physical experiments were compared with model predictions at three temperatures, three perfusing solutions, and three islet size groups. A mean of 4.4 islets were compared at each of the 27 experimental conditions and found to correlate with a coefficient of determination of 0.87±0.06 (mean ± SD). Only the treatment variable of perfusing solution was found to be significant (p<0.05). We have devised a model that retains much of the intrinsic geometric configuration of the system, and thus fewer laboratory experiments are needed to determine model parameters and thus to develop new optimized cryopreservation protocols. Additionally, extensions to ovarian follicles and other concentric tissue structures may be made. Copyright © 2014 Elsevier Inc. All rights reserved.
Guedes-da-Silva, F H; Batista, D G J; da Silva, C F; Meuser, M B; Simões-Silva, M R; de Araújo, J S; Ferreira, C G; Moreira, O C; Britto, C; Lepesheva, G I; Soeiro, Maria de Nazaré C
2015-12-01
The lack of translation between preclinical assays and clinical trials for novel therapies for Chagas disease (CD) indicates a need for more feasible and standardized protocols and experimental models. Here, we investigated the effects of treatment with benznidazole (Bz) and with the potent experimental T. cruzi CYP51 inhibitor VNI in mouse models of Chagas disease by using different animal genders and parasite strains and employing distinct types of therapeutic schemes. Our findings confirm that female mice are less vulnerable to the infection than males, show that male models are less susceptible to treatment with both Bz and VNI, and thus suggest that male models are much more suitable for selection of the most promising antichagasic agents. Additionally, we have found that preventive protocols (compound given at 1 dpi) result in higher treatment success rates and thus should be avoided during advanced steps of in vivo trials of novel anti-T. cruzi drug candidates. Another consideration is the relevance of immunosuppression methods in order to verify the therapeutic profile of novel compounds, besides the usefulness of molecular diagnostic tools (quantitative PCR) to ascertain compound efficacy in experimental animals. Our study aims to contribute to the development of more reliable methods and decision gates for in vivo assays of novel antiparasitic compounds in order to move them from preclinical to clinical trials for CD. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
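Removal sampling, one of the protocols named above, induces multinomial cell probabilities of the form pi_j = p(1-p)^(j-1): an individual is first captured on occasion j only if it escaped capture on the previous j-1 occasions. A stdlib-only sketch of those probabilities and of simulating one site's data (function names and parameter values are illustrative, not from the chapter):

```python
import math
import random

def removal_cell_probs(p, n_occasions):
    """Multinomial cell probabilities for removal sampling:
    pi_j = p * (1 - p)**(j - 1) for occasions j = 1..n_occasions.
    The leftover mass (1 - p)**n_occasions is 'never captured'."""
    return [p * (1 - p) ** j for j in range(n_occasions)]

def simulate_removal_counts(lam, p, n_occasions, rng=None):
    """One site's removal counts under a multinomial N-mixture model:
    local abundance N ~ Poisson(lam); on each occasion every remaining
    individual is captured (and removed) with probability p."""
    rng = rng or random.Random(0)
    # Poisson draw via Knuth's multiplication method (stdlib only)
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            n = k
            break
        k += 1
    counts = []
    for _ in range(n_occasions):
        captured = sum(1 for _ in range(n) if rng.random() < p)
        counts.append(captured)
        n -= captured
    return counts

pi = removal_cell_probs(p=0.4, n_occasions=3)        # [0.4, 0.24, 0.144]
counts = simulate_removal_counts(lam=50, p=0.4, n_occasions=3)
```

Because each occasion's count directly informs p through the declining capture pattern, estimates from such data are typically more precise than those from the repeated simple counts of the binomial N-mixture case, as noted above.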
Baumgarten, Keith M; Oliver, Harvey A; Foley, Jack; Chen, Ding-Geng; Autenried, Peter; Duan, Shanzhong; Heiser, Patrick
2013-05-01
There have been few scientific studies that have examined usage of human growth hormone to accelerate recovery from injury. The hypothesis of this study was that human growth hormone would accelerate tendon-to-bone healing compared with control animals treated with placebo in a rat model of acute rotator cuff injury repair. Seventy-two rats underwent repair of acute rotator cuff injuries and were randomized into the following postoperative dosing regimens: placebo, and human growth hormone at 0.1, 1, 2, 5, and 10 mg/kg/day, administered subcutaneously once per day for fourteen days (Protocol 1). An additional twenty-four rats were randomized to receive either (1) placebo or (2) human growth hormone at 5 mg/kg, administered subcutaneously twice per day for seven days preoperatively and twenty-eight days postoperatively (Protocol 2). All rats were killed twenty-eight days postoperatively. Mechanical testing was performed. Ultimate stress, ultimate force, stiffness, energy to failure, and ultimate distension were determined. For Protocol 1, analysis of variance testing showed no significant difference between the groups with regard to ultimate stress, ultimate force, stiffness, energy to failure, or ultimate distension. In Protocol 2, ultimate force to failure was significantly worse in the human growth hormone group compared with the placebo group (21.1 ± 5.85 versus 26.3 ± 5.47 N; p = 0.035). Failure was more likely to occur through the bone than the tendon-bone interface in the human growth hormone group compared with the placebo group (p = 0.001). No significant difference was found for ultimate stress, ultimate force, stiffness, energy to failure, or ultimate distension between the groups in Protocol 2. In this rat model of acute tendon-bone injury repair, daily subcutaneous postoperative human growth hormone treatment for fourteen days failed to demonstrate a significant difference in any biomechanical parameter compared with placebo. 
Furthermore, subcutaneous administration of 5 mg/kg of human growth hormone twice daily from seven days preoperatively until twenty-eight days postoperatively demonstrated lower loads to ultimate failure and a higher risk of bone fracture failure compared with placebo.
NASA Astrophysics Data System (ADS)
Phister, P. W., Jr.
1983-12-01
Development of the Air Force Institute of Technology's Digital Engineering Laboratory Network (DELNET) was continued with the development of an initial draft of a protocol standard for all seven layers as specified by the International Standards Organization's (ISO) Reference Model for Open Systems Interconnection. This effort centered on the restructuring of the Network Layer to perform Datagram routing and to conform to the developed protocol standards, and on the actual software module development of the upper four protocol layers residing within the DELNET Monitor (Zilog MCZ 1/25 Computer System). Within the guidelines of the ISO Reference Model, the Transport Layer was developed utilizing the Internet Header Format (IHF) combined with the Transport Control Protocol (TCP) to create a 128-byte Datagram. Also, a limited Application Layer was created to pass the Gettysburg Address through the DELNET. This study formulated a first draft of the DELNET Protocol Standard and designed, implemented, and tested the Network, Transport, and Application Layers to conform to these protocol standards.
Takahata, Kaori; Horiuchi, Shigeko; Tadokoro, Yuriko; Shuo, Takuya; Sawano, Erika; Shinohara, Kazuyuki
2018-01-01
This preliminary study aimed to 1) determine changes in the salivary oxytocin (OT) level during breast stimulation for promoting the spontaneous onset of labor in low-risk term pregnancies, and 2) clarify the feasibility of the breast stimulation intervention protocol in terms of practicality and acceptability. We used a single arm trial design. Sixteen low-risk pregnant women between 38 and 40 weeks of gestation with cephalic presentation participated. They performed breast stimulation for 3 days with an attendant midwife in a single maternity hospital. Each breast was stimulated for 15 minutes for a total of 1 hour per day. Saliva was collected 10 minutes before the intervention and 15, 30, 60, 75, and 90 minutes after the intervention, yielding 18 samples per woman. Among a total of 282 saliva samples from the 16 participants, OT level was measured in 142 samples (missing rate: 49.6%). The median OT level showed the highest values on day 3 of the breast stimulation, with a marked increase 30 min after the intervention. In the mixed models after multiple imputation for missing data, the OT level on the first day of intervention was significantly lower than that on the third day of intervention. Fatigue from breast stimulation decreased on subsequent days, and most of the women (75%) felt no discomfort with the protocol. Uterine hyperstimulation was not observed. Following a 3-day breast stimulation protocol for spontaneous onset of labor, the mean OT level showed the highest values on day 3. The breast stimulation intervention protocol showed good feasibility in terms of practicality and acceptability among the pregnant women. Additional large-scale studies are warranted to confirm the protocol's effectiveness.
Kolesová, Hana; Čapek, Martin; Radochová, Barbora; Janáček, Jiří; Sedmera, David
2016-08-01
Our goal was to find an optimal tissue clearing protocol for whole-mount imaging of embryonic and adult hearts and whole embryos of transgenic mice that would preserve green fluorescent protein (GFP) fluorescence and permit comparison of different currently available 3D imaging modalities. We tested various published organic solvent- or water-based clearing protocols intended to preserve GFP fluorescence in the central nervous system: tetrahydrofuran dehydration and dibenzylether protocol (DBE), SCALE, CLARITY, and CUBIC, and evaluated their ability to render hearts and whole embryos transparent. The DBE clearing protocol did not preserve GFP fluorescence; in addition, DBE caused considerable tissue-shrinking artifacts compared to the gold standard BABB protocol. The CLARITY method considerably improved tissue transparency at later stages, but also decreased GFP fluorescence intensity. The SCALE clearing resulted in sufficient tissue transparency up to ED12.5; at later stages the useful depth of imaging was limited by tissue light scattering. The best method for the cardiac specimens proved to be the CUBIC protocol, which preserved GFP fluorescence well and cleared the specimens sufficiently even at the adult stages. In addition, CUBIC decolorized the blood and myocardium by removing tissue iron. Good 3D renderings of whole fetal hearts and embryos were obtained with optical projection tomography and selective plane illumination microscopy, although at resolutions lower than with a confocal microscope. Comparison of five tissue clearing protocols and three imaging methods for study of GFP mouse embryos and hearts shows that the optimal method depends on the stage and level of detail required.
Novel monoclonal antibodies to study tissue regeneration in planarians.
Ross, Kelly G; Omuro, Kerilyn C; Taylor, Matthew R; Munday, Roma K; Hubert, Amy; King, Ryan S; Zayas, Ricardo M
2015-01-21
Planarians are an attractive model organism for studying stem cell-based regeneration due to their ability to replace all of their tissues from a population of adult stem cells. The molecular toolkit for planarian studies currently includes the ability to study gene function using RNA interference (RNAi) and observe gene expression via in situ hybridizations. However, there are few antibodies available to visualize protein expression, which would greatly enhance analysis of RNAi experiments as well as allow further characterization of planarian cell populations using immunocytochemistry and other immunological techniques. Thus, additional, easy-to-use, and widely available monoclonal antibodies would be advantageous to study regeneration in planarians. We have created seven monoclonal antibodies by inoculating mice with formaldehyde-fixed cells isolated from dissociated 3-day regeneration blastemas. These monoclonal antibodies can be used to label muscle fibers, axonal projections in the central and peripheral nervous systems, two populations of intestinal cells, ciliated cells, a subset of neoblast progeny, and discrete cells within the central nervous system as well as the regeneration blastema. We have tested these antibodies using eight variations of a formaldehyde-based fixation protocol and determined reliable protocols for immunolabeling whole planarians with each antibody. We found that labeling efficiency for each antibody varies greatly depending on the addition or removal of tissue processing steps that are used for in situ hybridization or immunolabeling techniques. Our experiments show that a subset of the antibodies can be used alongside markers commonly used in planarian research, including anti-SYNAPSIN and anti-SMEDWI, or following whole-mount in situ hybridization experiments. The monoclonal antibodies described in this paper will be a valuable resource for planarian research. 
These antibodies have the potential to be used to better understand planarian biology and to characterize phenotypes following RNAi experiments. In addition, we present alterations to fixation protocols and demonstrate how these changes can increase the labeling efficiencies of antibodies used to stain whole planarians.
In vitro skin models and tissue engineering protocols for skin graft applications.
Naves, Lucas B; Dhand, Chetna; Almeida, Luis; Rajamani, Lakshminarayanan; Ramakrishna, Seeram
2016-11-30
In this review, we present a brief introduction of the skin structure, a concise compilation of skin-related disorders, and a thorough discussion of different in vitro skin models, artificial skin substitutes, skin grafts, and dermal tissue engineering protocols. The advantages of the development of in vitro skin disorder models, such as the UV radiation and prototype model, melanoma model, wound healing model, psoriasis model, and full-thickness model are also discussed. Different types of skin grafts including allografts, autografts, allogeneic, and xenogeneic are described in detail with their associated applications. We also discuss different tissue engineering protocols for the design of various types of skin substitutes and their commercial outcomes. Brief highlights are given of the new generation three-dimensional printed scaffolds for tissue regeneration applications. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Weidner, Christopher; Steinfath, Matthias; Wistorf, Elisa; Oelgeschläger, Michael; Schneider, Marlon R; Schönfelder, Gilbert
2017-08-16
Recent studies that compared transcriptomic datasets of human diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. A major reason for the discrepancies between different gene expression analyses is the arbitrary filtering of differentially expressed genes. Furthermore, the comparison of single genes between different species and platforms is often limited by technical variance, leading to misinterpretation of the concordance/discordance between data from human and animal models. Thus, standardized approaches for systematic data analysis are needed. To overcome subjective gene filtering and ineffective gene-to-gene comparisons, we recently demonstrated that gene set enrichment analysis (GSEA) has the potential to avoid these problems. Therefore, we developed a standardized protocol for the use of GSEA to distinguish between appropriate and inappropriate animal models for translational research. This protocol is not suitable to predict how to design new model systems a priori, as it requires existing experimental omics data. However, the protocol describes how to interpret existing data in a standardized manner in order to select the most suitable animal model, thus avoiding unnecessary animal experiments and misleading translational studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Universal Common Communication Substrate (UCCS) is a low-level communication substrate that exposes high-performance communication primitives while providing network interoperability. It is intended to support multiple upper layer protocols (ULPs) or programming models, including SHMEM, UPC, Titanium, Co-Array Fortran, Global Arrays, MPI, GASNet, and File I/O. It provides various communication operations, including one-sided and two-sided point-to-point, collectives, and remote atomic operations. In addition to operations for ULPs, it provides an out-of-band communication channel typically required to wire up communication libraries.
Protocols for the delivery of small molecules to the two-spotted spider mite, Tetranychus urticae
Nunes, Maria Andreia; Zhurov, Vladimir; Dermauw, Wannes; Osakabe, Masahiro; Van Leeuwen, Thomas; Grbic, Miodrag
2017-01-01
The two-spotted spider mite, Tetranychus urticae, is a chelicerate herbivore with an extremely wide host range and an extraordinary ability to develop pesticide resistance. Due to its responsiveness to natural and synthetic xenobiotics, the spider mite is becoming a prime pest herbivore model for studies of the evolution of host range, plant-herbivore interactions and mechanisms of xenobiotic resistance. The spider mite genome has been sequenced and its transcriptional responses to developmental and various biotic and abiotic cues have been documented. However, to identify biological and evolutionary roles of T. urticae genes and proteins, it is necessary to develop methods for the efficient manipulation of mite gene function or protein activity. Here, we describe protocols developed for the delivery of small molecules into spider mites. Starting with mite maintenance and the preparation of experimental mite populations of developmentally synchronized larvae and adults, we describe three methods for delivery of small molecules: artificial diet, leaf coating, and soaking. The presented results define critical steps in these methods and demonstrate that they can successfully deliver tracer dyes into mites. The described protocols provide guidelines for high-throughput setups for delivery of experimental compounds that could be used in reverse genetics platforms to modulate gene expression or protein activity, or for screens focused on discovery of new molecules for mite control. In addition, the described protocols could be adapted for other Tetranychidae and related species of economic importance such as Varroa, dust and poultry mites. PMID:28686745
DEVELOPMENT OF A RATIONALLY BASED DESIGN PROTOCOL FOR THE ULTRAVIOLET LIGHT DISINFECTION PROCESS
A protocol is demonstrated for the design and evaluation of ultraviolet (UV) disinfection systems based on a mathematical model. The disinfection model incorporates the system's physical dimensions, the residence time distribution of the reactor and dispersion characteristics, th...
Design and Verification of a Distributed Communication Protocol
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.
Cryptanalysis and improvement of a quantum communication-based online shopping mechanism
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Ying-Hui; Jia, Heng-Yue
2015-06-01
Recently, Chou et al. (Electron Commer Res 14:349-367, 2014) presented a novel controlled quantum secure direct communication protocol which can be used for online shopping. The authors claimed that their protocol was immune to attacks from both external eavesdroppers and internal betrayers. However, we find that this protocol is vulnerable to attack from an internal betrayer. In this paper, we analyze the security of this protocol to show that the controller in this protocol is able to eavesdrop on the secret information of the sender (i.e., the customer's shopping information), which indicates that it cannot be used for secure online shopping as the authors expected. Accordingly, an improvement of this protocol, which could resist the controller's attack, is proposed. In addition, we present another protocol which is more appropriate for online shopping. Finally, we discuss in detail how the quantum secure direct communication process differs between regular quantum communication and online shopping.
In silico simulations of experimental protocols for cardiac modeling.
Carro, Jesus; Rodriguez, Jose Felix; Pueyo, Esther
2014-01-01
A mathematical model of the cardiac action potential (AP) involves the sum of different transmembrane ionic currents and the balance of intracellular ionic concentrations. To each ionic current corresponds an equation involving several effects. There are a number of model parameters that must be identified using specific experimental protocols in which the effects are considered independent. However, as model complexity grows, the interaction between effects becomes increasingly important. Therefore, model parameters identified by treating the different effects as independent might be misleading. In this work, a novel methodology is proposed for parameter identification and model validation: in silico simulations of the experimental protocol are performed and the experimental and simulated outcomes are compared. The potential of the methodology is demonstrated by validating voltage-dependent L-type calcium current (ICaL) inactivation in recently proposed human ventricular AP models with different formulations. Our results show large differences between ICaL inactivation as calculated from the model equation and ICaL inactivation from the in silico simulations, due to the interaction between effects and/or to the experimental protocol. Our results suggest that, when proposing any new model formulation, consistency between the formulation and the corresponding experimental data it aims to reproduce needs to be verified first, considering all involved factors.
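The gap this abstract describes, between a parameter read directly off a model equation and the value an experimental protocol would actually report, can be illustrated with a toy inactivation gate (all functional forms and parameters below are hypothetical, not taken from any published AP model):

```python
import math

# Toy gating model (hypothetical parameters, for illustration only):
# an inactivation gate h with voltage-dependent steady state and time constant.
def h_inf(v):
    """Steady-state inactivation read directly from the model equation."""
    return 1.0 / (1.0 + math.exp((v + 30.0) / 6.0))

def tau_h(v):
    """Inactivation time constant in ms (peaks near v = -30 mV)."""
    return 20.0 + 180.0 * math.exp(-((v + 30.0) ** 2) / 500.0)

def simulate_two_pulse(v_pre, t_pre=500.0, v_hold=-80.0, dt=0.01):
    """Emulate a two-pulse inactivation protocol: equilibrate at the
    holding potential, apply a conditioning prepulse of finite duration
    t_pre (ms), and read out h, the 'measured' channel availability."""
    h = h_inf(v_hold)                  # fully equilibrated at v_hold
    t = 0.0
    while t < t_pre:                   # forward-Euler relaxation toward h_inf(v_pre)
        h += dt * (h_inf(v_pre) - h) / tau_h(v_pre)
        t += dt
    return h

for v in (-50.0, -40.0, -30.0, -20.0):
    eq = h_inf(v)                      # value the equation alone predicts
    sim = simulate_two_pulse(v)        # value the simulated protocol reports
    print(f"V={v:6.1f} mV  equation={eq:.3f}  simulated={sim:.3f}")
```

Because the conditioning pulse is finite while the time constant is large near the inactivation midpoint, the simulated protocol reports an availability above the true steady state, which is exactly the kind of equation-versus-protocol discrepancy the methodology is designed to expose.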
Chaimani, Anna; Caldwell, Deborah M; Li, Tianjing; Higgins, Julian P T; Salanti, Georgia
2017-03-01
The number of systematic reviews that aim to compare multiple interventions using network meta-analysis is increasing. In this study, we highlight aspects of a standard systematic review protocol that may need modification when multiple interventions are to be compared. We take the protocol format suggested by Cochrane for a standard systematic review as our reference and compare the considerations for a pairwise review with those required for a valid comparison of multiple interventions. We suggest new sections for protocols of systematic reviews that include network meta-analyses, with a focus on how to evaluate their assumptions. We provide example text from published protocols to exemplify the considerations. Standard systematic review protocols for pairwise meta-analyses need extensions to accommodate the increased complexity of network meta-analysis. Our suggested modifications are widely applicable to both Cochrane and non-Cochrane systematic reviews involving network meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
The importance of the Montreal Protocol in protecting climate.
Velders, Guus J M; Andersen, Stephen O; Daniel, John S; Fahey, David W; McFarland, Mack
2007-03-20
The 1987 Montreal Protocol on Substances that Deplete the Ozone Layer is a landmark agreement that has successfully reduced the global production, consumption, and emissions of ozone-depleting substances (ODSs). ODSs are also greenhouse gases that contribute to the radiative forcing of climate change. Using historical ODSs emissions and scenarios of potential emissions, we show that the ODS contribution to radiative forcing most likely would have been much larger if the ODS link to stratospheric ozone depletion had not been recognized in 1974 and followed by a series of regulations. The climate protection already achieved by the Montreal Protocol alone is far larger than the reduction target of the first commitment period of the Kyoto Protocol. Additional climate benefits that are significant compared with the Kyoto Protocol reduction target could be achieved by actions under the Montreal Protocol, by managing the emissions of substitute fluorocarbon gases and/or implementing alternative gases with lower global warming potentials.
Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Crawford, Aladsair J.; Fuller, Jason C.
The Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems (PNNL-22010) was first issued in November 2012 as a first step toward providing a foundational basis for developing an initial standard for the uniform measurement and expression of energy storage system (ESS) performance. Based on experiences with the application and use of that document, and to include additional ESS applications and associated duty cycles, test procedures, and performance metrics, a first revision of the November 2012 Protocol was issued in June 2014 (PNNL-22010 Rev. 1). This document (the March 2016 revision 2 to the Protocol) is intended to supersede the June 2014 revision 1 and provide a more user-friendly yet more robust and comprehensive basis for measuring and expressing ESS performance.
Point-to-Point Multicast Communications Protocol
NASA Technical Reports Server (NTRS)
Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.
1987-01-01
This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.
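The resource-sharing idea in this abstract, one copy of a packet per shared link rather than one copy per destination, can be sketched on a toy network (the 16-node unidirectional ring and the routing function below are illustrative assumptions, not the paper's actual interconnect or algorithm):

```python
N = 16  # hypothetical ring size

def route(src, dst):
    """Route on a unidirectional ring: the ordered list of links traversed."""
    hops, cur = [], src
    while cur != dst:
        nxt = (cur + 1) % N
        hops.append((cur, nxt))
        cur = nxt
    return hops

def unicast_links(src, dests):
    """One independent copy per destination: links counted with multiplicity."""
    return sum(len(route(src, d)) for d in dests)

def multicast_links(src, dests):
    """Copies share common route prefixes: each link carries the packet once."""
    shared = set()
    for d in dests:
        shared.update(route(src, d))
    return len(shared)

dests = [3, 5, 9]
print("separate unicasts:", unicast_links(0, dests))   # 3 + 5 + 9 = 17 link uses
print("shared multicast: ", multicast_links(0, dests))  # longest route = 9 links
```

On this ring the routes to nodes 3 and 5 are prefixes of the route to node 9, so the multicast uses 9 link transmissions where independent unicasts would use 17; the deadlock-avoidance machinery (packet terminators, selective buffering) described in the abstract is not modeled here.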
Ruseva, T; Marland, E; Szymanski, C; Hoyle, J; Marland, G; Kowalczyk, T
2017-08-01
A key component of California's cap-and-trade program is the use of carbon offsets as compliance instruments for reducing statewide GHG emissions. Under this program, offsets are tradable credits representing real, verifiable, quantifiable, enforceable, permanent, and additional reductions or removals of GHG emissions. This paper focuses on the permanence and additionality standards for offset credits as defined and operationalized in California's Compliance Offset Protocol for U.S. Forest Projects. Drawing on a review of the protocol, interviews, current offset projects, and existing literature, we discuss how additionality and permanence standards relate to project participation and overall program effectiveness. Specifically, we provide an overview of offset credits as compliance instruments in California's cap-and-trade program, the timeline for a forest offset project, and the factors shaping participation in offset projects. We then discuss the implications of permanence and additionality at both the project and program levels. Largely consistent with previous work, we find that stringent standards for permanent and additional project activities can present barriers to participation, but also that there may be a trade-off between project quality and quantity (i.e. levels of participation) when considering overall program effectiveness. We summarize what this implies for California's forest offset program and provide suggestions for improvements in light of potential program diffusion and policy learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Michino, Mayako; Chen, Jianhan; Stevens, Raymond C; Brooks, Charles L
2010-08-01
Building reliable structural models of G protein-coupled receptors (GPCRs) is a difficult task because of the paucity of suitable templates, low sequence identity, and the wide variety of ligand specificities within the superfamily. Template-based modeling is known to be the most successful method for protein structure prediction. However, refinement of homology models to within 1-3 Å Cα RMSD of the native structure remains a major challenge. Here, we address this problem by developing a novel protocol (foldGPCR) for modeling the transmembrane (TM) region of GPCRs in complex with a ligand, aimed to accurately model the structural divergence between the template and target in the TM helices. The protocol is based on predicted conserved inter-residue contacts between the template and target, and exploits an all-atom implicit membrane force field. The placement of the ligand in the binding pocket is guided by biochemical data. The foldGPCR protocol is implemented by a stepwise hierarchical approach, in which the TM helical bundle and the ligand are assembled by simulated annealing trials in the first step, and the receptor-ligand complex is refined with replica exchange sampling in the second step. The protocol is applied to model the human β2-adrenergic receptor (β2AR) bound to carazolol, using contacts derived from the template structure of bovine rhodopsin. Comparison with the X-ray crystal structure of the β2AR shows that our protocol is particularly successful in accurately capturing helix backbone irregularities and helix-helix packing interactions that distinguish rhodopsin from β2AR. © 2010 Wiley-Liss, Inc.
Fortified Anonymous Communication Protocol for Location Privacy in WSN: A Modular Approach
Abuzneid, Abdel-Shakour; Sobh, Tarek; Faezipour, Miad; Mahmood, Ausif; James, John
2015-01-01
A wireless sensor network (WSN) consists of many hosts called sensors. These sensors can sense a phenomenon (motion, temperature, humidity, average, max, min, etc.) and represent what they sense in the form of data. There are many applications for WSNs, including object tracking and monitoring, where in most cases these objects need protection. In these applications, data privacy itself might not be as important as the privacy of the source location. In addition to source location privacy, sink location privacy should also be provided. Providing an efficient end-to-end privacy solution is a challenging task due to the open nature of the WSN. The key schemes needed for end-to-end location privacy are anonymity, observability, capture likelihood, and safety period. We extend this work to allow for countermeasures against multi-local and global adversaries. We present a network model protected against a sophisticated threat model: passive/active and local/multi-local/global attacks. This work provides a solution for end-to-end anonymity and location privacy as well. We introduce a framework called the fortified anonymous communication (FAC) protocol for WSN. PMID:25763649
Culturing primary mouse pancreatic ductal cells.
Reichert, Maximilian; Rhim, Andrew D; Rustgi, Anil K
2015-06-01
The most common subtype of pancreatic cancer is pancreatic ductal adenocarcinoma (PDAC). PDAC resembles ductal cells morphologically. To study pancreatic ductal cell (PDC) and pancreatic intraepithelial neoplasia (PanIN)/PDAC biology, it is essential to have reliable in vitro culture conditions. Here we describe a methodology to isolate, culture, and passage PDCs and duct-like cells from the mouse pancreas. It can be used to isolate cells from genetically engineered mouse models (GEMMs), providing a valuable tool to study disease models in vitro to complement in vivo findings. The culture conditions allow epithelial cells to outgrow fibroblast and other "contaminating" cell types within a few passages. However, the resulting cultures, although mostly epithelial, are not completely devoid of fibroblasts. Regardless, this protocol provides guidelines for a robust in vitro culture system to isolate, maintain, and expand primary pancreatic ductal epithelial cells. It can be applied to virtually all GEMMs of pancreatic disease and other diseases and cancers that arise from ductal structures. Because most carcinomas resemble ductal structures, this protocol has utility in the study of other cancers in addition to PDAC, such as breast and prostate cancers. © 2015 Cold Spring Harbor Laboratory Press.
Blind quantum computation with identity authentication
NASA Astrophysics Data System (ADS)
Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong
2018-04-01
Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate his computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of the participants' identity, which can lead to man-in-the-middle attacks or denial-of-service attacks. In this work, we use quantum identification to overcome these two kinds of attacks on BQC; the resulting schemes will be called QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a trusted third party (TTP). In addition, an unjammable public channel between a client and a server, which is indispensable in previous BQC protocols, is unnecessary, although one is required between the TTP and each participant at some instant. Furthermore, the method to achieve identity verification in the presented protocols is general and can be applied to other similar BQC protocols.
Choi, Younsung; Lee, Donghoon; Kim, Jiye; Jung, Jaewook; Nam, Junghyun; Won, Dongho
2014-01-01
Wireless sensor networks (WSNs) consist of sensors, gateways and users. Sensors are widely distributed to monitor various conditions, such as temperature, sound, speed and pressure but they have limited computational ability and energy. To reduce the resource use of sensors and enhance the security of WSNs, various user authentication protocols have been proposed. In 2011, Yeh et al. first proposed a user authentication protocol based on elliptic curve cryptography (ECC) for WSNs. However, it turned out that Yeh et al.'s protocol does not provide mutual authentication, perfect forward secrecy, and key agreement between the user and sensor. Later in 2013, Shi et al. proposed a new user authentication protocol that improves both security and efficiency of Yeh et al.'s protocol. However, Shi et al.'s improvement introduces other security weaknesses. In this paper, we show that Shi et al.'s improved protocol is vulnerable to session key attack, stolen smart card attack, and sensor energy exhausting attack. In addition, we propose a new, security-enhanced user authentication protocol using ECC for WSNs. PMID:24919012
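The key-agreement property that this line of ECC-based protocols aims to provide can be illustrated with a bare-bones elliptic-curve Diffie-Hellman exchange over a textbook curve (a teaching-sized sketch only; this is not any of the cited protocols, and real deployments use standardized curves such as NIST P-256 together with authenticated messages):

```python
# Textbook curve y^2 = x^3 + 2x + 2 (mod 17), generator G = (5, 1) of order 19.
P, A = 17, 2  # field prime and curve coefficient a

def add(p, q):
    """Point addition on the curve; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # p + (-p) = infinity
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add scalar multiplication k*p."""
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

G = (5, 1)
a, b = 3, 7                       # toy private keys of user and sensor node
A_pub, B_pub = mul(a, G), mul(b, G)  # exchanged public keys
assert mul(a, B_pub) == mul(b, A_pub)  # both sides derive the same shared point
print("shared point:", mul(a, B_pub))
```

Both parties compute a*b*G and arrive at the same point, which is the key-agreement property the reviewed papers show Yeh et al.'s scheme failed to guarantee; the mutual authentication and forward-secrecy mechanisms discussed in the abstract require additional message flows not shown here.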
Berkson, Burton M; Rubin, Daniel M; Berkson, Arthur J
2009-12-01
The authors, in a previous article, described the long-term survival of a man with pancreatic cancer and metastases to the liver, treated with intravenous alpha-lipoic acid and oral low-dose naltrexone (ALA/N) without any adverse effects. He is alive and well 78 months after initial presentation. Three additional pancreatic cancer case studies are presented in this article. At the time of this writing, the first patient, GB, is alive and well 39 months after presenting with adenocarcinoma of the pancreas with metastases to the liver. The second patient, JK, who presented to the clinic with the same diagnosis, was treated with the ALA/N protocol and after 5 months of therapy, PET scan demonstrated no evidence of disease. The third patient, RC, in addition to his pancreatic cancer with liver and retroperitoneal metastases, has a history of B-cell lymphoma and prostate adenocarcinoma. After 4 months of the ALA/N protocol his PET scan demonstrated no signs of cancer. In this article, the authors discuss the multiple activities of ALA: as an agent that reduces oxidative stress, its ability to stabilize NF-κB, its ability to stimulate pro-oxidant apoptotic activity, and its discriminative ability to discourage the proliferation of malignant cells. In addition, the ability of low-dose naltrexone to modulate an endogenous immune response is discussed. This is the second article published on the ALA/N protocol and the authors believe the protocol warrants clinical trial.
Gilmore, Brynne; Adams, Ben Jack; Bartoloni, Alex; Alhaydar, Bana; McAuliffe, Eilish; Raven, Joanna; Taegtmeyer, Miriam; Vallières, Frédérique
2016-01-01
Introduction Understanding what enhances the motivation and performance of community health workers (CHWs) in humanitarian emergencies represents a key research gap within the field of human resources for health. This paper presents the research protocol for the Performance ImprovEment of CHWs in Emergency Settings (PIECES) research programme. Enhancing Learning and Research in Humanitarian Action (ELRHA) funded the development of this protocol as part of their Health in Humanitarian Crises (R2HC) call (No.19839). PIECES aims to understand what factors improve the performance of CHWs in level III humanitarian emergencies. Methods and analysis The suggested protocol uses a realist evaluation with multiple cases across the 3 country sites: Turkey, Iraq and Lebanon. Working with International Medical Corps (IMC), an initial programme theory was elicited through literature and document reviews, semistructured interviews and focus groups with IMC programme managers and CHWs. Based on this initial theory, this protocol proposes a combination of semistructured interviews, life histories and critical incident narratives, surveys and latent variable modelling of key constructs to explain how contextual factors work to trigger mechanisms for specific outcomes relating to IMC's 300+ CHWs' performance. Participants will also include programme staff, CHWs and programme beneficiaries. Realist approaches will be used to better understand ‘what works, for whom and under what conditions’ for improving CHW performance within humanitarian contexts. Ethics and dissemination Trinity College Dublin's Health Policy and Management/Centre for Global Health Research Ethics Committee gave ethical approval for the protocol development phase. For the full research project, additional ethical approval will be sought from: Université St. Joseph (Lebanon), the Ethics Committee of the Ministry of Health in Baghdad (Iraq) and the Middle East Technical University (Turkey). 
Dissemination activities will involve a mixture of research feedback, policy briefs, guidelines and recommendations, as well as open source academic articles. PMID:27531730
Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation
Budde, Matthew D.; Skinner, Nathan P.; Muftuler, L. Tugan; Schmit, Brian D.; Kurpad, Shekar N.
2017-01-01
Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI and subsequently implement the protocol in the healthy human spinal cord. First, two complementary DDE approaches were evaluated using an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that in the spinal cord, SDE provides similar contrast with improved signal-to-noise ratio. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using the commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time with a straightforward analysis and variability comparable to axial diffusivity derived from DTI.
Overall, the results and optimizations describe a protocol that mitigates several difficulties with DTI of the spinal cord. Detection of acute axonal damage in the injured or diseased spinal cord will benefit from the optimized filter-probe diffusion MRI protocol outlined here. PMID:29311786
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
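The central objects in this formalism can be sketched schematically (the notation below is illustrative, not necessarily the paper's own): for a time-integrated current $J_t$ with increments $d_{xy}$ per jump $x \to y$ and rates $w_{xy}(t)$ periodic with period $\tau$, the scaled cumulant generating function is

```latex
\lambda(z) \;=\; \lim_{t\to\infty} \frac{1}{t}\,
    \ln \big\langle e^{z J_t} \big\rangle .
% Tilted generator L_z(t): off-diagonal entries w_{xy}(t)\, e^{z d_{xy}},
% diagonal entries minus the (untilted) escape rates. Since L_z(t) inherits
% the period tau, Floquet theory applies: with the time-ordered propagator
% over one period,
P_z(\tau) \;=\; \mathcal{T}\exp\!\Big(\int_0^{\tau} L_z(t)\,\mathrm{d}t\Big),
\qquad
\lambda(z) \;=\; \frac{1}{\tau}\,\ln \rho\big(P_z(\tau)\big),
% where \rho denotes the spectral radius of the monodromy matrix P_z(\tau),
% so that \lambda(z) is the maximal Floquet exponent of the tilted dynamics.
```

Derivatives of $\lambda(z)$ at $z=0$ then yield the mean current and its fluctuations, as in time-homogeneous large deviation theory.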
Gimeno, Isabel M; Witter, Richard L; Cortes, Aneg L; Reddy, Sanjay M; Pandiri, Arun R
2012-01-01
Revaccination, the practice of administering Marek's disease (MD) vaccine a second time, has been used in commercial poultry flocks for many years. The rationale is largely anecdotal as the few published reports have failed to provide support for the value of the practice. In the present work, we have standardized a model to study MD revaccination under laboratory conditions. Nine bird experiments were conducted to evaluate homologous revaccination (same vaccine administered twice) and heterologous revaccination (administration of two different vaccines) with various challenge models. Our results demonstrated that heterologous revaccination (with a second vaccine more protective than the first vaccine) but not homologous revaccination provided a beneficial increase in protection. Administration of the first vaccine at 18 days of embryonation followed by a more protective second vaccine at hatch systematically reproduced the benefits of revaccination. In addition, our results show that revaccination protocols might aid in solving major drawbacks associated with various highly protective experimental MD vaccines; that is, lymphoid organ atrophy and residual virulence. Strain RM1 is one of the most protective vaccines against early challenge with highly virulent MD virus but it induces severe lymphoid atrophy in chickens lacking maternal antibodies against MD virus. In this study, strain RM1 did not induce lymphoid organ atrophy when administered as second vaccine in a revaccination protocol. Similarly, strain 648A100/BP5 maintains residual virulence in chickens lacking maternal antibodies against MD virus but did not induce any lesions when used as a second vaccine. Until now, arbitrary revaccination protocols have occasionally proven useful to the poultry industry. The model developed in this study will allow for a better understanding of this phenomenon and its optimization.
A more rational use of this practice will be of great help to control MD outbreaks until better vaccines are available.
Pessina, A; Albella, B; Bueren, J; Brantom, P; Casati, S; Gribaldo, L; Croera, C; Gagliardi, G; Foti, P; Parchment, R; Parent-Massin, D; Sibiril, Y; Van Den Heuvel, R
2001-12-01
This report describes an international prevalidation study conducted to optimise the Standard Operating Procedure (SOP) for detecting myelosuppressive agents by CFU-GM assay and to study a model for predicting (by means of this in vitro hematopoietic assay) the acute xenobiotic exposure levels that cause maximum tolerated decreases in absolute neutrophil counts (ANC). In the first phase of the study (Protocol Refinement), two SOPs were assessed, by using two cell culture media (Test A, containing GM-CSF; and Test B, containing G-CSF, GM-CSF, IL-3, IL-6 and SCF), and the two tests were applied to cells from both human (bone marrow and umbilical cord blood) and mouse (bone marrow) CFU-GM. In the second phase (Protocol Transfer), the SOPs were transferred to four laboratories to verify the linearity of the assay response and its interlaboratory reproducibility. After a further phase (Protocol Performance), dedicated to a training set of six anticancer drugs (adriamycin, flavopiridol, morpholino-doxorubicin, pyrazoloacridine, taxol and topotecan), a model for predicting neutropenia was verified. Results showed that the assay is linear under SOP conditions, and that the in vitro endpoints used by the clinical prediction model of neutropenia are highly reproducible within and between laboratories. Valid tests represented 95% of all tests attempted. The 90% inhibitory concentration values (IC(90)) from Test A and Test B accurately predicted the human maximum tolerated dose (MTD) for five of six and for four of six myelosuppressive anticancer drugs, respectively, that were selected as prototype xenobiotics. As expected, both tests failed to accurately predict the human MTD of a drug that is a likely protoxicant. It is concluded that Test A offers significant cost advantages compared to Test B, without any loss of performance or predictive accuracy.
On the basis of these results, we proposed a formal Phase II validation study using the Test A SOP for 16-18 additional xenobiotics that represent the spectrum of haematotoxic potential.
Fischer, Carlos N; Campos, Victor De A; Barella, Victor H
2018-05-01
Profile hidden Markov models (pHMMs) have been used to search for transposable elements (TEs) in genomes. To train pHMMs aimed at searching for TEs of the retrotransposon class, the conventional protocol is to use the whole internal nucleotide portions of these elements as representative sequences. To further explore the potential of pHMMs in such a search, we propose five alternative ways, other than the conventional protocol, to obtain the sets of representative TE sequences. In this study, we are interested in the Bel-PAO, Copia, Gypsy, and DIRS superfamilies from the retrotransposon class. We compared the pHMMs of all six protocols. The test results show that, for each TE superfamily, the pHMMs of at least two of the proposed protocols performed better than the conventional one and that the number of correct predictions provided by the latter can be improved by considering together the results of one or more of the alternative protocols.
Using connectome-based predictive modeling to predict individual behavior from brain connectivity
Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd
2017-01-01
Neuroimaging is a fast-developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. However, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocol. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
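The four CPM steps above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data, assuming a simple positive-network variant with an |r| threshold for edge selection and leave-one-out cross-validation; thresholds, variable names, and the synthetic signal structure are illustrative, not from the paper.

```python
import numpy as np

def cpm_loocv(X, y, r_thresh=0.2):
    """Connectome-based predictive modeling sketch with leave-one-out CV.

    X : (n_subjects, n_edges) connectivity values (e.g. upper-triangle edges)
    y : (n_subjects,) behavioral scores
    Per fold: (1) select edges correlated with behavior in the training set,
    (2) summarize each subject as the sum of selected edge strengths,
    (3) fit a linear model, (4) predict the held-out subject.
    """
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        tr = np.r_[0:i, i + 1:n]           # training indices (leave one out)
        Xt, yt = X[tr], y[tr]
        # (1) edge-wise Pearson correlation with behavior
        Xc = Xt - Xt.mean(axis=0)
        yc = yt - yt.mean()
        r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
        pos = r > r_thresh                  # "positive network" edges
        # (2) one summary feature per subject
        s_tr = Xt[:, pos].sum(axis=1)
        # (3) one-parameter linear model
        b1, b0 = np.polyfit(s_tr, yt, 1)
        # (4) predict the left-out subject
        preds[i] = b1 * X[i, pos].sum() + b0
    return preds

# synthetic demo: 60 subjects, 200 edges, the first 10 of which carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = X[:, :10].sum(axis=1) + 0.5 * rng.normal(size=60)
preds = cpm_loocv(X, y)
r_obs = np.corrcoef(preds, y)[0, 1]
print(round(r_obs, 2))
```

Step (4) in the full protocol would then assess significance by permuting `y` and rebuilding the null distribution of `r_obs`, which is where most of the quoted 1–48 hours of permutation testing goes.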
Wei, Liang-Liang; Wang, Kun; Zhao, Qing-Liang; Jiang, Jun-Qiu; Kong, Xiang-Juan; Lee, Duu-Jong
2012-09-15
Correlation between fractional, biodegradable and spectral characteristics of sludge extracellular polymeric substances (EPS) obtained by different protocols has not been well established. This work extracted sludge EPS using alkaline extractants (NH₄OH and formaldehyde + NaOH) and physical protocols (ultrasonication, heating at 80 °C or cation exchange resin (CER)) and then fractionated the extracts using XAD-8/XAD-4 resins. The alkaline extractants yielded more sludge EPS than the physical protocols. However, the physical protocols extracted principally the hydrophilic components which were readily biodegradable by microorganisms. The alkaline extractants dissolved additional humic-like substances from sludge solids which were refractory in nature. The different extraction protocols preferentially extracted EPS with distinct fractional, biodegradable and spectral characteristics which could be applied for specific uses. Copyright © 2012 Elsevier Ltd. All rights reserved.
Paes, Thaís; Belo, Letícia Fernandes; da Silva, Diego Rodrigues; Morita, Andrea Akemi; Donária, Leila; Furlanetto, Karina Couto; Sant'Anna, Thaís; Pitta, Fabio; Hernandes, Nidia Aparecida
2017-03-01
It is important to assess activities of daily living (ADL) in older adults due to impairment of independence and quality of life. However, there is no objective and standardized protocol available to assess this outcome. Thus, the aim of this study was to verify the reproducibility and validity of a new protocol for ADL assessment applied in physically independent adults age ≥50 y, the Londrina ADL protocol, and to establish an equation to predict reference values of the Londrina ADL protocol. Ninety-three physically independent adults age ≥50 y had their performance in ADL evaluated by registering the time spent to conclude the protocol. The protocol was performed twice. The 6-min walk test, which assesses functional exercise capacity, was used as a validation criterion. A multiple linear regression model was applied, including anthropometric and demographic variables that correlated with the protocol, to establish an equation to predict the protocol's reference values. In general, the protocol was reproducible (intraclass correlation coefficient 0.91). The average difference between the first and second protocol was 5.3%. The new protocol was valid to assess ADL performance in the studied subjects, presenting a moderate correlation with the 6-min walk test (r = -0.53). The time spent to perform the protocol correlated significantly with age (r = 0.45) but neither with weight (r = -0.17) nor with height (r = -0.17). A model of stepwise multiple regression including sex and age showed that age was the only determinant factor to the Londrina ADL protocol, explaining 21% ( P < .001) of its variability. The derived reference equation was: Londrina ADL protocol pred (s) = 135.618 + (3.102 × age [y]). The Londrina ADL protocol was reproducible and valid in physically independent adults age ≥50 y.
A reference equation for the protocol was established including only age as an independent variable (r 2 = 0.21), allowing a better interpretation of the protocol's results in clinical practice. Copyright © 2017 by Daedalus Enterprises.
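The published reference equation is a one-variable linear model and can be applied directly. A minimal sketch follows; the percent-of-predicted convention in the second function is a common way of interpreting such reference values and is an illustrative assumption, not taken from the abstract.

```python
def londrina_adl_predicted(age_years: float) -> float:
    """Reference (predicted) time in seconds to complete the Londrina ADL
    protocol, from the published regression: 135.618 + 3.102 * age."""
    return 135.618 + 3.102 * age_years

def percent_of_predicted(measured_s: float, age_years: float) -> float:
    """Express a measured protocol time as a percentage of the reference
    value; values well above 100% suggest slower-than-expected ADL
    performance (illustrative interpretation)."""
    return 100.0 * measured_s / londrina_adl_predicted(age_years)

# e.g. for a 65-year-old: 135.618 + 3.102 * 65 = 337.248 s
print(round(londrina_adl_predicted(65), 3))  # 337.248
```

Note the modest explanatory power (r² = 0.21): most of the between-subject variability in protocol time is not captured by age, so the reference value is a population anchor rather than an individual prediction.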
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
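The core pipeline the abstract describes is text classification: unstructured order text in, protocol label out. The sketch below uses a tiny from-scratch naive Bayes bag-of-words model as a stand-in for the paper's gradient boosting machine, purely to make the input/output shape concrete; the order phrases and protocol labels are invented.

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesProtocoler:
    """Toy bag-of-words classifier mapping free-text clinical indications to
    an MRI protocol label (add-one smoothing). Illustrative stand-in for the
    gradient boosting model used in the paper."""

    def fit(self, texts, labels):
        self.counts = defaultdict(Counter)   # word counts per protocol label
        self.prior = Counter(labels)         # label frequencies
        for t, l in zip(texts, labels):
            self.counts[l].update(tokenize(t))
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, text):
        def score(label):
            c = self.counts[label]
            total = sum(c.values())
            s = math.log(self.prior[label])
            for w in tokenize(text):
                s += math.log((c[w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.prior, key=score)

# invented example orders and protocol labels
orders = ["new onset seizure rule out lesion",
          "pituitary adenoma follow up",
          "acute stroke symptoms left weakness",
          "seizure disorder follow up"]
protocols = ["brain+contrast", "pituitary", "stroke", "brain+contrast"]
clf = NaiveBayesProtocoler().fit(orders, protocols)
print(clf.predict("first seizure"))  # prints "brain+contrast"
```

A real sequence-level system, as in the paper, would instead predict a *set* of sequences per order (hence the multi-label Hamming loss metric quoted above), typically by training one classifier per sequence.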
A psychoengineering paradigm for the neurocognitive mechanisms of biofeedback and neurofeedback.
Gaume, A; Vialatte, A; Mora-Sánchez, A; Ramdani, C; Vialatte, F B
2016-09-01
We believe that the missing keystone for designing effective and efficient biofeedback and neurofeedback protocols is a comprehensive model of the mechanisms of feedback learning. In this manuscript we review the learning models in behavioral, developmental and cognitive psychology, and derive a synthetic model of the psychological perspective on biofeedback. We afterwards review the neural correlates of feedback learning mechanisms, and present a general neuroscience model of biofeedback. We subsequently show how biomedical engineering principles can be applied to design efficient feedback protocols. We finally present an integrative psychoengineering model of the feedback learning processes, and provide new guidelines for the efficient design of biofeedback and neurofeedback protocols. We identify five key properties: (1) perceptibility (can the subject perceive the biosignal?), (2) autonomy (can the subject self-regulate?), (3) mastery (the degree of control over the biosignal), (4) motivation (the reward structure of the biofeedback), and (5) learnability (the possibility of learning). We conclude with guidelines for the investigation and promotion of these properties in biofeedback protocols. Copyright © 2016 Elsevier Ltd. All rights reserved.
Marvin, Michael R; Inzucchi, Silvio E; Besterman, Brian J
2016-08-01
The management of hyperglycemia in the intensive care unit has been a controversial topic for more than a decade, with target ranges varying from 80-110 mg/dL to <200 mg/dL. Multiple insulin infusion protocols exist, including several computerized protocols, which have attempted to achieve these targets. Importantly, compliance with these protocols has not been a focus of clinical studies. GlucoCare™, a Food and Drug Administration (FDA)-cleared insulin-dosing calculator, was originally designed based on the Yale Insulin Infusion Protocol to target 100-140 mg/dL and has undergone several modifications to reduce hypoglycemia. The original Yale protocol was modified from 100-140 mg/dL to a range of 120-140 mg/dL (GlucoCare 120-140) and then to 140 mg/dL (GlucoCare 140, not a range but a single blood glucose [BG] level target) in an iterative and evidence-based manner to eliminate hypoglycemia <70 mg/dL. The final modification [GlucoCare 140(B)] includes the addition of bolus insulin "midprotocol" during an insulin infusion to reduce peak insulin rates for insulin-resistant patients. This study examined the results of these protocol modifications and evaluated the role of compliance with the protocol in the incidence of hypoglycemia <70 mg/dL. Protocol modifications resulted in mean BG levels of 133.4, 136.4, 143.8, and 146.4 mg/dL and hypoglycemic BG readings <70 mg/dL of 0.998%, 0.367%, 0.256%, and 0.04% for the 100-140, 120-140, 140, and 140(B) protocols, respectively (P < 0.001). Adherence to the glucose check interval significantly reduced the incidence of hypoglycemia (P < 0.001). Protocol modifications led to a reduction in peak insulin infusion rates (P < 0.001) and the need for dextrose-containing boluses (P < 0.001). 
This study demonstrates that refinements in protocol design can improve glucose control in critically ill patients and that the use of GlucoCare 140(B) can eliminate all significant hypoglycemia while achieving mean glucose levels between 140 and 150 mg/dL. In addition, attention to the timely performance of glucose levels can also reduce hypoglycemic events.
NASA Astrophysics Data System (ADS)
Wehner, Michael; Stone, Dáithí; Mitchell, Dann; Shiogama, Hideo; Fischer, Erich; Graff, Lise S.; Kharin, Viatcheslav V.; Lierhammer, Ludwig; Sanderson, Benjamin; Krishnan, Harinarayan
2018-03-01
The half a degree additional warming, prognosis and projected impacts (HAPPI) experimental protocol provides a multi-model database to compare the effects of stabilizing anthropogenic global warming of 1.5 °C over preindustrial levels to 2.0 °C over these levels. The HAPPI experiment is based upon large ensembles of global atmospheric models forced by sea surface temperature and sea ice concentrations plausible for these stabilization levels. This paper examines changes in extremes of high temperatures averaged over three consecutive days. Changes in this measure of extreme temperature are also compared to changes in hot season temperatures. We find that over land this measure of extreme high temperature increases from about 0.5 to 1.5 °C over present-day values in the 1.5 °C stabilization scenario, depending on location and model. We further find an additional 0.25 to 1.0 °C increase in extreme high temperatures over land in the 2.0 °C stabilization scenario. Results from the HAPPI models are consistent with similar results from the one available fully coupled climate model. However, a complicating factor in interpreting extreme temperature changes across the HAPPI models is their diversity of aerosol forcing changes.
Miniaturization of the Clonogenic Assay Using Confluence Measurement
Mayr, Christian; Beyreis, Marlena; Dobias, Heidemarie; Gaisberger, Martin; Pichler, Martin; Ritter, Markus; Jakab, Martin; Neureiter, Daniel; Kiesslich, Tobias
2018-01-01
The clonogenic assay is a widely used method to study the ability of cells to ‘infinitely’ produce progeny and is, therefore, used as a tool in tumor biology to measure tumor-initiating capacity and stem cell status. However, the standard protocol of using 6-well plates has several disadvantages. By miniaturizing the assay to a 96-well microplate format, as well as by utilizing the confluence detection function of a multimode reader, we here describe a new and modified protocol that allows comprehensive experimental setups and a non-endpoint, label-free semi-automatic analysis. Comparison of bright field images with confluence images demonstrated robust and reproducible detection of clones by the confluence detection function. Moreover, time-resolved non-endpoint confluence measurement of the same well showed that semi-automatic analysis was suitable for determining the mean size and colony number. By treating cells with an inhibitor of clonogenic growth (PTC-209), we show that our modified protocol is suitable for comprehensive (broad concentration range, addition of technical replicates) concentration- and time-resolved analysis of the effect of substances or treatments on clonogenic growth. In summary, this protocol represents a time- and cost-effective alternative to the commonly used 6-well protocol (with endpoint staining) and also provides additional information about the kinetics of clonogenic growth. PMID:29510509
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. 
Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. 
A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
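The reference governor idea from Chapter 2 can be illustrated with a toy scalar example: scale a requested input by the largest admissible factor so that a predicted constrained state never violates its bound. The linear state below is an invented surrogate for an electrochemical constraint (e.g. a temperature or side-reaction potential proxy), not the PDE model the dissertation uses.

```python
def reference_governor(u_req, x0, a, b, x_max, horizon=50):
    """Scalar reference governor sketch: find the largest kappa in [0, 1]
    such that holding u = kappa * u_req keeps the constrained state
    x[k+1] = a*x[k] + b*u below x_max over the prediction horizon."""

    def feasible(kappa):
        x = x0
        for _ in range(horizon):
            x = a * x + b * kappa * u_req
            if x > x_max:
                return False
        return True

    if feasible(1.0):          # requested input is already safe
        return 1.0
    lo, hi = 0.0, 1.0          # bisection on the scaling factor kappa
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# an aggressive charge request gets scaled back to respect the constraint
kappa = reference_governor(u_req=10.0, x0=0.0, a=0.9, b=0.5, x_max=20.0)
print(round(kappa, 3))
```

In the battery setting, the same recursive-feasibility logic is applied to estimated electrochemical states rather than a measurable scalar, which is why the chapter pairs the governor with the state and parameter estimators of Chapter 3.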
A Self-Stabilizing Byzantine-Fault-Tolerant Clock Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2009-01-01
This report presents a rapid Byzantine-fault-tolerant self-stabilizing clock synchronization protocol that is independent of application-specific requirements. It is focused on clock synchronization of a system in the presence of Byzantine faults after the cause of any transient faults has dissipated. A model of this protocol is mechanically verified using the Symbolic Model Verifier (SMV) [SMV], where the entire state space is examined and proven to self-stabilize in the presence of one arbitrary faulty node. Instances of the protocol are proven to tolerate bursts of transient failures and deterministically converge with a linear convergence time with respect to the synchronization period. This protocol does not rely on assumptions about the initial state of the system other than the presence of a sufficient number of good nodes. All timing measures of variables are based on the node's local clock, and no central clock or externally generated pulse is used. The Byzantine faulty behavior modeled here is a node with arbitrarily malicious behavior that is allowed to influence other nodes at every clock tick. The only constraint is that the interactions are restricted to defined interfaces.
Gabr, Mahmoud M; Zakaria, Mahmoud M; Refaie, Ayman F; Khater, Sherry M; Ashamallah, Sylvia A; Ismail, Amani M; El-Badri, Nagwa; Ghoneim, Mohamed A
2014-01-01
Many protocols were utilized for directed differentiation of mesenchymal stem cells (MSCs) to form insulin-producing cells (IPCs). We compared the relative efficiency of three differentiation protocols. Human bone marrow-derived MSCs (HBM-MSCs) were obtained from three insulin-dependent type 2 diabetic patients. Differentiation into IPCs was carried out by three protocols: conophylline-based (one-step protocol), trichostatin-A-based (two-step protocol), and β-mercaptoethanol-based (three-step protocol). At the end of differentiation, cells were evaluated by immunolabeling for insulin production, expression of pancreatic endocrine genes, and release of insulin and c-peptide in response to increasing glucose concentrations. By immunolabeling, the proportion of generated IPCs was modest (≃3%) in all the three protocols. All relevant pancreatic endocrine genes, insulin, glucagon, and somatostatin, were expressed. There was a stepwise increase in insulin and c-peptide release in response to glucose challenge, but the released amounts were low when compared with those of pancreatic islets. The yield of functional IPCs following directed differentiation of HBM-MSCs was modest and was comparable among the three tested protocols. Protocols for directed differentiation of MSCs need further optimization in order to be clinically meaningful. To this end, addition of an extracellular matrix and/or a suitable template should be attempted.
Human Schedule Performance, Protocol Analysis, and the "Silent Dog" Methodology
ERIC Educational Resources Information Center
Cabello, Francisco; Luciano, Carmen; Gomez, Inmaculada; Barnes-Holmes, Dermot
2004-01-01
The purpose of the current experiment was to investigate the role of private verbal behavior on the operant performances of human adults, using a protocol analysis procedure with additional methodological controls (the "silent dog" method). Twelve subjects were exposed to fixed ratio 8 and differential reinforcement of low rate 3-s schedules. For…
Suraniti, Emmanuel; Studer, Vincent; Sojic, Neso; Mano, Nicolas
2011-04-01
Immobilization and electrical wiring of enzymes is of particular importance for the elaboration of efficient biosensors and can be cumbersome. Here, we report a fast and easy protocol for enzyme immobilization, and as a proof of concept, we applied it to the immobilization of bilirubin oxidase, a labile enzyme. In the first step, bilirubin oxidase is mixed with a redox hydrogel "wiring" the enzyme reaction centers to electrodes. Then, this adduct is covered by an outer layer made by photoinitiated polymerization of poly(ethylene glycol) diacrylate (PEGDA) and a photocleavable precursor, DAROCUR. This two-step protocol is 18 times faster than the current state-of-the-art protocol and leads to currents 25% higher. In addition, the outer layer of PEGDA acts as a protective layer, increasing the lifetime of the electrode by 100% when operating continuously for 2000 s and by 60% when kept in a dry state for 24 h. This new protocol is particularly appropriate for labile enzymes that quickly denature. In addition, by tuning the PEGDA/DAROCUR ratio, it is possible to make the enzyme electrodes even more active or more stable.
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
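The basic trade-off between the two approaches can be sketched with a toy message-count model. The assumptions below (serial unicast with confirmable requests, a single non-confirmable multicast request, every member responding) are ours, for illustration only, not figures from the article:

```python
# Toy message-count model contrasting unicast and multicast CoAP group
# communication. Assumptions (ours, not from the article): serial unicast
# with confirmable requests; one non-confirmable multicast request;
# every group member responds.

def unicast_messages(n_members: int, confirmable: bool = True) -> int:
    """One request per member, plus one response/ACK if confirmable."""
    per_member = 2 if confirmable else 1
    return n_members * per_member

def multicast_messages(n_members: int, respond: bool = True) -> int:
    """A single multicast request; members may each unicast a response."""
    return 1 + (n_members if respond else 0)

for n in (5, 20, 100):
    print(f"{n:>3} members: unicast={unicast_messages(n):>3}, "
          f"multicast={multicast_messages(n):>3}")
```

Even this crude count shows why multicast scales better on the request side, while the response implosion (many members answering at once) motivates the hybrid solution the authors propose.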
Recovery of gait after quadriceps muscle fatigue.
Barbieri, Fabio Augusto; Beretta, Stephannie Spiandor; Pereira, Vinicius A I; Simieli, Lucas; Orcioli-Silva, Diego; dos Santos, Paulo Cezar Rocha; van Dieën, Jaap H; Gobbi, Lilian Teresa Bucken
2016-01-01
The aim of this study was to investigate the effect of recovery time after quadriceps muscle fatigue on gait in young adults. Forty young adults (20-40 years old) performed three 8-m gait trials at preferred velocity before and after muscle fatigue, and after 5, 10 and 20 min of passive rest. In addition, at each time point, two maximal isometric voluntary contractions were performed. Muscle fatigue was induced by repeated sit-to-stand transfers until task failure. Spatio-temporal, kinetic and muscle activity parameters, measured in the central stride of each trial, were analyzed. Data from before and after the muscle fatigue protocol and after the recovery periods were compared by one-way repeated-measures ANOVA. Voluntary force was decreased after the fatigue protocol (p<0.001) and after 5, 10 and 20 min of recovery compared to before the fatigue protocol. Step width (p<0.001) and RMS of biceps femoris activity (p<0.05) were increased immediately after the fatigue protocol and remained increased after the recovery periods. In addition, stride duration was decreased immediately after the fatigue protocol compared to before and to after 10 and 20 min of rest (p<0.001). The anterior-posterior propulsive impulse was also decreased after the fatigue protocol (p<0.001) and remained low after 5, 10 and 20 min of rest. We conclude that 20 min is not enough for full recovery of gait after exhaustive quadriceps muscle fatigue. Copyright © 2015 Elsevier B.V. All rights reserved.
Manders, I G; Stoecklein, K; Lubach, C H C; Bijl-Oeldrich, J; Nanayakkara, P W B; Rauwerda, J A; Kramer, M H H; Eekhoff, E M W
2016-06-01
To investigate the feasibility, safety and efficacy of the Nurse-Driven Diabetes In-Hospital Treatment protocol (N-DIABIT), which consists of nurse-driven correctional therapy in addition to physician-guided basal therapy and is carried out by trained ward nurses. Data on 210 patients with diabetes consecutively admitted in the 5-month period after the introduction of N-DIABIT (intervention group) were compared with retrospectively collected data on 200 consecutive patients with diabetes admitted in the 5-month period before N-DIABIT was introduced (control group). Additional per-protocol analyses were performed in patients in whom mean patient-based protocol adherence was ≥ 70% (intervention subgroup, n = 173 vs. control subgroup, n = 196). There was no difference between the intervention and the control group in mean blood glucose levels (8.9 ± 0.1 and 9.1 ± 0.2 mmol/l, respectively; P = 0.38), number of consecutive hyperglycaemic (blood glucose ≥ 10.0 mmol/l) episodes (P = 0.15), admission duration (P = 0.79), mean number of blood glucose measurements (P = 0.21) and incidence of severe hypoglycaemia (P = 0.29). Per-protocol analyses showed significant reductions in mean blood glucose levels and in consecutive hypoglycaemia and hyperglycaemia in the intervention group compared with the control group. Implementation of N-DIABIT by trained ward nurses in non-intensive-care-unit diabetes care is feasible, safe and non-inferior to physician-driven care alone. High protocol adherence was associated with improved glycaemic control. © 2015 Diabetes UK.
ERIC Educational Resources Information Center
Peacock, Christopher
2012-01-01
The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…
Llor, Jesús; Malumbres, Manuel P
2012-01-01
Several Medium Access Control (MAC) and routing protocols have been developed in recent years for Underwater Wireless Sensor Networks (UWSNs). One of the main difficulties in comparing and validating the performance of different proposals is the lack of a common standard for modeling acoustic propagation in the underwater environment. In this paper we analyze the evolution of underwater acoustic prediction models from a simple approach to more detailed and accurate models. Then, different higher-layer network protocols are tested with different acoustic propagation models in order to determine the influence of environmental parameters on the obtained results. After several experiments, we conclude that higher-level protocols are sensitive both to (a) physical-layer parameters related to the network scenario and (b) the acoustic propagation model. Conditions such as ocean surface activity, scenario location, bathymetry or floor sediment composition may change the signal propagation behavior. So, when designing network architectures for UWSNs, the role of the physical layer should be seriously taken into account in order to ensure that the obtained simulation results will be close to the ones obtained in real network scenarios.
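To illustrate how physical-layer detail enters such simulations, here is a minimal sketch of a common underwater path-loss model: Thorp's empirical absorption formula combined with geometric spreading. The spreading factor k = 1.5 is the conventional "practical spreading" assumption, not a value taken from this paper:

```python
import math

def thorp_attenuation_db_per_km(f_khz: float) -> float:
    """Thorp's empirical absorption coefficient (dB/km), frequency in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

def path_loss_db(distance_m: float, f_khz: float, k: float = 1.5) -> float:
    """Spreading loss (k = spreading factor) plus frequency-dependent absorption."""
    d = max(distance_m, 1.0)
    return 10.0 * k * math.log10(d) + thorp_attenuation_db_per_km(f_khz) * d / 1000.0
```

The steep growth of absorption with frequency is one reason higher-level protocol results depend so strongly on the chosen propagation model and carrier frequency.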
EXACT2: the semantics of biomedical protocols
2014-01-01
Background The reliability and reproducibility of experimental procedures is a cornerstone of scientific practice. There is a pressing technological need for better representation of biomedical protocols to enable other agents (human or machine) to reproduce results. A framework that ensures that all information required for the replication of experimental protocols is captured is essential to achieve reproducibility. Methods We have developed the ontology EXACT2 (EXperimental ACTions), designed to capture the full semantics of biomedical protocols required for their reproducibility. To construct EXACT2 we manually inspected hundreds of published and commercial biomedical protocols from several areas of biomedicine. After establishing a clear pattern for extracting the required information, we utilized text-mining tools to translate the protocols into a machine-amenable format. We have verified the utility of EXACT2 through the successful processing of previously 'unseen' (not used for the construction of EXACT2) protocols. Results The paper reports on a fundamentally new version of EXACT2 that supports the semantically-defined representation of biomedical protocols. The ability of EXACT2 to capture the semantics of biomedical procedures was verified through a text-mining use case, in which EXACT2 is used as a reference model for text-mining tools to identify terms pertinent to experimental actions, and their properties, in biomedical protocols expressed in natural language. An EXACT2-based framework for the translation of biomedical protocols to a machine-amenable format is proposed. Conclusions The EXACT2 ontology is sufficient to record, in a machine-processable form, the essential information about biomedical protocols. EXACT2 defines explicit semantics of experimental actions and can be used by various computer applications. It can serve as a reference model for the translation of biomedical protocols in natural language into a semantically-defined format. PMID:25472549
Lee, Young Han
2018-04-04
The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural networks (CNN) classifier based on short-text classification and to evaluate the agreements by comparing protocols determined by CNN with those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analyses: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The subjects for pre-processing were routine or tumor protocols and the contents were word combinations of the referring department, region, contrast media (or not), gender, and age. The CNN Embedded vector classifier was used with Word2Vec Google news vectors. The test set was tested with each classification model and results were output as routine or tumor protocols. The CNN determinations were evaluated using the receiver operating characteristic (ROC) curves. The accuracies were evaluated by a radiologist-confirmed protocol as the reference protocols. The optimal cut-off values for protocol determination between routine protocols and tumor protocols was 0.5067 with a sensitivity of 92.10%, a specificity of 95.76%, and an area under curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correct in the pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and applications could be extended to other radiologic tasks besides image interpretations, improving the work performance of the radiologist.
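The classification step can be sketched as follows. This is a toy, numpy-only stand-in for the study's CNN text classifier: the vocabulary, embedding size, filter count, and all weights below are random placeholders, not the trained Word2Vec/CNN model from the paper; it only shows the embed → convolve → max-pool → classify mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the study's components: a tiny vocabulary, random
# 8-d token embeddings, 4 random convolution filters over 2-token
# windows, and a random logistic output layer.
VOCAB = {"knee": 0, "spine": 1, "tumor": 2, "contrast": 3, "male": 4, "routine": 5}
EMB = rng.normal(size=(len(VOCAB), 8))
FILTERS = rng.normal(size=(4, 2, 8))
W, B = rng.normal(size=4), 0.0

def embed(tokens):
    return np.stack([EMB[VOCAB[t]] for t in tokens])        # (seq_len, 8)

def conv_maxpool(x):
    # 1-D convolution over sliding token windows, ReLU, max-over-time pooling
    windows = np.stack([x[i:i + 2] for i in range(len(x) - 1)])
    feats = np.einsum("nwe,fwe->nf", windows, FILTERS)      # (n_windows, 4)
    return np.maximum(feats, 0.0).max(axis=0)               # (4,)

def tumor_protocol_prob(tokens):
    """Probability that the short text describes a tumor (vs. routine) protocol."""
    z = conv_maxpool(embed(tokens)) @ W + B
    return 1.0 / (1.0 + np.exp(-z))

p = tumor_protocol_prob(["knee", "tumor", "contrast", "male"])
```

In the actual study the input tokens are the referring department, body region, contrast use, gender, and age, and the weights are learned from the 5,258 training examinations.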
Modeling Collective Animal Behavior with a Cognitive Perspective: A Methodological Framework
Weitz, Sebastian; Blanco, Stéphane; Fournier, Richard; Gautrais, Jacques; Jost, Christian; Theraulaz, Guy
2012-01-01
The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on establishing or enriching links between collective behavior research and cognitive or physiological research, which requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: they are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data, whatever the accuracy level. A set of methodological steps is then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the benefits expected during the often long and difficult process of refining a behavioral model, designing adapted experimental protocols and inverting model parameters. PMID:22761685
Detecting Cyber Attacks On Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Campbell, Roy
This paper proposes an unconventional anomaly detection approach that provides digital instrumentation and control (I&C) systems in a nuclear power plant (NPP) with the capability to probabilistically discern between legitimate protocol frames and attack frames. The stochastic activity network (SAN) formalism is used to model the fusion of protocol activity in each digital I&C system and the operation of physical components of an NPP. SAN models are employed to analyze links between protocol frames as streams of bytes, their semantics in terms of NPP operations, control data as stored in the memory of I&C systems, the operations of I&C systems on NPP components, and NPP processes. Reward rates and impulse rewards are defined in the SAN models based on the activity-marking reward structure to estimate NPP operation profiles. These profiles are then used to probabilistically estimate the legitimacy of the semantics and payloads of protocol frames received by I&C systems.
Open solutions to distributed control in ground tracking stations
NASA Technical Reports Server (NTRS)
Heuser, William Randy
1994-01-01
The advent of high-speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large-scale computing systems. However, the communication provided by the local area network is only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network itself, and selecting services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards, and the question that must be answered is: 'which one is the best for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the models and services provided by five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements of the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.
Cirrus: Inducing Subject Models from Protocol Data
1988-08-16
behavior scientists and, more recently, by knowledge engineers who wish to embed the knowledge of human experts in an expert system. However, protocol analysis is notoriously difficult and time consuming. Several systems have been developed to aid in protocol analysis. Waterman and Newell (1971, 1973) developed a system that could read the natural language of the protocol and produce a formal trace of it (a problem behavior graph). The system, however…
Molinari, Ana J; Pozzi, Emiliano C C; Monti Hughes, Andrea; Heber, Elisa M; Garabalino, Marcela A; Thorp, Silvia I; Miller, Marcelo; Itoiz, Maria E; Aromando, Romina F; Nigg, David W; Quintana, Jorge; Santa Cruz, Gustavo A; Trivillin, Verónica A; Schwint, Amanda E
2011-04-01
In the present study the therapeutic effect and potential toxicity of the novel "Sequential" boron neutron capture therapy (Seq-BNCT) for the treatment of oral cancer were evaluated in the hamster cheek pouch model at the RA-3 Nuclear Reactor. Two groups of animals were treated with "Sequential" BNCT, i.e., BNCT mediated by boronophenylalanine (BPA) followed by BNCT mediated by sodium decahydrodecaborate (GB-10) either 24 h (Seq-24h-BNCT) or 48 h (Seq-48h-BNCT) later. In an additional group of animals, BPA and GB-10 were administered concomitantly [(BPA + GB-10)-BNCT]. The single-application BNCT delivered the same total physical tumor dose as the "Sequential" BNCT treatments. At 28 days post-treatment, Seq-24h-BNCT and Seq-48h-BNCT induced, respectively, overall tumor responses of 95 ± 2% and 91 ± 3%, with no statistically significant difference between protocols. Overall response for the single treatment with (BPA + GB-10)-BNCT was 75 ± 5%, significantly lower than for Seq-BNCT. Both Seq-BNCT protocols and (BPA + GB-10)-BNCT induced reversible mucositis in the dose-limiting precancerous tissue around treated tumors, reaching Grade 3/4 mucositis in 47 ± 12% and 60 ± 22% of the animals, respectively. No normal tissue toxicity was associated with tumor response for any of the protocols. "Sequential" BNCT enhanced tumor response without an increase in mucositis in dose-limiting precancerous tissue. © 2011 by Radiation Research Society
Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju
2013-01-01
The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507
Tersmette, Derek Gideon; Engberts, Dirk Peter
2017-01-01
The Committee for Medical Ethics (CME) of Leiden University Medical Center (LUMC) was established as the first medical ethics reviewing committee (MREC) in the Netherlands. In the period 2000-2010 the CME received 2,162 protocols for review. Some of these protocols were never approved. Until now, there has existed neither an overview of these failed protocols nor an overview of the reasons for their failure. This report draws on data from the digital database, the physical archives, and the minutes of the meetings of the CME. Additional information has been obtained from the Central Committee on Research involving Human Subjects (CCRH) and survey-based research. Protocols were itemized based on characteristic features and their reviewing procedures were analyzed. In total, 1,952 out of 2,162 research protocols submitted during 2000-2010 (90.3%) were approved by the CME; 210 of 2,162 protocols (9.7%) were not approved. Of these 210 protocols, 177 failed due to reasons not related to CME reviewing. In 15 cases CME reviewing led to protocol failure, while another 10 protocols were rejected outright. Eight of the 210 submitted protocols without approval had been conducted prior to submission. In the aforementioned period, little protocol failure occurred. For the most part, protocol failure was caused by problems that are not CME related. This type of failure has several identifiable factors, none of which have anything to do with the ethical reviewing procedure by the CME. A mere 1.2% of protocols failed due to ethical review. Unacceptable burden and risks to the subject and an inadequate methodology are the most common reasons for this CME-related protocol failure.
Oral intradialytic nutritional supplement use and mortality in hemodialysis patients.
Weiner, Daniel E; Tighiouart, Hocine; Ladik, Vladimir; Meyer, Klemens B; Zager, Philip G; Johnson, Douglas S
2014-02-01
Hemodialysis patients have high mortality rates, potentially reflecting underlying comorbid conditions and ongoing catabolism. Intradialytic oral nutritional supplements may reduce this risk. Retrospective propensity-matched cohort. Maintenance hemodialysis patients treated at Dialysis Clinic Inc facilities who were initiated on a nutritional supplement protocol in September to October 2010 were matched using a propensity score to patients at facilities at which the protocol was not used. Prescription of the protocol, whereby hemodialysis patients with serum albumin levels ≤3.5 g/dL would initiate oral protein supplementation during the dialysis procedure. Sensitivity analyses matched on actual supplement intake during the first 3 study months. Covariates included patient and facility characteristics, which were used to develop the propensity scores and adjust multivariable models. All-cause mortality, ascertained through March 2012. Of 6,453 eligible patients in 101 eligible hemodialysis facilities, the protocol was prescribed to 2,700, and 1,278 of these were propensity matched to controls. Mean age was 61 ± 15 (SD) years and median dialysis vintage was 34 months. There were 258 deaths among protocol assignees versus 310 among matched controls during a mean follow-up of 14 months. In matched analyses, protocol prescription was associated with a 29% reduction in the hazard of all-cause mortality (HR, 0.71; 95% CI, 0.58-0.86); adjustment had minimal impact on models. In time-dependent models incorporating change in albumin level, protocol status remained significant but was attenuated in models incorporating a 30-day lag. Similar results were seen in sensitivity analyses of 439 patients receiving supplements who were propensity-matched to controls, with 116 deaths among supplement users versus 140 among controls (HR, 0.79; 95% CI, 0.60-1.05), achieving statistical significance in adjusted models. Observational design, potential residual confounding. 
Prescription of an oral nutritional supplement protocol and use of oral protein nutritional supplements during hemodialysis are associated with reduced mortality among in-center maintenance hemodialysis patients, an effect likely not mediated by change in serum albumin levels. Copyright © 2014 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
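The propensity-matching design used above can be illustrated with a minimal sketch: fit a logistic model for treatment assignment, then greedily pair each treated patient with the nearest unmatched control on the estimated score. All covariates and numbers below are synthetic placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic covariates (placeholders for the study's patient/facility
# characteristics) and a synthetic, covariate-dependent treatment assignment.
n = 400
age = rng.normal(61, 15, n)
vintage = rng.exponential(34, n)                       # dialysis vintage, months
X = np.column_stack([np.ones(n), (age - 61) / 15, np.log1p(vintage)])
treated = rng.random(n) < 1 / (1 + np.exp(-(0.3 * X[:, 1] - 0.2 * X[:, 2])))

# Fit a logistic propensity model by Newton's method.
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (treated - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
score = 1 / (1 + np.exp(-X @ beta))

# Greedy 1:1 nearest-neighbour matching on the propensity score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
used, pairs = set(), []
for i in t_idx:
    j = min((c for c in c_idx if c not in used),
            key=lambda c: abs(score[i] - score[c]), default=None)
    if j is not None:
        used.add(j)
        pairs.append((int(i), int(j)))
```

Production analyses typically add a caliper (maximum allowed score distance) and match within facility strata; the study itself then compared mortality hazards between the matched groups.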
Unification of quantum information theory
NASA Astrophysics Data System (ADS)
Abeyesinghe, Anura
We present the unification of many previously disparate results in noisy quantum Shannon theory and the unification of all of noiseless quantum Shannon theory. More specifically we deal here with bipartite, unidirectional, and memoryless quantum Shannon theory. We find all the optimal protocols and quantify the relationship between the resources used, both for the one-shot and for the ensemble case, for what is arguably the most fundamental task in quantum information theory: sharing entangled states between a sender and a receiver. We find that all of these protocols are derived from our one-shot superdense coding protocol and relate nicely to each other. We then move on to noisy quantum information theory and give a simple, direct proof of the "mother" protocol, or rather her generalization to the Fully Quantum Slepian-Wolf protocol (FQSW). FQSW simultaneously accomplishes two goals: quantum communication-assisted entanglement distillation, and state transfer from the sender to the receiver. As a result, in addition to her other "children," the mother protocol generates the state merging primitive of Horodecki, Oppenheim, and Winter as well as a new class of distributed compression protocols for correlated quantum sources, which are optimal for sources described by separable density operators. Moreover, the mother protocol described here is easily transformed into the so-called "father" protocol, demonstrating that the division of single-sender/single-receiver protocols into two families was unnecessary: all protocols in the family are children of the mother.
Jang, Gun Hyuk; Park, Chang-Beom; Kang, Benedict J; Kim, Young Jun; Lee, Kwan Hyi
2016-09-01
The environment and organisms are persistently exposed to mixtures of various substances, yet current evaluation methods are mostly based on the toxicity of individual substances. A systematic toxicity evaluation of heterogeneous substances needs to be established. To demonstrate toxicity assessment of mixtures, we chose a group of three typical ingredients of cosmetic sunscreen products that frequently enter ecosystems: benzophenone-3 (BP-3), ethylhexyl methoxycinnamate (EHMC), and titanium dioxide nanoparticles (TiO2 NPs). We first determined a range of nominal toxic concentrations of each ingredient or substance using Daphnia magna, and then, for the subsequent organismal-level phenotypic assessment, chose wild-type zebrafish embryos. Any phenotype change, such as body deformation, led to further examination of specific organs in transgenic zebrafish embryos. Based on the systematic toxicity assessments of the heterogeneous substances, we offer a sequential environmental toxicity assessment protocol that starts by utilizing Daphnia magna to determine a nominal concentration range for each substance and finishes by utilizing zebrafish embryos to detect defects caused by the heterogeneous substances. The protocol showed additive toxic effects of the mixtures. We propose a sequential environmental toxicity assessment protocol for the systematic toxicity screening of heterogeneous substances, from Daphnia magna to zebrafish embryo in vivo models. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sarkar, Sampa; Sarkar, Dhiman
2012-08-01
The development of a macrophage-based, antitubercular high-throughput screening system could expedite discovery programs for identifying novel inhibitors. In this study, the kinetics of nitrate reduction (NR) by Mycobacterium tuberculosis during growth in THP-1 macrophages was found to be almost parallel to the viable bacilli count. NR in the culture medium containing 50 mM of nitrate was found to be optimal on the fifth day after infection with M. tuberculosis. The signal-to-noise (S/N) ratio and Z-factor obtained from this macrophage-based assay were 5.4 and 0.965, respectively, which confirms the robustness of the assay protocol. The protocol was further validated by using standard antitubercular inhibitors such as rifampicin, isoniazid, streptomycin, ethambutol, and pyrazinamide, added at their IC90 value on the day of infection. These inhibitors were not able to kill the bacilli when added to the culture on the fifth day after infection. Interestingly, pentachlorophenol and rifampicin killed the bacilli immediately after addition on the fifth day of infection. Altogether, this assay protocol using M. tuberculosis-infected THP-1 macrophages provides a novel, cost-efficient, robust, and easy-to-perform screening platform for the identification of both active and hypoxic stage-specific inhibitors against tuberculosis.
Yu, Yang; Rajagopal, Ram
2015-02-17
Two dispatch protocols have been adopted by electricity markets to deal with the uncertainty of wind power, but the effects of choosing between them have not been comprehensively analyzed. We establish a framework to compare the impacts of adopting different dispatch protocols on the efficacy of using wind power and of implementing a carbon tax to reduce emissions. We suggest that a market has high potential to achieve greater emission reductions by adopting the stochastic dispatch protocol instead of the static protocol when the wind energy in the market is highly uncertain or the market has enough adjustable generators, such as gas-fired combustion generators. Furthermore, a carbon-tax policy is more cost-efficient for reducing CO2 emissions when the market operates according to the stochastic protocol rather than the static protocol. An empirical study, calibrated to data from the Electric Reliability Council of Texas market, confirms that using wind energy in the Texas market results in a 12% CO2 emission reduction when the market uses the stochastic dispatch protocol, compared with the 8% reduction associated with the static protocol. In addition, if a $6/ton carbon tax is implemented in the Texas market operated according to the stochastic protocol, the CO2 emission is similar to the emission level of the same market with a $16/ton carbon tax operated according to the static protocol. Correspondingly, the $16/ton carbon tax associated with the static protocol costs 42.6% more than the $6/ton carbon tax associated with the stochastic protocol.
Peters, Johanna; Taute, Wolfgang; Bartscher, Kathrin; Döscher, Claas; Höft, Michael; Knöchel, Reinhard; Breitkreutz, Jörg
2017-04-08
Microwave sensor systems using resonance technology at a single resonance in the range of 2-3 GHz have been shown to be a rapid and reliable tool for moisture determination in solid materials, including pharmaceutical granules. So far, their application has been limited to lower moisture ranges, or limitations above certain moisture contents had to be accepted. The aim of the present study was to develop a novel multi-resonance sensor system in order to expand the measurement range. Therefore, a novel sensor using additional resonances over a wide frequency band was designed and used to investigate inherent limitations of first-generation sensor systems and material-related limits. Using granule samples with different moisture contents, an experimental protocol for calibration and validation of the method was established. Pursuant to this protocol, a multiple linear regression (MLR) prediction model, built by correlating microwave moisture values to the moisture determined by Karl Fischer titration, was chosen and rated using conventional criteria such as the coefficient of determination (R²) and the root mean square error of calibration (RMSEC). Using different operators, analysis dates and ambient conditions, the method was fully validated following the guidance of ICH Q2(R1). The study clearly identified explanations for the measurement uncertainties of first-generation sensor systems, which confirmed the approach of overcoming these by using additional resonances. The established prediction model could be validated in the range of 7.6-19.6% moisture, demonstrating its fitness for its future purpose, the determination of moisture content during wet granulation. Copyright © 2017 Elsevier B.V. All rights reserved.
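The calibration step, regressing the Karl Fischer reference moisture on the microwave readings and scoring the fit by R² and RMSEC, can be sketched as follows. The data and column layout are hypothetical (one column per resonance), and ordinary least squares via NumPy stands in for whatever chemometrics software the authors actually used:

```python
import numpy as np

def calibrate_mlr(X, y):
    """Fit reference moisture y (Karl Fischer) against sensor readings X
    (one column per microwave resonance) by ordinary least squares with
    an intercept; report R^2 and RMSEC, the criteria named in the study."""
    A = np.column_stack([np.ones(len(X)), X])       # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    resid = y - pred
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    rmsec = float(np.sqrt(ss_res / len(y)))
    return coef, r2, rmsec

# Hypothetical two-resonance readings and moisture values:
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 6.0]])
y = 0.5 + 2.0 * X[:, 0] + 3.0 * X[:, 1]
coef, r2, rmsec = calibrate_mlr(X, y)
```

Validation per ICH Q2(R1) would then repeat the prediction on independent samples measured by different operators on different days.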
Heo, Yun Seok; Lee, Ho-Joon; Hassell, Bryan A; Irimia, Daniel; Toth, Thomas L; Elmoazzen, Heidi; Toner, Mehmet
2011-10-21
Oocyte cryopreservation has become an essential tool in the treatment of infertility by preserving oocytes for women undergoing chemotherapy. However, despite recent advances, pregnancy rates from all cryopreserved oocytes remain low. The inevitable use of the cryoprotectants (CPAs) during preservation affects the viability of the preserved oocytes and pregnancy rates either through CPA toxicity or osmotic injury. Current protocols attempt to reduce CPA toxicity by minimizing CPA concentrations, or by minimizing the volume changes via the step-wise addition of CPAs to the cells. Although the step-wise addition decreases osmotic shock to oocytes, it unfortunately increases toxic injuries due to the long exposure times to CPAs. To address limitations of current protocols and to rationally design protocols that minimize the exposure to CPAs, we developed a microfluidic device for the quantitative measurements of oocyte volume during various CPA loading protocols. We spatially secured a single oocyte on the microfluidic device, created precisely controlled continuous CPA profiles (step-wise, linear and complex) for the addition of CPAs to the oocyte and measured the oocyte volumetric response to each profile. With both linear and complex profiles, we were able to load 1.5 M propanediol to oocytes in less than 15 min and with a volumetric change of less than 10%. Thus, we believe this single oocyte analysis technology will eventually help future advances in assisted reproductive technologies and fertility preservation.
Methods in Molecular Biology Mouse Genetics: Methods and Protocols | Center for Cancer Research
Mouse Genetics: Methods and Protocols provides selected mouse genetic techniques and their application in modeling varieties of human diseases. The chapters are mainly focused on the generation of different transgenic mice to accomplish the manipulation of genes of interest, tracing cell lineages, and modeling human diseases.
Two deterministic models (US EPA’s Office of Pesticide Programs Residential Standard Operating Procedures (OPP Residential SOPs) and Draft Protocol for Measuring Children’s Non-Occupational Exposure to Pesticides by all Relevant Pathways (Draft Protocol)) and four probabilistic mo...
Tenten-Diepenmaat, Marloes; Dekker, Joost; Steenbergen, Menno; Huybrechts, Elleke; Roorda, Leo D; van Schaardenburg, Dirkjan; Bus, Sicco A; van der Leeden, Marike
2016-03-01
Improving foot orthoses (FOs) in patients with rheumatoid arthritis (RA) by using in-shoe plantar pressure measurements seems promising. The objectives of this study were to evaluate (1) the outcome on plantar pressure distribution of FOs that were adapted using in-shoe plantar pressure measurements according to a protocol and (2) the protocol feasibility. Forty-five RA patients with foot problems were included in this observational proof-of-concept study. FOs were custom-made by a podiatrist according to usual care. Regions of Interest (ROIs) for plantar pressure reduction were selected. According to a protocol, usual care FOs were evaluated using in-shoe plantar pressure measurements and, if necessary, adapted. Plantar pressure-time integrals at the ROIs were compared between the following conditions: (1) no FO versus usual care FO and (2) usual care FO versus adapted FO. Semi-structured interviews were held with patients and podiatrists to evaluate the feasibility of the protocol. Adapted FOs were developed in 70% of the patients. In these patients, usual care FOs showed a mean 9% reduction in pressure-time integral at forefoot ROIs compared to no FOs (p=0.01). FO adaptation led to an additional mean 3% reduction in pressure-time integral (p=0.05). The protocol was considered feasible by patients. Podiatrists considered the protocol more useful for achieving individual rather than general treatment goals. A final protocol was proposed. Using in-shoe plantar pressure measurements to adapt foot orthoses for patients with RA leads to a small additional plantar pressure reduction in the forefoot. Further research on the clinical relevance of this outcome is required. Copyright © 2016. Published by Elsevier B.V.
Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation
NASA Astrophysics Data System (ADS)
Quiroz, Gregory
Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
The Dutch Linguistic Intraoperative Protocol: a valid linguistic approach to awake brain surgery.
De Witte, E; Satoer, D; Robert, E; Colle, H; Verheyen, S; Visch-Brink, E; Mariën, P
2015-01-01
Intraoperative direct electrical stimulation (DES) is increasingly used in patients operated on for tumours in eloquent areas. Although a positive impact of DES on postoperative linguistic outcome is generally advocated, information about the neurolinguistic methods applied in awake surgery is scarce. We developed for the first time a standardised Dutch linguistic test battery (measuring phonology, semantics, syntax) to reliably identify the critical language zones in detail. A normative study was carried out in a control group of 250 native Dutch-speaking healthy adults. In addition, the clinical application of the Dutch Linguistic Intraoperative Protocol (DuLIP) was demonstrated by means of anatomo-functional models and five case studies. A set of DuLIP tests was selected for each patient depending on the tumour location and degree of linguistic impairment. DuLIP is a valid test battery for pre-, intraoperative and postoperative language testing and facilitates intraoperative mapping of eloquent language regions that are variably located. Copyright © 2014 Elsevier Inc. All rights reserved.
Measuring single-cell gene expression dynamics in bacteria using fluorescence time-lapse microscopy
Young, Jonathan W; Locke, James C W; Altinok, Alphan; Rosenfeld, Nitzan; Bacarian, Tigran; Swain, Peter S; Mjolsness, Eric; Elowitz, Michael B
2014-01-01
Quantitative single-cell time-lapse microscopy is a powerful method for analyzing gene circuit dynamics and heterogeneous cell behavior. We describe the application of this method to imaging bacteria by using an automated microscopy system. This protocol has been used to analyze sporulation and competence differentiation in Bacillus subtilis, and to quantify gene regulation and its fluctuations in individual Escherichia coli cells. The protocol involves seeding and growing bacteria on small agarose pads and imaging the resulting microcolonies. Images are then reviewed and analyzed using our laboratory's custom MATLAB analysis code, which segments and tracks cells in a frame-to-frame method. This process yields quantitative expression data on cell lineages, which can illustrate dynamic expression profiles and facilitate mathematical models of gene circuits. With fast-growing bacteria, such as E. coli or B. subtilis, image acquisition can be completed in 1 d, with an additional 1–2 d for progressing through the analysis procedure. PMID:22179594
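The segment-and-track step described above is performed by the authors' custom MATLAB code, which is not reproduced here. As a hedged illustration of the same idea, the following Python sketch thresholds a fluorescence frame, labels connected regions as cells, and links cells between consecutive frames by nearest centroid; the threshold and matching distance are illustrative:

```python
import numpy as np
from scipy import ndimage

def segment(frame, thresh):
    """Label connected regions brighter than `thresh` as cells and return
    the label image plus intensity-weighted centroids, one per cell."""
    labels, n = ndimage.label(frame > thresh)
    centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
    return labels, centroids

def link(prev_centroids, curr_centroids, max_dist=5.0):
    """Greedy frame-to-frame tracking: match each current cell to the
    nearest previous centroid within `max_dist` pixels."""
    matches = {}
    for i, c in enumerate(curr_centroids):
        d = [np.hypot(c[0] - p[0], c[1] - p[1]) for p in prev_centroids]
        if d and min(d) <= max_dist:
            matches[i] = int(np.argmin(d))
    return matches
```

Chaining the per-frame matches yields the cell lineages from which expression trajectories are read out.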
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-01
Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the "five Rs" (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider "stem-like cancer cells" (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. In sample calculations with linear quadratic parameters α = 0.3 per Gy, α∕β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. The predicted efficacy of moderate boosts depends sensitively on α. 
Presumably, the larger values of α are the ones appropriate for individualized treatment protocols, with the smaller values relevant only to protocols for a heterogeneous patient population. On that assumption, boosting is predicted to be highly effective. Front boosting, apart from practical advantages and a possible advantage as regards iatrogenic second cancers, also probably gives a slightly higher TCP than back boosting. If the total number of SLCC at the start of treatment can be measured even roughly, it will provide a highly sensitive way of discriminating between various models and parameter choices. Updated mathematical methods for calculating repopulation allow credible generalizations of earlier results.
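For orientation, a bare-bones Poisson TCP under the linear-quadratic model, without the repopulation and intraniche SLCC-SLCC interaction terms that are the paper's actual contribution, can be sketched as follows; the initial cell number N0 is hypothetical:

```python
import math

def tcp_poisson(n_fractions, d, alpha, alpha_beta, n0):
    """Poisson tumor control probability under the linear-quadratic model,
    assuming complete sublethal damage repair between fractions and no
    repopulation: per-fraction survival SF = exp(-alpha*d - beta*d^2),
    TCP = exp(-N0 * SF**n)."""
    beta = alpha / alpha_beta
    sf = math.exp(-alpha * d - beta * d * d)
    return math.exp(-n0 * sf ** n_fractions)

# Sensitivity to alpha for a 30 x 2 Gy protocol with a hypothetical
# initial stem-like cell number N0 = 1e7, alpha/beta = 10 Gy:
for alpha in (0.3, 0.03):
    print(alpha, tcp_poisson(30, 2.0, alpha, 10.0, 1e7))
```

Even this stripped-down model reproduces the qualitative point above: with alpha = 0.3 per Gy the protocol is curative, while alpha = 0.03 per Gy leaves TCP near zero, so the predicted benefit of boosting hinges on alpha.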
RosettaScripts: a scripting language interface to the Rosetta macromolecular modeling suite.
Fleishman, Sarel J; Leaver-Fay, Andrew; Corn, Jacob E; Strauch, Eva-Maria; Khare, Sagar D; Koga, Nobuyasu; Ashworth, Justin; Murphy, Paul; Richter, Florian; Lemmon, Gordon; Meiler, Jens; Baker, David
2011-01-01
Macromolecular modeling and design are increasingly useful in basic research, biotechnology, and teaching. However, the absence of a user-friendly modeling framework that provides access to a wide range of modeling capabilities is hampering the wider adoption of computational methods by non-experts. RosettaScripts is an XML-like language for specifying modeling tasks in the Rosetta framework. RosettaScripts provides access to protocol-level functionalities, such as rigid-body docking and sequence redesign, and allows fast testing and deployment of complex protocols without need for modifying or recompiling the underlying C++ code. We illustrate these capabilities with RosettaScripts protocols for the stabilization of proteins, the generation of computationally constrained libraries for experimental selection of higher-affinity binding proteins, loop remodeling, small-molecule ligand docking, design of ligand-binding proteins, and specificity redesign in DNA-binding proteins.
Development of an HPV Educational Protocol for Adolescents
Wetzel, Caitlin; Tissot, Abbigail; Kollar, Linda M.; Hillard, Paula A.; Stone, Rachel; Kahn, Jessica A.
2007-01-01
Study Objectives To develop an educational protocol about HPV and Pap tests for adolescents, to evaluate the protocol for understandability and clarity, and to evaluate the protocol for its effectiveness in increasing knowledge about HPV. Design In phase 1, investigators and adolescents developed the protocol. In phase 2, adolescents evaluated the protocol qualitatively, investigators evaluated its effectiveness in increasing HPV knowledge in a sample of adolescents, and the protocol was revised. In phase 3, investigators evaluated the effectiveness of the revised protocol in an additional adolescent sample. Setting Urban, hospital-based teen health center. Participants A total of 252 adolescent girls and boys in the three study phases. Main Outcome Measures Pre- and post-protocol knowledge about HPV, measured using a 10- or 11-item scale. Results Scores on the HPV knowledge scale increased significantly (p<.0001) among adolescents who participated in phases 2 and 3 after they received the protocol. Initial differences in scores based on race, insurance type and condom use were not noted post-protocol. Conclusion The protocol significantly increased knowledge scores about HPV in this population, regardless of sociodemographic characteristics and risk behaviors. Effective, developmentally appropriate educational protocols about HPV and Pap tests are particularly important in clinical settings as cervical cancer screening guidelines evolve, HPV DNA testing is integrated into screening protocols, and HPV vaccines become available. In-depth, one-on-one education about HPV may also prevent adverse psychosocial responses and promote healthy sexual and Pap screening behaviors in adolescents with abnormal HPV or Pap test results. Synopsis The investigators developed an educational protocol about HPV and Pap tests and evaluated its effectiveness in increasing knowledge about HPV among adolescents. PMID:17868894
A Unified Fault-Tolerance Protocol
NASA Technical Reports Server (NTRS)
Miner, Paul; Geser, Alfons; Pike, Lee; Maddalon, Jeffrey
2004-01-01
Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
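The middle value select function at the heart of the Davies-Wakerly construction can be sketched in a few lines. This is a single-stage simplification (the full protocol cascades broadcasts and selects), and taking the upper-middle element for even counts is one common convention:

```python
def middle_value_select(received):
    """Middle value select: sort the values received from all sources and
    return the middle element. With at most f faulty sources among
    sufficiently many inputs, the selected value is bracketed by values
    from non-faulty sources, masking Byzantine outliers."""
    s = sorted(received)
    return s[len(s) // 2]

# One Byzantine source reporting 999 is masked by the select:
print(middle_value_select([10, 10, 10, 999]))  # → 10
```

For inexact communication, as in clock synchronization, the same select bounds the result within the error band of the good sources rather than pinning it to an exact value.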
Remote Entanglement by Coherent Multiplication of Concurrent Quantum Signals
NASA Astrophysics Data System (ADS)
Roy, Ananda; Jiang, Liang; Stone, A. Douglas; Devoret, Michel
2015-10-01
Concurrent remote entanglement of distant, noninteracting quantum entities is a crucial function for quantum information processing. In contrast with the existing protocols which employ the addition of signals to generate entanglement between two remote qubits, the continuous variable protocol we present is based on the multiplication of signals. This protocol can be straightforwardly implemented by a novel Josephson junction mixing circuit. Our scheme would be able to generate provable entanglement even in the presence of practical imperfections: finite quantum efficiency of detectors and undesired photon loss in current state-of-the-art devices.
Social Protocols for Agile Virtual Teams
NASA Astrophysics Data System (ADS)
Picard, Willy
Despite many works on collaborative networked organizations (CNOs), CSCW, groupware, workflow systems and social networks, computer support for virtual teams is still insufficient, especially support for agility, i.e. the capability of virtual team members to rapidly and cost efficiently adapt the way they interact to changes. In this paper, requirements for computer support for agile virtual teams are presented. Next, an extension of the concept of social protocol is proposed as a novel model supporting agile interactions within virtual teams. The extended concept of social protocol consists of an extended social network and a workflow model.
1985-03-01
[Garbled scan; only fragments are recoverable.] The report compared the DOD and ISO networking protocol architecture models, reviewed the LAN protocols developed by the IEEE and ANSI, examined operating systems, and recommended actions to be initiated so as to provide the Air Force a roadmap to guide its technology developments.
Vet, Nienke J; de Wildt, Saskia N; Verlaat, Carin W M; Mooij, Miriam G; Tibboel, Dick; de Hoog, Matthijs; Buysse, Corinne M P
2016-11-01
Our earlier pediatric daily sedation interruption trial showed that daily sedation interruption in addition to protocolized sedation in critically ill children does not reduce duration of mechanical ventilation, length of stay, or amounts of sedative drugs administered when compared with protocolized sedation only, but undersedation was more frequent in the daily sedation interruption + protocolized sedation group. We now report the preplanned analysis comparing short-term health-related quality of life and posttraumatic stress symptoms between the two groups. Preplanned prospective part of a randomized controlled trial. Two tertiary medical-surgical PICUs in the Netherlands. Critically ill children requiring mechanical ventilation. None. Eight weeks after a child's discharge from the PICU, health-related quality of life was assessed with the validated Child Health Questionnaire and, only for children above 4 years old, posttraumatic stress was assessed with the Dutch Children's Responses to Trauma Inventory. Additionally, health-related quality of life of all study patients was compared with Dutch normative data. Of the 113 patients from two participating centers in the original study, 96 patients were eligible for follow-up and 64 patients were included (response rate, 67%). No difference was found with respect to health-related quality of life between the two study groups. None of the eight children more than 4 years old showed posttraumatic stress symptoms. Daily sedation interruption in addition to protocolized sedation for critically ill children did not seem to have an effect on short-term health-related quality of life. Also in view of the earlier found absence of effect on clinical outcome, we cannot recommend the use of daily sedation interruption + protocolized sedation.
Protocols for Molecular Modeling with Rosetta3 and RosettaScripts
2016-01-01
Previously, we published an article providing an overview of the Rosetta suite of biomacromolecular modeling software and a series of step-by-step tutorials [Kaufmann, K. W., et al. (2010) Biochemistry 49, 2987–2998]. The overwhelmingly positive response to that publication motivates us to share here the next iteration of these tutorials, featuring de novo folding, comparative modeling, loop construction, protein docking, small molecule docking, and protein design. This updated and expanded set of tutorials is needed because, since 2010, Rosetta has been fully redesigned into the object-oriented protein modeling program Rosetta3. Notable improvements include a substantially improved energy function, an XML-like language termed “RosettaScripts” for flexibly specifying modeling tasks, new analysis tools, the addition of the TopologyBroker to control conformational sampling, and support for multiple templates in comparative modeling. Rosetta’s ability to model systems with symmetric proteins, membrane proteins, noncanonical amino acids, and RNA has also been greatly expanded and improved. PMID:27490953
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on clustering of the server models, aided by a community detection method applied to a server-model network. Restraining energy terms generated from the selected templates, together with physical and statistical energy terms, were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server model screening: the average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function, along with our optimization method, guided us to generate models of better quality than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereochemistry compared to LEE models, and they were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
The Robinson Protocol: a treadmill anaerobic performance test.
Robinson, Ellyn M; Graham, Louise B; Headley, Samuel A
2004-08-01
The current investigation was designed to further examine the reliability of the Robinson protocol, a run-to-exhaustion treadmill test. Robinson (10) originally examined this protocol with 5 subjects; the significance of that initial exploratory study was the impetus for expanding it to examine the reliability of the protocol with a larger sample. Fifteen male subjects performed 3 trial runs on the treadmill. The first trial was a modified McConnell (7) test to determine the aerobic capacity of each subject. The second and third trials were identical Robinson protocols (10). The first trial run mean, in seconds (262.04 +/- 74.50), was not significantly different from the second trial run mean (257.30 +/- 72.65), p = 0.526 (2-tailed). As expected, trial 1 and trial 2 were highly correlated (intraclass r = 0.927, p < 0.001). These results provide additional support for the hypothesis that the Robinson protocol, tested with a larger subject pool, is a reliable protocol for use in research studies examining various physiological interventions or anaerobic training.
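The abstract does not state which intraclass correlation form was used. As a hedged illustration, a one-way random-effects ICC(1,1) for two trials per subject can be computed as follows; the run times below are hypothetical, not the study's data:

```python
def icc_oneway(trial1, trial2):
    """One-way random-effects intraclass correlation, ICC(1,1), for two
    repeated trials per subject: (MSB - MSW) / (MSB + (k-1)*MSW), with
    between-subject and within-subject mean squares from a one-way ANOVA.
    The study reports intraclass r = 0.927 without specifying the form;
    this is one common choice."""
    k = 2
    n = len(trial1)
    subjects = list(zip(trial1, trial2))
    grand = sum(trial1) / (n * k) * 1 + sum(trial2) / (n * k)
    grand = (sum(trial1) + sum(trial2)) / (n * k)
    means = [sum(s) / k for s in subjects]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for s, m in zip(subjects, means) for x in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical run-to-exhaustion times (s) for 5 subjects, two trials:
print(round(icc_oneway([240, 255, 260, 300, 310],
                       [245, 250, 265, 295, 315]), 3))  # → 0.986
```

Perfectly repeated trials give ICC = 1, and the statistic falls toward (or below) zero as within-subject variability dominates.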
Secret key distillation from shielded two-qubit states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, Joonwoo
The quantum states corresponding to a secret key are characterized using the so-called private states, in which the key part holding the secret key is shielded by additional systems. Based on this construction, it was shown that a secret key can be distilled from bound entangled states. In this work, I consider shielded two-qubit states in a key-distillation scenario and derive the conditions under which a secret key can be distilled using the recurrence protocol or two-way classical distillation, i.e., advantage distillation together with one-way postprocessing. From the security conditions, it is shown that a secret key can be distilled from bound entangled states over a much wider range. In addition, I consider the case in which white noise is added to the quantum states and show that the classical distillation protocol still works despite a certain amount of noise, although the recurrence protocol does not.
Pandis, Nikolaos; Fleming, Padhraig S; Koletsi, Despina; Hopewell, Sally
2016-12-07
It is important that planned randomised trials are justified and placed in the context of the available evidence. The SPIRIT guidelines for reporting clinical trial protocols recommend that a recent and relevant systematic review should be included. The aim of this study was to assess the use of the existing evidence in order to justify trial conduct. Protocols of randomised trials published over a 1-month period (December 2015) indexed in PubMed were obtained. Data on trial characteristics relating to location, design, funding, conflict of interest and type of evidence included for trial justification was extracted in duplicate and independently by two investigators. The frequency of citation of previous research including relevant systematic reviews and randomised trials was assessed. Overall, 101 protocols for RCTs were identified. Most proposed trials were parallel-group (n = 74; 73.3%). Reference to an earlier systematic review with additional randomised trials was found in 9.9% (n = 10) of protocols and without additional trials in 30.7% (n = 31), while reference was made to randomised trials in isolation in 21.8% (n = 22). Explicit justification for the proposed randomised trial on the basis of being the first to address the research question was made in 17.8% (n = 18) of protocols. A randomised controlled trial was not cited in 10.9% (95% CI: 5.6, 18.7) (n = 11), while in 8.9% (95% CI: 4.2, 16.2) (n = 9) of the protocols a systematic review was cited but did not inform trial design. A relatively high percentage of protocols of randomised trials involves prior citation of randomised trials, systematic reviews or both. However, improvements are required to ensure that it is explicit that clinical trials are justified and shaped by contemporary best evidence.
NASA Technical Reports Server (NTRS)
Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina
2013-01-01
MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and to display events in real time. The PPS specific protocols are an enhanced version of those that were developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. MPISS generates and sends to the PPS data notices containing data start and stop times along with a checksum for the file for each observatory data file transmitted. MPISS retrieves the PPS generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice. MPISS retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS specific protocols in parallel. The advantage of this capability is that it supports users that require the PPS protocol as well as those that do not require it. The system is highly configurable to accommodate the needs of future users.
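The data-notice/data-receipt handshake for observatory files can be illustrated with a minimal sketch. The JSON container, field names, and MD5 checksum choice are assumptions for illustration; the abstract says only that a notice carries data start/stop times and a checksum, not the actual MPISS/PPS wire format:

```python
import hashlib
import json
import pathlib

def make_data_notice(path, data_start, data_stop):
    """Sender-side sketch: build a data notice carrying the observatory
    file's start/stop times and a checksum of its contents. Field names
    and JSON framing are illustrative, not the real MPISS format."""
    digest = hashlib.md5(pathlib.Path(path).read_bytes()).hexdigest()
    return json.dumps({"file": pathlib.Path(path).name,
                       "data_start": data_start,
                       "data_stop": data_stop,
                       "checksum": digest})

def verify_notice(path, notice_json):
    """Receiver-side check behind the data receipt: recompute the checksum
    of the delivered file and compare it with the notice. A mismatch would
    trigger a failure receipt and a retransmission."""
    notice = json.loads(notice_json)
    digest = hashlib.md5(pathlib.Path(path).read_bytes()).hexdigest()
    return digest == notice["checksum"]
```

In the real system both the file and the notice travel over SFTP, and MPISS retransmits whichever piece the PPS receipt flags as failed.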
Sachs, Peter B; Hunt, Kelly; Mansoubi, Fabien; Borgstede, James
2017-02-01
Building and maintaining a comprehensive yet simple set of standardized protocols for cross-sectional imaging can be a daunting task. A single department may have difficulty preventing "protocol creep," which almost inevitably occurs when an organized "playbook" of protocols does not exist and individual radiologists and technologists alter protocols at will and on a case-by-case basis. When multiple departments or groups function in a large health system, the lack of uniformity of protocols can increase exponentially. In 2012, the University of Colorado Hospital formed a large health system (UCHealth) and became a 5-hospital provider network. CT and MR imaging studies are conducted at multiple locations by different radiology groups. To facilitate consistency in ordering, acquisition, and appearance of a given study, regardless of location, we minimized the number of protocols across all scanners and sites of practice with a clinical indication-driven protocol selection and standardization process. Here we review the steps utilized to perform this process improvement task and ensure its stability over time. Actions included creation of a standardized protocol template, which allowed for changes in electronic storage and management of protocols, design of a change request form, and formation of a governance structure. We utilized rapid improvement events (1 day for CT, 2 days for MR) and consolidated 248 CT protocols into 97 standardized protocols and 168 MR protocols into 66. Additional steps are underway to further standardize the output and reporting of imaging interpretation. This will result in an improved, consistent radiologist, patient, and provider experience across the system.
NASA Astrophysics Data System (ADS)
Florea, Michael; Reeve, Benjamin; Abbott, James; Freemont, Paul S.; Ellis, Tom
2016-03-01
Bacterial cellulose is a strong, highly pure form of cellulose that is used in a range of applications in industry, consumer goods and medicine. Gluconacetobacter hansenii ATCC 53582 is one of the highest reported bacterial cellulose producing strains and has been used as a model organism in numerous studies of bacterial cellulose production and in studies aiming to increase cellulose productivity. Here we present a high-quality draft genome sequence for G. hansenii ATCC 53582 and find that, in addition to the previously described cellulose synthase operon, ATCC 53582 contains two additional cellulose synthase operons and several previously undescribed genes associated with cellulose production. In parallel, we also develop optimized protocols and identify plasmid backbones suitable for transformation of ATCC 53582, albeit with low efficiencies. Together, these results provide important information for further studies into cellulose synthesis and for future studies aiming to genetically engineer G. hansenii ATCC 53582 for increased cellulose productivity.
Benetti, Marion; Steen-Larsen, Hans Christian; Reverdin, Gilles; Sveinbjörnsdóttir, Árný Erla; Aloisi, Giovanni; Berkelhammer, Max B.; Bourlès, Bernard; Bourras, Denis; de Coetlogon, Gaëlle; Cosgrove, Ann; Faber, Anne-Katrine; Grelet, Jacques; Hansen, Steffen Bo; Johnson, Rod; Legoff, Hervé; Martin, Nicolas; Peters, Andrew J.; Popp, Trevor James; Reynaud, Thierry; Winther, Malte
2017-01-01
The water vapour isotopic composition (¹H₂¹⁶O, ¹H₂¹⁸O and ¹H²H¹⁶O) of the Atlantic marine boundary layer has been measured from 5 research vessels between 2012 and 2015. Using laser spectroscopy analysers, measurements have been carried out continuously on samples collected 10–20 m above sea level. All the datasets have been carefully calibrated against the international VSMOW-SLAP scale following the same protocol to build a homogeneous dataset covering the Atlantic Ocean from 4°S to 63°N. In addition, standard meteorological variables have been measured continuously, including sea surface temperatures using calibrated thermosalinographs for most cruises. All calibrated observations are provided at 15-minute resolution. We also provide 6-hourly data to allow easier comparisons with simulations from isotope-enabled general circulation models. In addition, backward trajectories from the HYSPLIT model are supplied every 6 hours for the position of our measurements. PMID:28094798
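The 6-hourly product described above amounts to binning the calibrated 15-minute samples. A minimal sketch of such an aggregation, assuming simple unweighted bin means (which may differ from the authors' exact averaging scheme):

```python
from datetime import datetime
from collections import defaultdict
from statistics import mean

def to_six_hourly(records):
    """Average (timestamp, value) samples into 6-hour bins.

    `records` is a list of (datetime, value) pairs at ~15-minute
    resolution; bin edges fall at 00/06/12/18 UTC. Illustrative only.
    """
    bins = defaultdict(list)
    for t, v in records:
        # Snap each timestamp down to the start of its 6-hour window.
        edge = t.replace(hour=(t.hour // 6) * 6, minute=0,
                         second=0, microsecond=0)
        bins[edge].append(v)
    return {edge: mean(vals) for edge, vals in sorted(bins.items())}
```

The same binning applied to ship position would give the 6-hourly locations at which the HYSPLIT back-trajectories are initialized.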
A multigear protocol for sampling crayfish assemblages in Gulf of Mexico coastal streams
William R. Budnick; William E. Kelso; Susan B. Adams; Michael D. Kaller
2018-01-01
Identifying an effective protocol for sampling crayfish in streams that vary in habitat and physical/chemical characteristics has proven problematic. We evaluated an active, combined-gear (backpack electrofishing and dipnetting) sampling protocol in 20 Coastal Plain streams in Louisiana. Using generalized linear models and rarefaction curves, we evaluated environmental...
Column chromatography as a useful step in purification of diatom pigments.
Tokarek, Wiktor; Listwan, Stanisław; Pagacz, Joanna; Leśniak, Piotr; Latowski, Dariusz
2016-01-01
Fucoxanthin, diadinoxanthin and diatoxanthin are carotenoids found in brown algae and most other heterokonts. These pigments are involved in photosynthetic and photoprotective reactions, and they have many potential health benefits. They can be extracted from the diatom Phaeodactylum tricornutum by sonication, extraction with chloroform:methanol, and preparative thin-layer chromatography. We assessed the utility of an additional column chromatography step in purification of these pigments. This novel addition to the isolation protocol increased the purity of fucoxanthin and allowed for concentration of diadinoxanthin and diatoxanthin before HPLC separation. The enhanced protocol is useful for obtaining high purity pigments for biochemical studies.
Bower, John F.; Kim, In Su; Patman, Ryan L.; Krische, Michael J.
2009-01-01
Classical protocols for carbonyl allylation, propargylation and vinylation typically rely upon the use of preformed allyl metal, allenyl metal and vinyl metal reagents, respectively, mandating stoichiometric generation of metallic byproducts. Through transfer hydrogenative C–C coupling, carbonyl addition may be achieved from the aldehyde or alcohol oxidation level in the absence of stoichiometric organometallic reagents or metallic reductants. Here, we review transfer hydrogenative methods for carbonyl addition, which encompass the first catalytic protocols enabling direct C–H functionalization of alcohols. PMID:19040235
Plontke, Stefan K; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N
2007-01-01
Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. 
Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. Copyright 2007 S. Karger AG, Basel.
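The passive diffusion driving both the 1D and 3D simulations can be illustrated with a toy 1D finite-difference solver. The geometry, diffusivity and time step below are illustrative only, not the paper's fitted cochlear pharmacokinetics.

```python
def diffuse_1d(c0, D, dx, dt, steps):
    """Explicit finite-difference solution of 1D passive diffusion
    (dc/dt = D d2c/dx2) with no-flux ends, a toy version of the
    compartmental transport in the cochlear model."""
    k = D * dt / dx ** 2
    assert k <= 0.5, "explicit scheme stability limit"
    c = list(c0)
    n = len(c)
    for _ in range(steps):
        lap = [0.0] * n
        for i in range(1, n - 1):
            lap[i] = c[i + 1] - 2 * c[i] + c[i - 1]
        lap[0] = c[1] - c[0]        # no-flux (reflecting) boundary
        lap[-1] = c[-2] - c[-1]
        c = [c[i] + k * lap[i] for i in range(n)]
    return c

# A bolus at the basal end spreads apically, producing the persistent
# base-to-apex concentration gradient the simulations describe.
c0 = [0.0] * 100
c0[0] = 1.0
c = diffuse_1d(c0, D=1e-3, dx=0.1, dt=1.0, steps=5000)
```

Even this crude model reproduces the qualitative result quoted above: long after application, the basal end still holds far more drug than the apex, and total drug is conserved by the no-flux boundaries.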
Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.
2006-01-01
Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. 
Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. PMID:17119332
TH-C-18A-08: A Management Tool for CT Dose Monitoring, Analysis, and Protocol Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Chan, F; Newman, B
2014-06-15
Purpose: To develop a customizable tool for enterprise-wide management of CT protocols and analysis of radiation dose information of CT exams for a variety of quality control applications. Methods: All clinical CT protocols implemented on the 11 CT scanners at our institution were extracted in digital format. The original protocols had been preset by our CT management team. A commercial CT dose tracking software (DoseWatch, GE Healthcare, WI) was used to collect exam information (exam date, patient age, etc.), scanning parameters, and radiation doses for all CT exams. We developed a Matlab-based program (MathWorks, MA) with a graphical user interface which allows the user to analyze the scanning protocols with the actual dose estimates and compare the data to national (ACR, AAPM) and internal reference values for CT quality control. Results: The CT protocol review portion of our tool allows the user to look up the scanning and image reconstruction parameters of any protocol on any of the installed CT systems, among about 120 protocols per scanner. In the dose analysis tool, dose information of all CT exams (from 05/2013 to 02/2014) was stratified on a protocol level, and within a protocol down to series level, i.e., each individual exposure event. This allows numerical and graphical review of dose information of any combination of scanner models, protocols, and series. The key functions of the tool include: statistics of CTDI, DLP, and SSDE; dose monitoring using user-set CTDI/DLP/SSDE thresholds; look-up of any CT exam dose data; and CT protocol review. Conclusion: Our in-house CT management tool provides radiologists, technologists, and administration first-hand, near real-time, enterprise-wide knowledge of CT dose levels of different exam types. Medical physicists use this tool to manage CT protocols and to compare and optimize dose levels across different scanner models. It provides technologists feedback on CT scanning operation, and knowledge of important dose baselines and thresholds.
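The threshold-based dose monitoring and protocol-level statistics described above can be sketched as follows. The record fields and threshold values are hypothetical placeholders, not the tool's actual data model (the tool itself is Matlab-based).

```python
from statistics import mean, median

# Hypothetical per-exam records as exported by dose-tracking software;
# field names and numeric limits are illustrative.
exams = [
    {"protocol": "Abdomen/Pelvis", "ctdi_vol": 12.1, "dlp": 560.0},
    {"protocol": "Abdomen/Pelvis", "ctdi_vol": 25.4, "dlp": 1150.0},
    {"protocol": "Head", "ctdi_vol": 54.0, "dlp": 880.0},
]

THRESHOLDS = {"Abdomen/Pelvis": {"ctdi_vol": 20.0, "dlp": 1000.0},
              "Head": {"ctdi_vol": 75.0, "dlp": 1000.0}}

def flag_exams(exams, thresholds):
    """Return exams whose CTDIvol or DLP exceed the per-protocol limit."""
    flagged = []
    for e in exams:
        lim = thresholds[e["protocol"]]
        if e["ctdi_vol"] > lim["ctdi_vol"] or e["dlp"] > lim["dlp"]:
            flagged.append(e)
    return flagged

def protocol_stats(exams, protocol):
    """Dose statistics stratified at the protocol level."""
    vals = [e["ctdi_vol"] for e in exams if e["protocol"] == protocol]
    return {"n": len(vals), "mean": mean(vals), "median": median(vals)}
```

Stratifying further "down to series level" would simply add a series key to each record and group on (protocol, series) instead of protocol alone.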
Teratology studies in the mouse.
Marsden, Edward; Leroy, Mariline
2013-01-01
The rat is the routine species of choice as the rodent model for regulatory safety testing of xenobiotics such as medicinal products, food additives, and other chemicals. However, the rat is not always suitable, for pharmacological, toxicological, immunogenic, pharmacokinetic, or even practical reasons. Under such circumstances, the mouse offers an alternative rodent model acceptable to the regulatory authorities. All essential routes of administration are possible, and the short reproductive cycle and large litter size of the mouse make it a species well adapted for use in teratology studies. Good quality animals, including virgin mated females, can be acquired relatively easily and inexpensively; the mouse has therefore been used in reproductive toxicity studies for decades, and study protocols are well established.
EMDataBank unified data resource for 3DEM.
Lawson, Catherine L; Patwardhan, Ardan; Baker, Matthew L; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P; Lagerstedt, Ingvar; Ludtke, Steven J; Pintilie, Grigore; Sala, Raul; Westbrook, John D; Berman, Helen M; Kleywegt, Gerard J; Chiu, Wah
2016-01-04
Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Ruiz, Francisco J; Luciano, Carmen
2009-08-01
Acceptance and Commitment Therapy (ACT) has been shown to be effective in fields relatively distant from the so-called psychological disorders. One of these areas is sport performance improvement. The aim of the current study is to expand the application of brief ACT protocols to improve chess-players' performance. In a previous study, a brief protocol applied to international-level adult chess-players proved effective. The current study applies an equivalent brief ACT protocol, but in this case in a group format to promising young chess-players. In addition, this brief protocol is compared to a non-intervention control condition. Results show that the brief ACT protocol improved performance in 5 out of 7 participants, and that none of the chess-players in the control condition reached the established change criterion. The differences between the conditions in chess performance were statistically significant. The results are discussed, emphasizing the replicated impact of a brief ACT protocol on the improvement of chess-players' performance.
Thermo-chemical pretreatment of rice straw for further processing for levulinic acid production.
Elumalai, Sasikumar; Agarwal, Bhumica; Sangwan, Rajender S
2016-10-01
A variety of pretreatment protocols for rice straw fiber reconstruction were evaluated under mild conditions (up to 0.2 wt% and 121°C) with the aim of improving polymer susceptibility to chemical attack while preserving carbohydrate sugars for levulinic acid (LA) production. Each of the protocols tested significantly enhanced pretreatment recoveries of carbohydrate sugars and lignin, and a NaOH protocol showed the most promise, with enhanced carbohydrate preservation (up to 20% relative to the other protocols) and more effective lignin dissolution (up to 60%). Consequently, post-pretreatment fibers were evaluated for LA preparation using an existing co-solvent system consisting of HCl and THF; supplementation with DMSO was additionally attempted in order to improve final product recovery. In contrast to the pretreatment response, fibers from the H2SO4 protocol yielded the highest LA concentration (21 wt%, with 36% carbohydrate conversion efficiency) under the modest reaction conditions. Spectroscopic analysis confirmed fiber destruction and delocalization of inherent constituents during the pretreatment protocols. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ingels, Frank; Owens, John; Daniel, Steven
1989-01-01
The protocol definition and terminal hardware for the modified free access (MFA) protocol, a communications protocol similar to Ethernet, are developed. An MFA protocol simulator and a CSMA/CD math model are also developed. The protocol is tailored to communication systems where the total traffic may be divided into scheduled traffic and Poisson traffic. The scheduled traffic should occur on a periodic basis but may occur after a given event such as a request for data from a large number of stations. The Poisson traffic will include alarms and other random traffic. The purpose of the protocol is to guarantee that scheduled packets will be delivered without collision. This is required in many control and data collection systems. The protocol uses standard Ethernet hardware and software, requiring minimum modifications to an existing system. The modification to the protocol affects only the Ethernet transmission privileges and does not affect the Ethernet receiver.
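The core idea, reserving capacity so that scheduled packets are collision-free while Poisson traffic contends for the remainder, can be sketched with a toy slotted model. The frame structure here is an assumption for illustration, not the actual MFA design (which modifies Ethernet transmission privileges rather than using fixed slots).

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson(lam) arrival count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_frame(n_slots, reserved, random_arrivals):
    """Toy slotted frame: scheduled packets own reserved slots, random
    (Poisson) packets contend only for the remaining slots, so scheduled
    traffic is collision-free by construction."""
    outcome = {}
    for slot in range(n_slots):
        if slot in reserved:
            outcome[slot] = "scheduled"      # guaranteed delivery
        else:
            contenders = random_arrivals.get(slot, 0)
            outcome[slot] = ("idle" if contenders == 0
                             else "random" if contenders == 1
                             else "collision")
    return outcome

# Usage: two reserved slots per 8-slot frame; alarms and other random
# traffic arrive at the unreserved slots with mean 0.3 per slot.
rng = random.Random(42)
arrivals = {s: poisson(0.3, rng) for s in range(8) if s not in {0, 4}}
frame = simulate_frame(8, {0, 4}, arrivals)
```

Collisions can only occur among the random packets; the scheduled slots deliver every time, which is the guarantee the protocol provides for control and data collection traffic.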
Practical State Machine Replication with Confidentiality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Zhang, Haibin
2016-01-01
We study how to enable arbitrary randomized algorithms in Byzantine fault-tolerant (BFT) settings. We formalize a randomized BFT protocol and provide a simple and efficient construction that can be built on any existing BFT protocols while adding practically no overhead. We go one step further to revisit a confidential BFT protocol (Yin et al., SOSP '03). We show that their scheme is potentially susceptible to safety and confidentiality attacks. We then present a new protocol that is secure in the stronger model we formalize, by extending the idea of a randomized BFT protocol. Our protocol uses only efficient symmetric cryptography, while Yin et al.'s uses costly threshold signatures. We implemented and evaluated our protocols on microbenchmarks and real-world use cases. We show that our randomized BFT protocol is as efficient as conventional BFT protocols, and our confidential BFT protocol is two to three orders of magnitude faster than Yin et al.'s, which is less secure than ours.
Scalable Online Network Modeling and Simulation
2005-08-01
Szymanski, Boleslaw; Kalyanaraman, Shivkumar; Sikdar, Biplab; Carothers, Christopher
...performance for a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature interactions.
Odaka, Mizuho; Minakata, Kenji; Toyokuni, Hideaki; Yamazaki, Kazuhiro; Yonezawa, Atsushi; Sakata, Ryuzo; Matsubara, Kazuo
2015-08-01
This study aimed to develop and assess the effectiveness of a protocol for antibiotic prophylaxis based on preoperative kidney function in patients undergoing open heart surgery. The novel protocol was assessed by comparing patients undergoing open heart surgery before (control group; n = 30) and after its implementation (protocol group; n = 31) at Kyoto University Hospital between July 2012 and January 2013. Surgical site infections (SSIs) were observed in 4 control group patients (13.3%), whereas no SSIs were observed in the protocol group patients (P < 0.05). The total duration of antibiotic use decreased significantly from 80.7 ± 17.6 h (mean ± SD) in the control group to 55.5 ± 14.9 h in the protocol group (P < 0.05). Similarly, introduction of the protocol significantly decreased the total antibiotic dose used in the perioperative period (P < 0.05). Furthermore, antibiotic regimens were changed under suspicion of infection in 5 of 30 control group patients, whereas none of the protocol group patients required this additional change in the antibiotic regimen (P < 0.05). Our novel antibiotic prophylaxis protocol based on preoperative kidney function effectively prevents SSIs in patients undergoing open heart surgery.
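A protocol of this kind typically keys the prophylactic dosing schedule to estimated creatinine clearance. The sketch below uses the standard Cockcroft-Gault estimate, but the cutoffs and intervals are illustrative placeholders, not the protocol actually used in the study.

```python
def creatinine_clearance(age, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault estimate of creatinine clearance (mL/min)."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def redosing_interval_h(crcl):
    """Antibiotic redosing interval chosen by kidney function.

    The cutoffs and hour values are hypothetical placeholders; a slower
    clearance means the drug persists longer, so redosing is spaced out.
    """
    if crcl >= 60:
        return 4
    if crcl >= 30:
        return 8
    return 12
```

Driving the perioperative schedule from such a function is what lets a protocol like this cut total antibiotic dose and duration without losing SSI protection.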
Herts, Brian R; Baker, Mark E; Obuchowski, Nancy; Primak, Andrew; Schneider, Erika; Rhana, Harpreet; Dong, Frank
2013-06-01
The purpose of this article is to determine the decrease in volume CT dose index (CTDI(vol)) and dose-length product (DLP) achieved by switching from fixed quality reference tube current protocols with automatic tube current modulation to protocols adjusting the quality reference tube current, slice collimation, and peak kilovoltage according to patient weight. All adult patients who underwent CT examinations of the abdomen or abdomen and pelvis during 2010 using weight-based protocols who also underwent a CT examination in 2008 or 2009 using fixed quality reference tube current protocols were identified from the radiology information system. Protocol pages were electronically retrieved, and the CT model, examination date, scan protocol, CTDI(vol), and DLP were extracted from the DICOM header or by optical character recognition. There were 15,779 scans with dose records for 2700 patients. Changes in CTDI(vol) and DLP were compared only between examinations of the same patient and same CT system model for examinations performed in 2008 or 2009 and those performed in 2010. The final analysis consisted of 1117 comparisons in 1057 patients, and 1209 comparisons in 988 patients for CTDI(vol) and DLP, respectively. The change to a weight-based protocol resulted in a statistically significant reduction in CTDI(vol) and DLP on three MDCT system models (p < 0.001). The largest average CTDI(vol) decrease was 13.9%, and the largest average DLP decrease was 16.1% on a 64-MDCT system. Both the CTDI(vol) and DLP decreased the most for patients who weighed less than 250 lb (112.5 kg). Adjusting the CT protocol by selecting parameters according to patient weight is a viable method for reducing CT radiation dose. The largest reductions occurred in the patients weighing less than 250 lb.
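Conceptually, a weight-based protocol is a lookup from weight band to scan parameters, and the study's comparison is a paired per-patient percent change. Both are sketched below with hypothetical band values; the published protocol's actual parameters are not reproduced here.

```python
# Hypothetical weight bands (lb) mapping to peak kilovoltage and quality
# reference mAs; the study's real values are not reproduced.
WEIGHT_BANDS = [
    (0, 150, 120, 160),
    (150, 250, 120, 200),
    (250, float("inf"), 140, 240),
]

def weight_based_params(weight_lb):
    """Select kVp and quality reference mAs from the patient's weight band."""
    for lo, hi, kvp, ref_mas in WEIGHT_BANDS:
        if lo <= weight_lb < hi:
            return {"kvp": kvp, "ref_mas": ref_mas}

def percent_decrease(before, after):
    """Paired per-patient dose change (e.g. CTDIvol or DLP), as in the
    same-patient, same-scanner comparison the study performs."""
    return 100.0 * (before - after) / before
```

Because the comparison is paired within patient and scanner model, `percent_decrease` applied to each pair isolates the effect of the protocol change from patient-mix and hardware differences.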
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. 
RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.
DoS detection in IEEE 802.11 with the presence of hidden nodes
Soryal, Joseph; Liu, Xijie; Saadawi, Tarek
2013-01-01
The paper presents a novel technique to detect Denial of Service (DoS) attacks applied by misbehaving nodes in wireless networks with hidden nodes, employing the widely used IEEE 802.11 Distributed Coordination Function (DCF) protocols described in the IEEE standard [1]. Attacker nodes alter the IEEE 802.11 DCF firmware to illicitly capture the channel by elevating the probability of the average number of packets transmitted successfully, thereby using up the bandwidth share of the innocent nodes that follow the protocol standards. We obtained the theoretical network throughput by solving a two-dimensional Markov chain model, as described by Bianchi [2] and Liu and Saadawi [3], to determine the channel capacity. We validated the results obtained via the theoretical computations against the results obtained with the OPNET simulator [4] to define the baseline for the average attainable throughput in the channel under standard conditions where all nodes follow the standards. The main goal of the DoS attacker is to prevent the innocent nodes from accessing the channel by capturing the channel's bandwidth. In addition, the attacker strives to appear as an innocent node that follows the standards. The protocol resides in every node to enable each node to police other nodes in its immediate wireless coverage area. All innocent nodes are able to detect and identify the DoS attacker in their wireless coverage area. We applied the protocol to two Physical Layer technologies: Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping Spread Spectrum (FHSS), and the results are presented to validate the algorithm. PMID:25685510
DoS detection in IEEE 802.11 with the presence of hidden nodes.
Soryal, Joseph; Liu, Xijie; Saadawi, Tarek
2014-07-01
The paper presents a novel technique to detect Denial of Service (DoS) attacks applied by misbehaving nodes in wireless networks with hidden nodes, employing the widely used IEEE 802.11 Distributed Coordination Function (DCF) protocols described in the IEEE standard [1]. Attacker nodes alter the IEEE 802.11 DCF firmware to illicitly capture the channel by elevating the probability of the average number of packets transmitted successfully, thereby using up the bandwidth share of the innocent nodes that follow the protocol standards. We obtained the theoretical network throughput by solving a two-dimensional Markov chain model, as described by Bianchi [2] and Liu and Saadawi [3], to determine the channel capacity. We validated the results obtained via the theoretical computations against the results obtained with the OPNET simulator [4] to define the baseline for the average attainable throughput in the channel under standard conditions where all nodes follow the standards. The main goal of the DoS attacker is to prevent the innocent nodes from accessing the channel by capturing the channel's bandwidth. In addition, the attacker strives to appear as an innocent node that follows the standards. The protocol resides in every node to enable each node to police other nodes in its immediate wireless coverage area. All innocent nodes are able to detect and identify the DoS attacker in their wireless coverage area. We applied the protocol to two Physical Layer technologies: Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping Spread Spectrum (FHSS), and the results are presented to validate the algorithm.
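The two-dimensional Markov chain analysis cited above reduces, in Bianchi's formulation, to a two-equation fixed point that can be solved numerically. The sketch below solves it by bisection with illustrative parameters, and models a homogeneous network rather than the paper's mixed honest/attacker scenario.

```python
def dcf_tau(n, w_min, m, tol=1e-12):
    """Solve Bianchi's saturation model of 802.11 DCF for tau, the
    per-slot transmit probability, via bisection on
        tau = 2(1-2p) / ((1-2p)(W+1) + pW(1-(2p)^m)),
        p   = 1 - (1-tau)^(n-1),
    for n stations, minimum contention window W = w_min, m backoff stages.
    """
    def residual(tau):
        p = 1 - (1 - tau) ** (n - 1)
        expr = (2 * (1 - 2 * p)) / ((1 - 2 * p) * (w_min + 1)
                                    + p * w_min * (1 - (2 * p) ** m))
        return expr - tau

    lo, hi = 1e-9, 1 - 1e-9   # residual > 0 at lo, < 0 at hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# Ten stations with standard-like backoff versus ten with a shrunken,
# non-compliant window: the aggressive setting yields a far higher
# per-slot transmit probability, the kind of deviation from the
# theoretical baseline that a detector can look for.
tau_std = dcf_tau(n=10, w_min=32, m=5)
tau_cheat = dcf_tau(n=10, w_min=2, m=1)
```

Comparing a node's observed transmission rate against the `tau_std` baseline (validated in the paper against OPNET simulation) is one way to flag firmware that has shrunk its contention window.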
Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus
2018-06-01
Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgical process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. The accuracy of drilling holes was precisely analyzed, and the influence of different levels of operator expertise and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step versus the multi-step technique. Single-step drilling protocols were shown to produce more accurate results than multi-step procedures. The outcome of either protocol can be further improved by the use of guiding templates, and operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.