Sample records for optimized electrofusion-based protocol

  1. Comparison of electro-fusion and intracytoplasmic nuclear injection methods in pig cloning.

    PubMed

    Kurome, Mayuko; Fujimura, Tatsuya; Murakami, Hiroshi; Takahagi, Yoichi; Wako, Naohiro; Ochiai, Takashi; Miyazaki, Koji; Nagashima, Hiroshi

    2003-01-01

    This paper methodologically compares the electro-fusion (EF) and intracytoplasmic injection (ICI) methods, as well as simultaneous fusion/activation (SA) and delayed activation (DA), in somatic nuclear transfer in pigs using fetal fibroblast cells. Comparison of the remodeling pattern of donor nuclei after nuclear transfer by ICI or EF showed that a high rate (80-100%) of premature chromosome condensation occurred in both cases, whether or not Ca2+ was present in the fusion medium. Formation of pseudo-pronuclei tended to be lower for nuclear transfer performed by the ICI method (65% vs. 85-97%, p < 0.05). The in vitro developmental potential of nuclear transfer embryos reconstructed with IVM oocytes using the EF method was higher than that of those produced by the ICI method (blastocyst formation: 19 vs. 5%, p < 0.05), and it was not improved by using in vivo-matured oocytes as recipient cytoplasts. Embryos produced using the SA protocol developed to blastocysts with the same efficiency as those produced under the DA protocol (11 vs. 12%). Use of the EF method in conjunction with SA was shown to be an efficient approach for producing cloned pigs, as demonstrated by production of a normal cloned pig fetus. However, subtle differences in nuclear remodeling patterns between the SA and DA protocols may imply variations in their nuclear reprogramming efficiency.

  2. Screening somatic cell nuclear transfer parameters for generation of transgenic cloned cattle with intragenomic integration of additional gene copies that encode bovine adipocyte-type fatty acid-binding protein (A-FABP).

    PubMed

    Guo, Yong; Li, Hejuan; Wang, Ying; Yan, Xingrong; Sheng, Xihui; Chang, Di; Qi, Xiaolong; Wang, Xiangguo; Liu, Yunhai; Li, Junya; Ni, Hemin

    2017-02-01

    Somatic cell nuclear transfer (SCNT) is frequently used to produce transgenic cloned livestock, but it is still associated with low success rates. To our knowledge, we are the first to report successful production of transgenic cattle that overexpress bovine adipocyte-type fatty acid-binding protein (A-FABP) with the aid of SCNT. Intragenomic integration of additional A-FABP gene copies has been found to be positively correlated with intramuscular fat content in different farm livestock species. First, we optimized the cloning parameters used to produce A-FABP-integrated bovine embryos by SCNT, such as the applied field strength and pulse duration for electrofusion, the morphology and size of donor cells, and the number of donor cell passages. Then, bovine fibroblast cells from Qinchuan cattle were transfected with A-FABP and used as donor cells for SCNT. Hybrids of Simmental and Luxi local cattle were selected as the recipient females for A-FABP transgenic SCNT-derived embryos. The results showed that a field strength of 2.5 kV/cm with two 10-μs electrical pulses was ideal for electrofusion, and that round, smooth 4th- to 6th-passage donor cells with diameters of 15-25 μm were optimal for producing transgenic bovine embryos by SCNT, resulting in higher fusion (80%), cleavage (73%), and blastocyst (27%) rates. In addition, we obtained two transgenic cloned calves that expressed additional bovine A-FABP gene copies, as detected by PCR-amplified cDNA sequencing. We propose a set of optimal protocols for producing transgenic SCNT-derived cattle with intragenomic integration of additional A-FABP gene copies.
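
    The reported optimum (2.5 kV/cm field strength, two 10-μs pulses, round 15-25 μm donor cells at passage 4-6) can be recorded as one point in a screening grid. The sketch below is only an illustrative way to encode such a grid and select the best-scoring condition; the non-optimal grid entries and the selection rule are assumptions, not data or code from the study.

```python
from dataclasses import dataclass

@dataclass
class FusionSetting:
    """One electrofusion condition from a screening grid (illustrative only)."""
    field_kv_per_cm: float   # applied field strength
    pulse_us: float          # duration of each pulse
    n_pulses: int            # number of pulses
    fusion_rate: float       # observed fraction of couplets fused
    blastocyst_rate: float   # observed blastocyst rate

# Hypothetical screening grid; only the 2.5 kV/cm, 2 x 10 us entry
# reproduces the rates reported in the abstract (80% fusion, 27% blastocyst).
grid = [
    FusionSetting(2.0, 10, 2, 0.65, 0.18),   # assumed values, for illustration
    FusionSetting(2.5, 10, 2, 0.80, 0.27),   # reported optimum
    FusionSetting(3.0, 10, 2, 0.72, 0.15),   # assumed values, for illustration
]

# Pick the setting that maximizes blastocyst rate, breaking ties by fusion rate.
best = max(grid, key=lambda s: (s.blastocyst_rate, s.fusion_rate))
print(f"Selected: {best.field_kv_per_cm} kV/cm, "
      f"{best.n_pulses} x {best.pulse_us} us pulses")
```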

  3. The systematic study of the electroporation and electrofusion of B16-F1 and CHO cells in isotonic and hypotonic buffer.

    PubMed

    Usaj, Marko; Kanduser, Masa

    2012-09-01

    The fusogenic state of the cell membrane can be induced by an external electric field. When two fusogenic membranes are in close contact, cell fusion takes place. An appropriate hypotonic treatment of cells before the application of electric pulses significantly improves electrofusion efficiency; how hypotonic treatment improves electrofusion is still not known in detail. Our results indicate that, at a given induced transmembrane potential, electroporation was not affected by buffer osmolarity. In contrast to electroporation, the cells' response to hypotonic treatment significantly affects their electrofusion. High fusion yields were observed with B16-F1 cells: 41 ± 9% in hypotonic buffer versus 32 ± 11% in isotonic buffer. To our knowledge, these fusion yields, determined in situ by dual-color fluorescence microscopy, are among the highest reported in the electrofusion field. The use of hypotonic buffer was more crucial for electrofusion of CHO cells: the fusion yield increased from below 1% in isotonic buffer to 10 ± 4% in hypotonic buffer. Since the same degree of cell permeabilization was achieved in both buffers, these results indicate that hypotonic treatment significantly improves fusion yield. The effect could be attributed to improved physical contact of cell membranes or to an enhanced fusogenic state of the cell membrane itself.

  4. Lightning-triggered electroporation and electrofusion as possible contributors to natural horizontal gene transfer.

    PubMed

    Kotnik, Tadej

    2013-09-01

    Phylogenetic studies show that horizontal gene transfer (HGT) is a significant contributor to the genetic variability of prokaryotes, and was perhaps even more abundant during early evolution. Hitherto, research on natural HGT has mainly focused on three mechanisms of DNA transfer: conjugation, natural competence, and viral transduction. This paper discusses the feasibility of a fourth such mechanism: cell electroporation and/or electrofusion triggered by atmospheric electrostatic discharges (lightning strikes). A description of electroporation as a phenomenon is followed by a review of experimental evidence that electroporation of prokaryotes in aqueous environments can result in release of non-denatured DNA, as well as uptake of DNA from the surroundings and transformation. Similarly, a description of electrofusion is followed by a review of experiments showing that prokaryotes devoid of a cell wall can electrofuse into hybrids expressing the genes of both of their precursors. Under sufficiently fine-tuned conditions, electroporation and electrofusion are efficient tools for artificial transformation and hybridization, respectively, but the quantitative analysis developed here shows that conditions for electroporation-based DNA release, DNA uptake and transformation, as well as for electrofusion, are also present in many natural aqueous environments exposed to lightning. Electroporation is thus a plausible contributor to natural HGT among prokaryotes, and could have been particularly important during early evolution, when the other mechanisms might have been scarcer or nonexistent. In modern prokaryotes, natural absence of the cell wall is rare, but it is reasonable to assume that the wall formed during a certain stage of evolution, and at least prior to this, electrofusion could also have contributed to natural HGT. The concluding section outlines several guidelines for assessment of the feasibility of lightning-triggered HGT. © 2013 Elsevier B.V. All rights reserved.

  5. The Influence of Vesicle Shape and Medium Conductivity on Possible Electrofusion under a Pulsed Electric Field

    PubMed Central

    Liu, Linying; Mao, Zheng; Zhang, Jianhua; Liu, Na; Liu, Qing Huo

    2016-01-01

    The effects of electric fields on lipid membranes and cells have been extensively studied in recent decades. The phenomena of electroporation and electrofusion are of particular interest due to their wide use in cell biology and biotechnology. However, numerical studies on the electrofusion of cells (or vesicles) with different deformed shapes are still rare. A vesicle of cell size can be treated as a simple model of a cell to investigate cell behavior in an electric field. Based on the finite element method, we investigate the effect of vesicle shape on the electrofusion of contacting vesicles under various medium conditions. The transmembrane voltage (TMV) and pore density induced by a pulsed field are examined to analyze the possibility of vesicle fusion. Across the two medium conditions, prolate vesicles show selective electroporation at the contact area when the exterior conductivity is smaller than the interior one, whereas selective electroporation is more likely to occur at the poles of oblate vesicles when the exterior conductivity is larger than the interior one. Furthermore, we find that when the exterior conductivity is lower than the internal conductivity, the pulse can induce selective electroporation at the contact area between two vesicles regardless of the vesicle shape. Both of these findings have important practical applications in guiding electrofusion experiments. PMID:27391692
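
    For orientation, the transmembrane voltage induced on an undeformed spherical vesicle has the classical Schwan closed form; the paper itself solves the full problem numerically for deformed, contacting vesicles, so the sketch below is only a baseline sanity check with illustrative numbers, not the study's model.

```python
import math

def schwan_tmv(radius_m, e_field_v_per_m, theta_rad, shape_factor=1.5):
    """Steady-state induced transmembrane voltage on a spherical vesicle
    (classical Schwan approximation): V_m = f * R * E * cos(theta).
    Deformed, contacting vesicles require the full numerical treatment
    used in the paper; this closed form is only a reference point."""
    return shape_factor * radius_m * e_field_v_per_m * math.cos(theta_rad)

# Example: 10 um radius giant vesicle in a 0.5 kV/cm (5e4 V/m) pulse
R, E = 10e-6, 5e4
for deg in (0, 45, 90):
    v = schwan_tmv(R, E, math.radians(deg))
    print(f"theta={deg:3d} deg  V_m={v * 1e3:6.1f} mV")
```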

  6. Free flow electrophoresis in space shuttle program (biotex)

    NASA Astrophysics Data System (ADS)

    Hannig, Kurt; Bauer, Johann

    In the space shuttle program, free flow electrophoresis will be applied to the separation of proteins, biopolymers and cells. Proteins are to be separated according to the "Feldsprung-Gradienten" procedure by Prof. H. Wagner, University of Saarbruecken; biopolymers are to be separated by the isotachophoresis technique by Prof. Schmitz, University of Muenster; and we intend to separate cells in order to increase the efficiency of recovery of hybrid cells after electrofusion performed under microgravity, in collaboration with Prof. U. Zimmermann, University of Wuerzburg. Two approaches are envisaged for reaching this goal: enrichment of cells before electrofusion may enhance the probability that the cells of interest are immortalized, and separation of cells after electrofusion may help to clone the hybrid cells of interest. Under microgravity, the combination of improved electrophoresis with higher electrofusion rates may provide new possibilities for the immortalization of cells. This may be a new way to obtain physiologically glycosylated cellular products.

  7. Cell separation and electrofusion in space

    NASA Technical Reports Server (NTRS)

    Morrison, D. R.; Hofmann, G. A.

    1990-01-01

    In microgravity, free-fluid electrophoretic methods for separating living cells and proteins are improved significantly by the absence of gravity-driven phenomena. Cell fusion, culture, and other bioprocessing steps are being investigated to understand the limits of earth-based processing. A multistep space bioprocess is described that includes electrophoretic separation of human target cells, single-cell manipulations using receptor-specific antibodies, electrofusion to produce immortal hybridomas, gentle suspension culture, and monoclonal antibody recovery using continuous-flow electrophoresis or recirculating isoelectric focusing. Improvements in several key steps already have been demonstrated by space experiments, and others will be studied on Space Station Freedom.

  8. Intelligent screening of electrofusion-polyethylene joints based on a thermal NDT method

    NASA Astrophysics Data System (ADS)

    Doaei, Marjan; Tavallali, M. Sadegh

    2018-05-01

    The combination of infrared thermal imaging and artificial intelligence methods has opened new avenues for pushing the boundaries of available testing methods. Hence, in the current study, a novel thermal non-destructive testing method for polyethylene electrofusion joints was combined with the k-means clustering algorithm as an intelligent screening tool. The experiments focused on ovality of pipes in the coupler, as well as misalignment of pipes and couplers in 25 mm diameter joints. The temperature response of each joint to an internal heat pulse was recorded by an IR thermal camera and further processed to identify faulty joints. The results showed a clustering accuracy of 92%, as well as more than 90% abnormality-detection capability.
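
    The screening step amounts to clustering per-joint temperature-response curves into sound and suspect groups. The sketch below shows one plausible way to do this with k-means on synthetic curves; the cooling-curve shapes, noise levels and the use of raw curves as feature vectors are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic temperature-response curves (one row per joint, one column per
# time sample); real data would come from the IR camera recordings.
t = np.linspace(0, 60, 120)                      # seconds after the heat pulse
sound = 5.0 * np.exp(-t / 20.0)                  # nominal cooling response
faulty = 5.0 * np.exp(-t / 12.0) + 0.8           # assumed faulty signature
X = np.vstack([sound + 0.1 * rng.standard_normal((40, t.size)),
               faulty + 0.1 * rng.standard_normal((10, t.size))])

# Two clusters: "sound" vs "suspect" joints.  In practice the features might
# be derived quantities (peak temperature, decay constant, ...) rather than
# raw curves.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```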

  9. Effects of recipient oocyte age and interval from fusion to activation on development of buffalo (Bubalus bubalis) nuclear transfer embryos derived from fetal fibroblasts.

    PubMed

    Lu, F; Jiang, J; Li, N; Zhang, S; Sun, H; Luo, C; Wei, Y; Shi, D

    2011-09-15

    The objective was to investigate the effect of recipient oocyte age and the interval from fusion to activation on developmental competence of buffalo nuclear transfer (NT) embryos. Buffalo oocytes matured in vitro for 22 h were enucleated by micromanipulation under the spindle view system, and a fetal fibroblast (pretreated with 0.1 μg/mL aphidicolin for 24 h, followed by culture for 48 h in 0.5% fetal bovine serum) was introduced into the enucleated oocyte, followed by electrofusion. Both oocytes and NT embryos were activated by exposure to 5 μM ionomycin for 5 min, followed by culture in 2 mM 6-dimethyl-aminopurine for 3 h. When oocytes matured in vitro for 28, 29, 30, 31, or 32 h were activated, more oocytes matured in vitro for 30 h developed into blastocysts in comparison with oocytes matured in vitro for 32 h (31.3 vs 19.9%, P < 0.05). When electrofusion was induced 27 h after the onset of oocyte maturation, the cleavage rate (78.0%) was higher than that of electrofusion induced at 28 h (67.2%, P < 0.05), and the blastocyst yield (18.1%) was higher (P < 0.05) than that of electrofusion induced at 25 or 26 h (7.4 and 8.5%, respectively). A higher proportion of NT embryos activated at 3 h after electrofusion developed to the blastocyst stage (18.6%) in comparison with NT embryos activated at 1 h (6.0%), 2 h (8.3%), or 4 h (10.6%) after fusion (P < 0.05). No recipient was pregnant 60 d after transfer of blastocysts developed from NT embryos activated at 1 h (0/8), 2 h (0/10), or 4 h (0/9) after fusion. However, 3 of 16 recipients were pregnant following transfer of blastocysts developed from the NT embryos activated at 3 h after fusion, and two of these recipients maintained pregnancy to term. We concluded that the developmental potential of buffalo NT embryos was related to recipient oocyte age and the interval from fusion to activation. Copyright © 2011 Elsevier Inc. All rights reserved.
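
    The two reported optima fit together: electrofusion at 27 h after the onset of maturation plus a 3 h fusion-to-activation interval places activation at 30 h of oocyte age, the same age at which parthenogenetic activation gave the highest blastocyst rate. A minimal bookkeeping sketch of that schedule (illustrative only):

```python
# Illustrative timing bookkeeping for the reported optimum; hours are measured
# from the onset of in vitro maturation.
maturation_to_fusion_h = 27   # best cleavage/blastocyst yield: fusion at 27 h
fusion_to_activation_h = 3    # best blastocyst rate: activation 3 h post-fusion

activation_age_h = maturation_to_fusion_h + fusion_to_activation_h
assert activation_age_h == 30  # matches the 30 h optimum for oocyte activation
print(f"activation at {activation_age_h} h of oocyte age")
```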

  10. Time-Correlated Single-Photon Counting Fluorescence Imaging of Lipid Domains In Raft-Mimicking Giant Unilamellar Vesicles

    NASA Astrophysics Data System (ADS)

    Clarke, James; Cheng, Kwan; Shindell, Orrin; Wang, Exing

    We have designed and constructed a high-throughput electrofusion chamber and an incubator to fabricate Giant Unilamellar Vesicles (GUVs) consisting of high-melting lipids, low-melting lipids, cholesterol, and both ordered- and disordered-phase-sensitive fluorescent probes (DiIC12, dehydroergosterol and BODIPY-Cholesterol). GUVs were formed in a three-stage pulse-sequence electrofusion process with voltages ranging from 50 mVpp to 2.2 Vpp and frequencies from 5 Hz to 10 Hz. Steady-state and time-correlated single-photon counting (TCSPC) fluorescence lifetime (FLIM) based confocal and/or multi-photon microscopic techniques were used to characterize phase-separated lipid domains in GUVs. Confocal imaging measures the probe concentration and the chemical environment of the system, while TCSPC techniques determine the chemical environment through the perturbation of the fluorescence lifetimes of the probes in the system. These techniques will be applied to investigate protein-lipid interactions involved in domain formation; specifically, the mechanisms governing lipid domain formation in these systems, which mimic lipid rafts in cells, will be explored. Supported by a Murchison Fellowship at Trinity University.

  11. Mouse cloning and somatic cell reprogramming using electrofused blastomeres.

    PubMed

    Riaz, Amjad; Zhao, Xiaoyang; Dai, Xiangpeng; Li, Wei; Liu, Lei; Wan, Haifeng; Yu, Yang; Wang, Liu; Zhou, Qi

    2011-05-01

    Mouse cloning from fertilized eggs can assist development of approaches for the production of "genetically tailored" human embryonic stem (ES) cell lines that are not constrained by the limitations of oocyte availability. However, to date only zygotes have been successfully used as recipients of nuclei from terminally differentiated somatic cell donors leading to ES cell lines. In fertility clinics, embryos of advanced embryonic stages are usually stored for future use, but their ability to support the derivation of ES cell lines via somatic nuclear transfer has not yet been proved. Here, we report that two-cell stage electrofused mouse embryos, arrested in mitosis, can support developmental reprogramming of nuclei from donor cells ranging from blastomeres to somatic cells. Live, full-term cloned pups from embryonic donors, as well as pluripotent ES cell lines from embryonic or somatic donors, were successfully generated from these reconstructed embryos. Advanced stage pre-implantation embryos were unable to develop normally to term after electrofusion and transfer of a somatic cell nucleus, indicating that discarded pre-implantation human embryos could be an important resource for research that minimizes the ethical concerns for human therapeutic cloning. Our approach provides an attractive and practical alternative to therapeutic cloning using donated oocytes for the generation of patient-specific human ES cell lines.

  12. Comprehensive data model to characterize long term integrity and process parameter interactions governing the butt fusion process.

    DOT National Transportation Integrated Search

    2012-12-01

    The overall integrity of the plastic piping system is predicated on the long-term strength of its weakest link, which often occurs at fitting and joint interfaces, e.g., electrofusion, mechanical, heat fusion, etc. In order to maximize the overall ...

  13. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks.

    PubMed

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-07-19

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain the optimal route to the base station. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
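
    The abstract names a maximum-minimum criterion applied to each node's feasible routing set; below is a minimal sketch of one plausible reading of that criterion (choose the route whose worst-off node has the most residual energy). The route representation, energy values, and tie handling are assumptions, not the DHCO implementation.

```python
def select_route_maximin(feasible_routes, residual_energy):
    """Pick the route whose bottleneck (minimum residual energy along the
    route) is largest -- one plausible reading of the maximum-minimum
    criterion.  Routes are lists of node ids ending at the base station."""
    def bottleneck(route):
        return min(residual_energy[n] for n in route)
    return max(feasible_routes, key=bottleneck)

# Toy example: three candidate routes from node 7 to the base station 0.
energy = {0: float("inf"), 1: 0.9, 2: 0.2, 3: 0.6, 4: 0.8, 7: 0.7}
routes = [[7, 1, 0], [7, 2, 0], [7, 3, 4, 0]]
print(select_route_maximin(routes, energy))   # -> [7, 1, 0]
```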

  14. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and improve network survival time, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and chickens that fall into a local optimum are relocated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by balancing use of the network nodes, improving the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms the LEACH protocol in energy consumption and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.
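
    The Levy-flight relocation step described for individuals stuck in a local optimum is commonly implemented with Mantegna's algorithm; the sketch below is a generic version of that step, with beta, the step scale, and the bias toward the global best chosen for illustration rather than taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step using Mantegna's algorithm (generic version;
    parameter choices are illustrative, not the paper's)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def escape_local_optimum(position, best_position, step_scale=0.01, rng=None):
    """Relocate a stagnating individual by a Levy flight around its current
    position, scaled by its distance from the global best -- one common recipe."""
    step = step_scale * levy_step(position.size, rng=rng) * (position - best_position)
    return position + step

pos = np.array([0.4, 0.6])
best = np.array([0.1, 0.9])
print(escape_local_optimum(pos, best, rng=np.random.default_rng(1)))
```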

  15. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks

    PubMed Central

    Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-01-01

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum–minimum criterion to obtain the optimal route to the base station. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms. PMID:28753962

  16. Preparation of triple-negative breast cancer vaccine through electrofusion with day-3 dendritic cells.

    PubMed

    Zhang, Peng; Yi, Shuhong; Li, Xi; Liu, Ruilei; Jiang, Hua; Huang, Zenan; Liu, Yu; Wu, Juekun; Huang, Yong

    2014-01-01

    Dendritic cells (DCs) are professional antigen-presenting cells (APCs) of the human immune system. DC-based tumor vaccines have met with some success in specific malignancies, including breast cancer. In this study, we electrofused the MDA-MB-231 breast cancer cell line with day-3 DCs derived from peripheral blood monocytes, and explored the biological characteristics of the fusion vaccine and its anti-tumor effects in vitro. Day-3 mature DCs were generated from day-2 immature DCs by adding a cocktail composed of TNF-α, IL-1β, IL-6 and PGE2. Day-3 mature DCs were characterized and electrofused with breast cancer cells to generate the fusion vaccine. The phenotype of the fusion cells was identified by fluorescence microscopy and flow cytometry. The fusion vaccine was evaluated for T cell proliferation, secretion of IL-12 and IFN-γ, and induction of tumor-specific CTL responses. Despite differences in morphology, day-3 and day-7 DCs expressed similar surface markers. The secretion of IL-12 and IFN-γ in the fusion vaccine group was much higher than that in the control group. Compared with the control group, the DC-tumor fusion vaccine could better stimulate the proliferation of allogeneic T lymphocytes and kill more breast cancer cells (MDA-MB-231) in vitro. Day-3 DCs had the same function as day-7 DCs, but with a shorter culture period. Our findings suggest that day-3 DCs fused with whole apoptotic breast cancer cells can elicit effective, specific antitumor T cell responses in vitro and may be developed into a prospective candidate for adoptive immunotherapy.

  17. What is the optimal way to prepare a Bell state using measurement and feedback?

    NASA Astrophysics Data System (ADS)

    Martin, Leigh; Sayrafi, Mahrud; Whaley, K. Birgitta

    2017-12-01

    Recent work has shown that the use of quantum feedback can significantly enhance both the speed and success rate of measurement-based remote entanglement generation, but it is generally unknown what feedback protocols are optimal for these tasks. Here we consider two common measurements that are capable of projecting into pairwise entangled states, namely half- and full-parity measurements of two qubits, and determine in each case a globally optimal protocol for generation of entanglement. For the half-parity measurement, we rederive a previously described protocol using more general methods and prove that it is globally optimal for several figures of merit, including maximal concurrence or fidelity and minimal time to reach a specified concurrence or fidelity. For the full-parity measurement, we derive a protocol for rapid entanglement generation related to that of (Hill, Ralph, Phys. Rev. A 77, 014305), and then map the dynamics of the concurrence of the state to the Bloch vector length of an effective qubit. This mapping allows us to prove several optimality results for feedback protocols with full-parity measurements. We further show that our full-parity protocol transfers entanglement optimally from one qubit to the other amongst all measurement-based schemes. The methods developed here will be useful for deriving feedback protocols and determining their optimality properties in many other quantum systems subject to measurement and unitary operations.

  18. Improving the efficiency of single and multiple teleportation protocols based on the direct use of partially entangled states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br

    We push the limits of the direct use of partially pure entangled states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real-world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. -- Highlights: • Optimal direct teleportation protocols using partially entangled states directly. • We put in a single formalism all strategies of direct teleportation. • We extend these techniques to multipartite partially entangled states. • We give upper bounds for the optimal efficiency of these protocols.

  19. Automatic CT simulation optimization for radiation therapy: A general strategy.

    PubMed

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
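
    The reported optima can be read as a lookup from lateral patient size to the minimum-dose protocol that reaches the image-quality index of 4.4. The table values in the sketch below are taken from the abstract; the nearest-size selection logic is only illustrative and much simpler than the clinical workflow described.

```python
# (lateral size cm) -> (tube potential kVp, minimum CTDIvol mGy) reaching the
# image-quality index of 4.4, as reported in the abstract.
OPTIMAL_PROTOCOL = {
    38: (120, 9.8),
    43: (140, 32.2),
    48: (140, 100.9),
    53: (140, 241.4),
    58: (140, 274.1),
}

def pick_protocol(lateral_size_cm):
    """Choose the tabulated protocol for the nearest listed patient size
    (simple nearest-neighbour lookup; the clinical workflow in the paper is
    more involved)."""
    size = min(OPTIMAL_PROTOCOL, key=lambda s: abs(s - lateral_size_cm))
    kvp, ctdi = OPTIMAL_PROTOCOL[size]
    return size, kvp, ctdi

print(pick_protocol(45))   # -> (43, 140, 32.2)
```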

  20. Enhanced Cultivation Of Stimulated Murine B Cells

    NASA Technical Reports Server (NTRS)

    Sammons, David W.

    1994-01-01

    Method of in vitro cultivation of large numbers of stimulated murine B lymphocytes. Cells are electrofused with other cells to produce hybridomas and monoclonal antibodies. The method offers several advantages: polyclonally stimulated B-cell blasts can be cultivated for as long as 14 days, hybridomas can be created throughout the culture period, the yield of hybridomas increases during cultivation, and it is possible to expand polyclonally in vitro the number of B cells specific for antigenic determinants first recognized in vivo.

  1. Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.

    PubMed

    Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier

    2015-05-01

    To optimize a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and the ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, of the diffusion gradient scheme (simultaneous or sequential), and of weighted averaging across b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated in 12 patients with chronic liver diseases. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s.mm(-2)) were found necessary for suitable quantification of the IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas the perfusion fraction and ADC variations were not found to be significant. The results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme, and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results showed that parameters such as DSlow and DFast based on the optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
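
    The quantity being fitted is the standard IVIM biexponential signal model. A minimal sketch with the four optimized b-values from the abstract follows; the parameter values (S0, f, DFast, DSlow) are typical liver figures used for illustration, not the study's estimates.

```python
import numpy as np

def ivim_signal(b, s0, f, d_fast, d_slow):
    """Standard IVIM biexponential model:
    S(b) = S0 * (f * exp(-b * Dfast) + (1 - f) * exp(-b * Dslow))."""
    b = np.asarray(b, dtype=float)
    return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

# The four optimized b-values from the abstract (s/mm^2); parameter values
# below are typical liver figures for illustration only.
b_values = [0, 10, 80, 800]
signal = ivim_signal(b_values, s0=1.0, f=0.25, d_fast=50e-3, d_slow=1.1e-3)
print(np.round(signal, 3))
```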

  2. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing in these networks is increasingly difficult because of their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It details the optimal-route calculation used by the protocol and the transmission mechanism for communication packets. In the optimal-route calculation by the cuckoo search algorithm, adding a QoS constraint ensures that the selected route conforms to the specified bandwidth and delay requirements, and a balance is obtained among computational cost, bandwidth and delay. The NS2 simulation software was used to test the protocol's performance in three scenarios and to validate the feasibility and validity of the CSAODV protocol. The results show that the CSAODV routing protocol adapts to changes in network topology better than AODV, effectively improves the packet delivery fraction, reduces the transmission delay of the network, reduces the extra burden that control information places on the network, and improves the routing efficiency of the network.
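
    The QoS constraint described, requiring candidate routes to meet bandwidth and delay bounds before costs are compared, can be illustrated with a simple feasibility filter; the sketch below is a generic check with made-up link metrics, not the CSAODV/cuckoo-search implementation.

```python
def qos_feasible(route_metrics, min_bandwidth, max_delay):
    """A route is feasible if its bottleneck bandwidth and end-to-end delay
    meet the QoS constraints (generic check; field names are illustrative)."""
    return (route_metrics["bandwidth"] >= min_bandwidth and
            route_metrics["delay"] <= max_delay)

candidates = [
    {"route": ["A", "B", "D"], "bandwidth": 2.0, "delay": 40},        # Mbit/s, ms
    {"route": ["A", "C", "D"], "bandwidth": 5.0, "delay": 70},
    {"route": ["A", "B", "C", "D"], "bandwidth": 4.0, "delay": 55},
]
feasible = [c for c in candidates if qos_feasible(c, min_bandwidth=3.0, max_delay=60)]
# Among feasible routes, prefer the smallest delay (one possible fitness choice).
best = min(feasible, key=lambda c: c["delay"])
print(best["route"])
```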

  3. Optimized protocol for quantitative multiple reaction monitoring-based proteomic analysis of formalin-fixed, paraffin embedded tissues

    PubMed Central

    Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.

    2016-01-01

    Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933

  4. An intelligent case-adjustment algorithm for the automated design of population-based quality auditing protocols.

    PubMed

    Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A

    2004-01-01

    We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.

  5. Interspecific somatic hybridization between lettuce (Lactuca sativa) and wild species L. virosa.

    PubMed

    Matsumoto, E

    1991-02-01

    Somatic hybrids between cultivated lettuce (Lactuca sativa) and the wild species L. virosa were produced by protoplast electrofusion. Hybrid selection was based on inactivation of L. sativa protoplasts with 20 mM iodoacetamide for 15 min, and on the inability of L. virosa protoplasts to divide under the culture conditions used. Protoplasts were cultured in agarose beads in a revised MS medium. In all, 71 calli were formed, and 21 of them differentiated shoots on LS medium containing 0.1 mg/l NAA and 0.2 mg/l BA. Most regenerated plants exhibited intermediate morphology. These plants were confirmed as hybrids by isoenzyme analysis. The majority of somatic hybrids had 2n=4x=36 chromosomes and showed more vigorous growth than either parent. Hybrids had normal flower morphology, but all were sterile.

  6. Optimizing Equivalence-Based Instruction: Effects of Training Protocols on Equivalence Class Formation

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Wright, Nicole A.; Fields, Lanny

    2015-01-01

    Two experiments evaluated the effects of the simple-to-complex and simultaneous training protocols on the formation of academically relevant equivalence classes. The simple-to-complex protocol intersperses derived relations probes with training baseline relations. The simultaneous protocol conducts all training trials and test trials in separate…

  7. A neural networks-based hybrid routing protocol for wireless mesh networks.

    PubMed

    Kojić, Nenad; Reljin, Irini; Reljin, Branimir

    2012-01-01

    The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding and creating the new routing metric. We suggest the use of artificial logic, i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance.
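
    The new routing metric combines delay and blocking probability through multicriteria optimization; a minimal weighted-sum sketch of such a metric follows. The weights, normalization, and direct scan over candidate links are illustrative assumptions; the actual protocol drives a Hopfield neural network rather than this exhaustive comparison.

```python
def multicriteria_cost(delay_ms, blocking_prob, w_delay=0.5, w_block=0.5,
                       delay_scale=100.0):
    """Weighted-sum routing metric combining normalized delay and blocking
    probability.  Weights and normalization are illustrative assumptions."""
    return w_delay * (delay_ms / delay_scale) + w_block * blocking_prob

# Toy candidate links: (delay in ms, blocking probability).
links = {"via_ap1": (35.0, 0.02), "via_ap2": (60.0, 0.005), "via_ap3": (20.0, 0.10)}
best = min(links, key=lambda k: multicriteria_cost(*links[k]))
print(best, round(multicriteria_cost(*links[best]), 3))
```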

  8. A Neural Networks-Based Hybrid Routing Protocol for Wireless Mesh Networks

    PubMed Central

    Kojić, Nenad; Reljin, Irini; Reljin, Branimir

    2012-01-01

    The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding and creating the new routing metric. We suggest the use of artificial logic—i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance. PMID:22969360

  9. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states.
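
    The core idea, tabulating the best achievable scheme for each segment length by combining optimal sub-segment solutions, can be sketched with a toy dynamic program. The cost and fidelity model below (single-link success probability, swap penalty, and the figure of merit used to rank schemes) is a placeholder, not the physical repeater model optimized in the paper.

```python
from functools import lru_cache

# Placeholder model: an elementary link succeeds with probability P0 and yields
# fidelity F0; joining two segments applies a swap that multiplies fidelity by
# ETA and adds T_SWAP to the slower segment's time.
P0, F0, ETA, T_LINK, T_SWAP = 0.1, 0.95, 0.98, 1.0, 0.2

@lru_cache(maxsize=None)
def best(n_links):
    """Return (expected_time, fidelity) of the best scheme over n_links,
    trying every split point -- the dynamic-programming step."""
    if n_links == 1:
        return (T_LINK / P0, F0)          # expected time of one heralded link
    candidates = []
    for k in range(1, n_links):
        t_left, f_left = best(k)
        t_right, f_right = best(n_links - k)
        candidates.append((max(t_left, t_right) + T_SWAP, ETA * f_left * f_right))
    # Prefer the split with the highest final fidelity, then the shortest time.
    return max(candidates, key=lambda c: (c[1], -c[0]))

print(best(8))
```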

  10. Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks

    PubMed Central

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly, so it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols, as well as the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO-message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584

  11. Self-configuration and self-optimization process in heterogeneous wireless networks.

    PubMed

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly, so it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols, as well as the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO-message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of OLSR and AODV protocols in comparison to the baseline protocols analyzed.

  12. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in terms of the concatenated protocol parameters. New source-to-destination routes are sought that optimize cross-layer interdependencies to achieve the "best available" performance in the WSN. The protocol design, modified from a known reactive protocol, adapts the achievable performance to the transient network conditions and resource levels. Control of network behavior is realized through the conditional rates of the MVPPs. Optimal cross-layer protocol parameters are determined by stochastic dynamic programming conditions derived from models of transient packetized sensor data flows. Moreover, the defining conditions for WSN configurations, grouping sensor nodes into clusters and establishing data aggregation at processing nodes within those clusters, lead to computationally tractable solutions to the stochastic differential equations that describe network dynamics. Closed-form solution characteristics provide an alternative to the "directed diffusion" methods for resource-efficient WSN protocols published previously by other researchers. Performance verification of the resulting cross-layer designs is found by embedding the optimality conditions for the protocols in actual WSN scenarios replicated in a wireless network simulation environment. Performance tradeoffs among protocol parameters remain for a sequel to the paper.

  13. Preimplantation diagnosis of repeated miscarriage due to chromosomal translocations using metaphase chromosomes of a blastomere biopsied from 4- to 6-cell-stage embryos.

    PubMed

    Tanaka, Atsushi; Nagayoshi, Motoi; Awata, Shoichiro; Mawatari, Yoshifumi; Tanaka, Izumi; Kusunoki, Hiroshi

    2004-01-01

    To evaluate the safety and accuracy of karyotyping the blastomere chromosomes at metaphase in the natural cell cycle for preimplantation diagnosis. A pilot study. A private infertility clinic and a university laboratory. Eleven patients undergoing IVF and preimplantation diagnosis. Intact human embryos at the 4- to 6-cell stage and human-mouse heterokaryons were cultured and checked hourly for disappearance of the nuclear envelope. After it disappeared, the metaphase chromosomes were analyzed by fluorescence in situ hybridization. Percentage of analyzable metaphase plates and safety and accuracy of the method. The success rate of electrofusion to form human-mouse heterokaryons was 87.1% (27/31), and analyzable chromosomes were obtained from 77.4% (24/31) of the heterokaryons. On the other hand, disappearance of the nuclear envelope occurred in 89.5% (17/19) of the human embryos and it began earlier than that in the heterokaryons. Analyzable chromosomes were obtained and their translocation sites were identified in all blastomeres biopsied from the 17 embryos. After the biopsy, 67.0% of the embryos could develop to the blastocyst stage. The natural cell cycle method reported herein requires frequent observation, but it is safe, with no artificial effects on the chromosomes and without loss of or damage to blastomeres, which occurred with the electrofusion method. Using the natural cell cycle method, we could perform preimplantation diagnosis with nearly 100% accuracy.

  14. Cluster Size Optimization in Sensor Networks with Decentralized Cluster-Based Protocols

    PubMed Central

    Amini, Navid; Vahdatpour, Alireza; Xu, Wenyao; Gerla, Mario; Sarrafzadeh, Majid

    2011-01-01

    Network lifetime and energy-efficiency are viewed as the dominating considerations in designing cluster-based communication protocols for wireless sensor networks. This paper analytically provides the optimal cluster size that minimizes the total energy expenditure in such networks, where all sensors communicate data through their elected cluster heads to the base station in a decentralized fashion. LEACH, LEACH-Coverage, and DBS comprise three cluster-based protocols investigated in this paper that do not require any centralized support from a certain node. The analytical outcomes are given in the form of closed-form expressions for various widely-used network configurations. Extensive simulations on different networks are used to confirm the expectations based on the analytical results. To obtain a thorough understanding of the results, cluster number variability problem is identified and inspected from the energy consumption point of view. PMID:22267882
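
    For context, the widely used LEACH-style closed form for the optimal number of cluster heads under the first-order radio model is sketched below; it is the textbook expression with the commonly quoted radio constants, not necessarily the exact closed forms derived in the paper.

```python
import math

def optimal_cluster_head_count(n_nodes, area_side_m, d_to_bs_m,
                               eps_fs=10e-12, eps_mp=0.0013e-12):
    """Textbook LEACH-style estimate of the optimal number of cluster heads
    under the first-order radio model:
        k_opt = sqrt(N / (2*pi)) * sqrt(eps_fs / eps_mp) * M / d_BS^2
    for N nodes in an M x M field with the base station at distance d_BS.
    The radio constants are the commonly used defaults (J/bit/m^2 and
    J/bit/m^4); the paper's own closed forms may differ."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp)
            * area_side_m / d_to_bs_m ** 2)

k = optimal_cluster_head_count(n_nodes=100, area_side_m=100, d_to_bs_m=100)
print(f"k_opt = {k:.1f}  ->  average cluster size = {100 / max(round(k), 1):.0f} nodes")
```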

  15. Modeling and Simulation of a Novel Relay Node Based Secure Routing Protocol Using Multiple Mobile Sink for Wireless Sensor Networks.

    PubMed

    Perumal, Madhumathy; Dhandapani, Sivakumar

    2015-01-01

    Data gathering and optimal path selection for wireless sensor networks (WSN) using existing protocols result in collision, and increased collision further increases the possibility of packet drop. Thus there is a need to eliminate collision during data aggregation and to increase efficiency while maintaining security. This paper is an effort to develop a reliable, energy-efficient, and secure WSN routing protocol with minimum delay, named the relay node based secure routing protocol for multiple mobile sinks (RSRPMS). This protocol finds the rendezvous point for optimal transmission of data using a "splitting tree" technique in a tree-shaped network topology, and then uses the "Biased Random Walk" model to determine all subsequent positions of a sink. In case of an event, the sink gathers the data from all sources when they are within the sensing range of the rendezvous point; otherwise a relay node is selected from its neighbors to transfer packets from the rendezvous point to the sink. Symmetric key cryptography is used for secure transmission. The proposed relay node based secure routing protocol for multiple mobile sinks (RSRPMS) is simulated, and the results are compared with the Intelligent Agent-Based Routing (IAR) protocol to show that network lifetime increases compared with other routing protocols.
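
    Sink mobility is described by a "Biased Random Walk" model; the sketch below is a generic biased random walk generating successive sink positions, with step length, bias direction, and bias probability chosen purely for illustration rather than taken from RSRPMS.

```python
import random, math

def biased_random_walk(start, steps, step_len=5.0, bias_dir=(1.0, 0.0), bias=0.6,
                       rng=None):
    """Generate successive sink positions with a simple biased random walk:
    with probability `bias` the sink steps along `bias_dir`, otherwise in a
    uniformly random direction.  Generic illustration of the model class named
    in the abstract, not the RSRPMS mobility model itself."""
    rng = rng or random.Random(0)
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() < bias:
            dx, dy = bias_dir
        else:
            ang = rng.uniform(0, 2 * math.pi)
            dx, dy = math.cos(ang), math.sin(ang)
        x, y = x + step_len * dx, y + step_len * dy
        path.append((round(x, 1), round(y, 1)))
    return path

print(biased_random_walk(start=(0.0, 0.0), steps=5))
```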

  16. A Power-Optimized Cooperative MAC Protocol for Lifetime Extension in Wireless Sensor Networks.

    PubMed

    Liu, Kai; Wu, Shan; Huang, Bo; Liu, Feng; Xu, Zhen

    2016-10-01

    In wireless sensor networks, in order to satisfy the requirement of long working time of energy-limited nodes, we need to design an energy-efficient and lifetime-extending medium access control (MAC) protocol. In this paper, a node cooperation mechanism, in which one or more nodes with higher channel gain and sufficient residual energy help a sender relay its data packets to its recipient, is employed to achieve this objective. We first propose a transmission power optimization algorithm that prolongs network lifetime by optimizing the transmission powers of the sender and its cooperative nodes to maximize their minimum residual energy after their data packet transmissions. Based on it, we propose a corresponding power-optimized cooperative MAC protocol. A cooperative node contention mechanism is designed to ensure that the sender can effectively select a group of cooperative nodes with the lowest energy consumption and the best channel quality for cooperative transmissions, thus further improving energy efficiency. Simulation results show that, compared to a typical MAC protocol with direct transmissions and an energy-efficient cooperative MAC protocol, the proposed cooperative MAC protocol can efficiently improve energy efficiency and extend network lifetime.
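
    The stated objective, maximizing the minimum residual energy of the sender and its cooperative relay after a packet transmission, can be illustrated with a tiny grid search over power splits; the energy model, total power budget, and omission of the SNR/outage constraint are simplifying assumptions, not the protocol's actual optimization.

```python
import numpy as np

def min_residual_energy(p_sender, p_relay, e_sender, e_relay,
                        packet_bits=1024, t_bit=1e-5):
    """Residual energies (J) after transmitting one packet at powers
    p_sender/p_relay (W) for packet_bits * t_bit seconds; return the bottleneck."""
    t = packet_bits * t_bit
    return min(e_sender - p_sender * t, e_relay - p_relay * t)

def optimize_powers(e_sender, e_relay, p_total=0.2, n_grid=41):
    """Grid-search the split of a fixed total power budget between sender and
    relay that maximizes the minimum residual energy (placeholder model; the
    actual protocol also requires the split to meet a link-quality target)."""
    best = None
    for p_s in np.linspace(0.01, p_total - 0.01, n_grid):
        p_r = p_total - p_s
        val = min_residual_energy(p_s, p_r, e_sender, e_relay)
        if best is None or val > best[0]:
            best = (val, p_s, p_r)
    return best

print(optimize_powers(e_sender=0.5, e_relay=0.8))
```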

  17. A Power-Optimized Cooperative MAC Protocol for Lifetime Extension in Wireless Sensor Networks

    PubMed Central

    Liu, Kai; Wu, Shan; Huang, Bo; Liu, Feng; Xu, Zhen

    2016-01-01

    In wireless sensor networks, in order to satisfy the requirement of long working time of energy-limited nodes, we need to design an energy-efficient and lifetime-extending medium access control (MAC) protocol. In this paper, a node cooperation mechanism, in which one or more nodes with higher channel gain and sufficient residual energy help a sender relay its data packets to its recipient, is employed to achieve this objective. We first propose a transmission power optimization algorithm that prolongs network lifetime by optimizing the transmission powers of the sender and its cooperative nodes to maximize their minimum residual energy after their data packet transmissions. Based on it, we propose a corresponding power-optimized cooperative MAC protocol. A cooperative node contention mechanism is designed to ensure that the sender can effectively select a group of cooperative nodes with the lowest energy consumption and the best channel quality for cooperative transmissions, thus further improving energy efficiency. Simulation results show that, compared to a typical MAC protocol with direct transmissions and an energy-efficient cooperative MAC protocol, the proposed cooperative MAC protocol can efficiently improve energy efficiency and extend network lifetime. PMID:27706079

  18. E-novo: an automated workflow for efficient structure-based lead optimization.

    PubMed

    Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit

    2009-07-01

    An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities was used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set using a hand-built binding model. Least squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.

  19. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
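
    The idea of ranking measurement settings by the condition number of the sensing map can be prototyped directly: each (displacement, excitation-number) pair contributes one row of a linear map from the density matrix to outcome probabilities, and the candidate set with the smallest condition number promises the most robust reconstruction. The truncation dimensions, the candidate displacement grids and the use of a matrix exponential below are implementation assumptions for this sketch, not the construction used in the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    N_TRUNC = 14   # Fock truncation used to build operators
    D_REC = 5      # reconstruct the state on the lowest D_REC Fock levels
    N_COUNT = 5    # excitation numbers 0..N_COUNT-1 that are resolved

    a = np.diag(np.sqrt(np.arange(1, N_TRUNC)), k=1)   # annihilation operator

    def displacement(alpha):
        return expm(alpha * a.conj().T - np.conj(alpha) * a)

    def sensing_matrix(alphas):
        """Rows map vec(rho) (restricted to the D_REC x D_REC block) to the
        probabilities p(n | alpha) of counting n excitations after displacing
        the state by alpha."""
        rows = []
        for alpha in alphas:
            D = displacement(alpha)
            for n in range(N_COUNT):
                bra = D[n, :]                          # <n| D(alpha)
                E = np.outer(bra.conj(), bra)          # POVM element D^†|n><n|D
                rows.append(E[:D_REC, :D_REC].conj().ravel())
        return np.array(rows)

    def condition_number(alphas):
        s = np.linalg.svd(sensing_matrix(alphas), compute_uv=False)
        return np.inf if s[-1] < 1e-12 else s[0] / s[-1]

    # Compare two candidate sets of eight displacements each.
    ring = [1.0 * np.exp(2j * np.pi * k / 8) for k in range(8)]
    line = [0.25 * k for k in range(8)]
    print("ring of displacements, cond =", condition_number(ring))
    print("displacements on a line, cond =", condition_number(line))
    ```

    In this toy comparison the purely real displacements are not informationally complete (the condition number blows up), while the ring of displacements yields a finite, usable condition number, which is the kind of distinction the optimization in the paper exploits.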

  20. Establishment and optimization of NMR-based cell metabonomics study protocols for neonatal Sprague-Dawley rat cardiomyocytes.

    PubMed

    Zhang, Ming; Sun, Bo; Zhang, Qi; Gao, Rong; Liu, Qiao; Dong, Fangting; Fang, Haiqin; Peng, Shuangqing; Li, Famei; Yan, Xianzhong

    2017-01-15

    A quenching, harvesting, and extraction protocol was optimized for cardiomyocytes NMR metabonomics analysis in this study. Trypsin treatment and direct scraping cells in acetonitrile were compared for sample harvesting. The results showed trypsin treatment cause normalized concentration increasing of phosphocholine and metabolites leakage, since the trypsin-induced membrane broken and long term harvesting procedures. Then the intracellular metabolite extraction efficiency of methanol and acetonitrile were compared. As a result, washing twice with phosphate buffer, direct scraping cells and extracting with acetonitrile were chosen to prepare cardiomyocytes extracts samples for metabonomics studies. This optimized protocol is rapid, effective, and exhibits greater metabolite retention. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Quantitative Assessment of In-solution Digestion Efficiency Identifies Optimal Protocols for Unbiased Protein Analysis*

    PubMed Central

    León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.

    2013-01-01

    The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921

  2. Increased efficiency of mammalian somatic cell hybrid production under microgravity conditions during ballistic rocket flight

    NASA Technical Reports Server (NTRS)

    Schnettler, R.; Gessner, P.; Zimmermann, U.; Neil, G. A.; Urnovitz, H. B.

    1989-01-01

    The electrofusion of hybridoma cell lines under short-duration microgravity during a flight of the TEXUS 18 Black Brant ballistic sounding rocket at Kiruna, Sweden, is reported. The fusion partners, growth medium, cell fusion medium, cell fusion, cell viability in the fusion medium, and postfusion cell culture are described, and the rocket, cell fusion chamber, apparatus, and module are examined. The experimental timeline, the effects of fusion medium and incubation time on cell viability and hybrid yields, and the effect of microgravity on hybrid yields are considered.

  3. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smitherman, C; Chen, B; Samei, E

    2014-06-15

    Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
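
    The kind of lookup the GUI performs (finding the lowest dose at which the modeled Az for a given lesion task and patient size reaches a target) reduces to inverting a monotone Az-versus-dose curve. The sketch below does this by linear interpolation on a small, purely hypothetical table; the dose-Az values and the function min_dose_for_target are illustrative placeholders, not measurements from the Mercury phantom study.

    ```python
    import numpy as np

    # Hypothetical Az-vs-dose curves (CTDIvol, mGy) for one lesion task
    # (e.g. 5 mm / 50 HU) at three patient diameters.
    doses = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
    az_by_diameter = {
        25: np.array([0.78, 0.90, 0.95, 0.97, 0.98]),
        30: np.array([0.70, 0.81, 0.89, 0.95, 0.97]),
        35: np.array([0.62, 0.72, 0.82, 0.90, 0.95]),
    }

    def min_dose_for_target(diameter_cm: int, az_target: float) -> float:
        """Lowest dose whose interpolated Az reaches the target (NaN if never)."""
        az = az_by_diameter[diameter_cm]
        if az_target > az[-1]:
            return float("nan")
        # Az is monotone in dose here, so invert by interpolating dose against Az.
        return float(np.interp(az_target, az, doses))

    for d in (25, 30, 35):
        print(f"{d} cm patient: Az=0.90 needs about {min_dose_for_target(d, 0.90):.1f} mGy")
    ```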

  4. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    PubMed

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
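
    For readers unfamiliar with distributed consensus protocols, the baseline object being designed here is the linear protocol u_i = c * sum_j a_ij (x_j - x_i) run over a directed graph. The short simulation below shows such a protocol driving single-integrator agents to consensus; the specific graph, gain and single-integrator dynamics are simplifying assumptions for illustration and do not reproduce the paper's inverse-optimal design for general linear systems.

    ```python
    import numpy as np

    # Directed graph on 4 agents: A[i, j] = 1 means agent i listens to agent j.
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    c = 0.8                                  # coupling (protocol) gain

    x = np.array([4.0, -2.0, 1.0, 7.0])      # initial agent states
    dt, steps = 0.05, 400
    for _ in range(steps):
        u = -c * (L @ x)                     # u_i = c * sum_j a_ij (x_j - x_i)
        x = x + dt * u                       # single-integrator dynamics

    print("final states:", np.round(x, 3))   # all entries close to a common value
    ```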

  5. Optimization of Native and Formaldehyde iPOND Techniques for Use in Suspension Cells.

    PubMed

    Wiest, Nathaniel E; Tomkinson, Alan E

    2017-01-01

    The isolation of proteins on nascent DNA (iPOND) technique developed by the Cortez laboratory allows a previously unparalleled ability to examine proteins associated with replicating and newly synthesized DNA in mammalian cells. Both the original, formaldehyde-based iPOND technique and a more recent derivative, accelerated native iPOND (aniPOND), have mostly been performed in adherent cell lines. Here, we describe modifications to both protocols for use with suspension cell lines. These include cell culture, pulse, and chase conditions that optimize sample recovery in both protocols using suspension cells and several key improvements to the published aniPOND technique that reduce sample loss, increase signal to noise, and maximize sample recovery. Additionally, we directly and quantitatively compare the iPOND and aniPOND protocols to test the strengths and limitations of both. Finally, we present a detailed protocol to perform the optimized aniPOND protocol in suspension cell lines. © 2017 Elsevier Inc. All rights reserved.

  6. An Overview and Analysis of Mobile Internet Protocols in Cellular Environments.

    ERIC Educational Resources Information Center

    Chao, Han-Chieh

    2001-01-01

    Notes that cellular is the inevitable future architecture for the personal communication service system. Discusses the current cellular support based on Mobile Internet Protocol version 6 (Ipv6) and points out the shortfalls of using Mobile IP. Highlights protocols especially for mobile management schemes which can optimize a high-speed mobile…

  7. Pregnancy Research on Osteopathic Manipulation Optimizing Treatment Effects: The PROMOTE Study Protocol.

    PubMed

    Hensel, Kendi L; Carnes, Michael S; Stoll, Scott T

    2016-11-01

    The structural and physiologic changes in a woman's body during pregnancy can predispose pregnant women to low back pain and its associated disability, as well as to complications of pregnancy, labor, and delivery. Anecdotal and empirical evidence has indicated that osteopathic manipulative treatment (OMT) may be efficacious in improving pain and functionality in women who are pregnant. Based on that premise, the Pregnancy Research on Osteopathic Manipulation Optimizing Treatment Effects (PROMOTE) study was designed as a prospective, randomized, placebo-controlled, and blinded clinical trial to evaluate the efficacy of an OMT protocol for pain during third-trimester pregnancy. The OMT protocol developed for the PROMOTE study was based on physiologic theory and the concept of the interrelationship of structure and function. The 12 well-defined, standardized OMT techniques used in the protocol are commonly taught at osteopathic medical schools in the United States. These techniques can be easily replicated as a 20-minute protocol applied in conjunction with usual prenatal care, thus making it feasible to implement into clinical practice. This article presents an overview of the study design and treatment protocols used in the PROMOTE study.

  8. Bulk Data Dissemination in Low Power Sensor Networks: Present and Future Directions

    PubMed Central

    Xu, Zhirong; Hu, Tianlei; Song, Qianshu

    2017-01-01

    Wireless sensor network-based (WSN-based) applications need an efficient and reliable data dissemination service to facilitate maintenance, management and data distribution tasks. As WSNs are becoming pervasive and data intensive, bulk data dissemination protocols have been extensively studied recently. This paper provides a comprehensive survey of the state-of-the-art bulk data dissemination protocols. The many papers available in the literature propose various techniques to optimize dissemination protocols. Unlike existing surveys, which separately explore the building blocks of dissemination, our work categorizes the literature according to the optimization purposes: Reliability, Scalability and Transmission/Energy efficiency. By summarizing and reviewing the key insights and techniques, we further discuss the future directions for each category. Our survey unveils three key findings for future directions: (1) The recent advances in wireless communications (e.g., study on cross-technology interference, error estimating codes, constructive interference, capture effect) can be potentially exploited to support further optimization of the reliability and energy efficiency of dissemination protocols; (2) Dissemination in multi-channel, multi-task and opportunistic networks requires more effort to fully exploit the spatial-temporal network resources to enhance the data propagation; (3) Since many designs incur changes to MAC-layer protocols, the co-existence of dissemination with other network protocols is another problem left to be addressed. PMID:28098830

  9. Intravenous Ketamine Infusions for Neuropathic Pain Management: A Promising Therapy in Need of Optimization.

    PubMed

    Maher, Dermot P; Chen, Lucy; Mao, Jianren

    2017-02-01

    Intravenous ketamine infusions have been used extensively to treat often-intractable neuropathic pain conditions. Because there are many widely divergent ketamine infusion protocols described in the literature, the variation in these protocols presents a challenge for direct comparison of one protocol with another and in discerning an optimal protocol. Careful examination of the published literature suggests that ketamine infusions can be useful to treat neuropathic pain and that certain characteristics of ketamine infusions may be associated with better clinical outcomes. Increased duration of relief from neuropathic pain is associated with (1) higher total infused doses of ketamine; (2) prolonged infusion durations, although the rate of infusion does not appear to be a factor; and (3) coadministration of adjunct medications such as midazolam and/or clonidine that mitigate some of the unpleasant psychomimetic side effects. However, there are few studies designed to optimize ketamine infusion protocols by defining what an effective infusion protocol entails with regard to a respective neuropathic pain condition. Therefore, despite common clinical practice, the current state of the literature leaves the use of ketamine infusions without meaningful guidance from high-quality comparative evidence. The objectives of this topical review are to (1) analyze the available clinical evidence related to ketamine infusion protocols and (2) call for clinical studies to identify optimal ketamine infusion protocols tailored for individual neuropathic pain conditions. The Oxford Center for Evidence-Based Medicine classification for levels of evidence was used to stratify the grades of clinical recommendation for each infusion variable studied.

  10. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
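
    A toy simulation helps make the review-strategy mechanics concrete: at the end of each billing cycle a consumer runs a statistical test on the observed daily prices, and a failed test triggers a punishment phase of fixed length during which everyone's price is inflated. The price model, test threshold, punishment rule and all constants below are invented for illustration and are far simpler than the protocol design analyzed in the paper.

    ```python
    import random

    NOMINAL_PRICE = 1.0      # expected mean daily price under compliant scheduling
    DEVIATION_BUMP = 0.15    # price increase caused by a deviating consumer
    NOISE = 0.05             # day-to-day price noise
    CYCLE_DAYS = 30
    PUNISH_CYCLES = 2        # length of the punishment phase
    THRESHOLD = NOMINAL_PRICE + 2 * NOISE / CYCLE_DAYS ** 0.5   # ~2-sigma test

    def run(cycles=12, p_deviate=0.3, seed=1):
        random.seed(seed)
        punish_left = 0
        for cycle in range(cycles):
            deviating = punish_left == 0 and random.random() < p_deviate
            prices = [NOMINAL_PRICE
                      + (DEVIATION_BUMP if deviating else 0.0)
                      + (0.5 if punish_left > 0 else 0.0)      # punishment price
                      + random.gauss(0.0, NOISE)
                      for _ in range(CYCLE_DAYS)]
            mean_price = sum(prices) / CYCLE_DAYS
            if punish_left > 0:
                phase = "punishment"
                punish_left -= 1
            elif mean_price > THRESHOLD:                        # review test fails
                phase = "review failed -> start punishment"
                punish_left = PUNISH_CYCLES
            else:
                phase = "cooperative"
            print(f"cycle {cycle:2d}: mean price {mean_price:.3f}  ({phase})")

    run()
    ```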

  11. Optimization of Saanen sperm genes amplification: evaluation of standardized protocols in genetically uncharacterized rural goats reared under a subtropical environment.

    PubMed

    Barbour, Elie K; Saade, Maya F; Sleiman, Fawwak T; Hamadeh, Shady K; Mouneimne, Youssef; Kassaifi, Zeina; Kayali, Ghazi; Harakeh, Steve; Jaber, Lina S; Shaib, Houssam A

    2012-10-01

    The purpose of this research is to optimize quantitatively the amplification of specific sperm genes in reference genomically characterized Saanen goat and to evaluate the standardized protocols applicability on sperms of uncharacterized genome of rural goats reared under subtropical environment for inclusion in future selection programs. The optimization of the protocols in Saanen sperms included three production genes (growth hormone (GH) exons 2, 3, and 4, αS1-casein (CSN1S1), and α-lactalbumin) and two health genes (MHC class II DRB and prion (PrP)). The optimization was based on varying the primers concentrations and the inclusion of a PCR cosolvent (Triton X). The impact of the studied variables on statistically significant increase in the yield of amplicons was noticed in four out of five (80%) optimized protocols, namely in those related to GH, CSN1S1, α-lactalbumin, and PrP genes (P < 0.05). There was no significant difference in the yield of amplicons related to MHC class II DRB gene, regardless of the variables used (P > 0.05). The applicability of the optimized protocols of Saanen sperm genes on amplification of uncharacterized rural goat sperms revealed a 100% success in tested individuals for amplification of GH, CSN1S1, α-lactalbumin, and MHC class II DRB genes and a 75% success for the PrP gene. The significant success in applicability of the Saanen quantitatively optimized protocols to other uncharacterized genome of rural goats allows for their inclusion in future selection, targeting the sustainability of this farming system in a subtropical environment and the improvement of the farmers livelihood.

  12. The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2002-01-01

    The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) System will rely on global satellite navigation, and ground-based and satellite based communications via Multi-Protocol Networks (e.g. combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP)) to bring about needed improvements in efficiency and safety of operations to meet increasing levels of air traffic. This paper will discuss the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and different operations functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode; and to flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on the local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.

  13. Towards a hybrid energy efficient multi-tree-based optimized routing protocol for wireless networks.

    PubMed

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-12-13

    This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost to distance progress in landmark coordinates, among the neighbors that reduce the landmark distance and do not increase the distance in tree coordinates. If no such node exists, the packet is forwarded to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost to tree-distance progress. We theoretically prove packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, with only up to 30% additional power compared to a centralized shortest weighted path algorithm.
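
    The HECTOR forwarding rule can be written down almost verbatim: prefer the neighbor that gives the best power-cost-to-progress ratio in landmark coordinates among neighbors that also do not move backwards in tree coordinates, and fall back to pure tree-distance progress otherwise. The neighbor record layout, function names and cost numbers below are assumptions for this sketch.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Neighbor:
        node_id: str
        tree_dist: float       # distance to destination in rooted-tree coordinates
        landmark_dist: float   # distance to destination in landmark coordinates
        power_cost: float      # energy cost of transmitting to this neighbour

    def next_hop(cur_tree: float, cur_landmark: float,
                 neighbors: list) -> Optional[Neighbor]:
        # Primary rule: reduce landmark distance without increasing tree distance,
        # minimizing power cost per unit of landmark progress.
        primary = [n for n in neighbors
                   if n.landmark_dist < cur_landmark and n.tree_dist <= cur_tree]
        if primary:
            return min(primary,
                       key=lambda n: n.power_cost / (cur_landmark - n.landmark_dist))
        # Fallback: reduce tree distance only, minimizing power cost per progress.
        fallback = [n for n in neighbors if n.tree_dist < cur_tree]
        if fallback:
            return min(fallback,
                       key=lambda n: n.power_cost / (cur_tree - n.tree_dist))
        return None   # no admissible neighbour (should not happen in a valid tree)

    nbrs = [Neighbor("a", 3.0, 5.0, 1.0),
            Neighbor("b", 4.0, 2.0, 0.8),    # big landmark progress, cheap link
            Neighbor("c", 2.0, 4.0, 1.5)]
    print(next_hop(cur_tree=4.0, cur_landmark=6.0, neighbors=nbrs).node_id)
    ```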

  14. Towards a Hybrid Energy Efficient Multi-Tree-Based Optimized Routing Protocol for Wireless Networks

    PubMed Central

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-01-01

    This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to the neighbor that optimizes the ratio of power cost to distance progress in landmark coordinates, among the neighbors that reduce the landmark distance and do not increase the distance in tree coordinates. If no such node exists, the packet is forwarded to the neighbor that reduces the tree-based distance only and optimizes the ratio of power cost to tree-distance progress. We theoretically prove packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, with only up to 30% additional power compared to a centralized shortest weighted path algorithm. PMID:23443398

  15. Characterization of the multiple resistance traits of somatic hybrids between Solanum cardiophyllum Lindl. and two commercial potato cultivars.

    PubMed

    Thieme, Ramona; Rakosy-Tican, Elena; Nachtigall, Marion; Schubert, Jörg; Hammann, Thilo; Antonova, Olga; Gavrilenko, Tatjana; Heimbach, Udo; Thieme, Thomas

    2010-10-01

    Interspecific somatic hybrids between commercial cultivars of potato Solanum tuberosum L. Agave and Delikat and the wild diploid species Solanum cardiophyllum Lindl. (cph) were produced by protoplast electrofusion. The hybrid nature of the regenerated plants was confirmed by flow cytometry, simple sequence repeat (SSR), amplified fragment length polymorphism (AFLP), microsatellite-anchored fragment length polymorphism (MFLP) markers and morphological analysis. Somatic hybrids were assessed for their resistance to Colorado potato beetle (CPB) using a laboratory bioassay, to Potato virus Y (PVY) by mechanical inoculation and field trials, and foliage blight in a greenhouse and by field trials. Twenty-four and 26 somatic hybrids of cph + cv. Agave or cph + cv. Delikat, respectively, showed no symptoms of infection with PVY, of which 3 and 12, respectively, were also resistant to foliage blight. One hybrid of cph + Agave performed best in CPB and PVY resistance tests. Of the somatic hybrids that were evaluated for their morphology and tuber yield in the field for 3 years, four did not differ significantly in tuber yield from the parental and standard cultivars. Progeny of hybrids was obtained by pollinating them with pollen from a cultivar, selfing or cross-pollination. The results confirm that protoplast electrofusion can be used to transfer the CPB, PVY and late blight resistance of cph into somatic hybrids. These resistant somatic hybrids can be used in pre-breeding studies, molecular characterization and for increasing the genetic diversity available for potato breeding by marker-assisted combinatorial introgression into the potato gene pool.

  16. Achievable rate maximization for decode-and-forward MIMO-OFDM networks with an energy harvesting relay.

    PubMed

    Du, Guanyao; Yu, Jianjun

    2016-01-01

    This paper investigates the system achievable rate for the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system with an energy harvesting (EH) relay. We first propose two protocols, time switching-based decode-and-forward relaying (TSDFR) and a flexible power splitting-based DF relaying (PSDFR), which consider two practical receiver architectures to enable simultaneous information processing and energy harvesting at the relay. In the PSDFR protocol, we introduce a temporal parameter to describe the time division pattern between the two phases, which makes the protocol more flexible and general. In order to explore the system performance limit, we discuss the system achievable rate theoretically and formulate two optimization problems for the proposed protocols to maximize the system achievable rate. Since the problems are non-convex and difficult to solve, we first analyze them theoretically and obtain some explicit results, then design an augmented Lagrangian penalty function (ALPF) based algorithm for them. Numerical results are provided to validate the accuracy of our analytical results and the effectiveness of the proposed ALPF algorithm. It is shown that PSDFR outperforms TSDFR, achieving a higher rate in such a MIMO-OFDM relaying system. We also investigate the impacts of the relay location, the number of antennas and the number of subcarriers on the system performance. Specifically, the relay position greatly affects the performance of both protocols, and a relatively lower achievable rate results when the relay is placed midway between the source and the destination. This is different from the MIMO-OFDM DF relaying system without EH. Moreover, the optimal factor indicating the time division pattern between the two phases in the PSDFR protocol is always above 0.8, which means that the common division of the total transmission time into two equal phases in previous work applying PS-based receivers is not optimal.
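
    To make the role of the power-splitting ratio and the temporal parameter tangible, the sketch below grid-searches both for a deliberately simplified single-antenna, single-carrier DF link with an energy-harvesting relay. The rate and harvesting expressions, the function df_rate and all channel numbers are textbook-style simplifications assumed for this example, not the paper's MIMO-OFDM formulation or its ALPF algorithm.

    ```python
    import numpy as np

    def df_rate(alpha, rho, P_s=1.0, h1=1.0, h2=0.8, eta=0.6, noise=0.01):
        """Achievable DF rate of a two-phase protocol: phase 1 (fraction alpha)
        source -> relay, with the relay splitting received power (rho harvested,
        1-rho decoded); phase 2 (fraction 1-alpha) relay -> destination using
        only the harvested energy."""
        if not (0 < alpha < 1 and 0 <= rho < 1):
            return 0.0
        r1 = alpha * np.log2(1 + (1 - rho) * P_s * abs(h1) ** 2 / noise)
        harvested = eta * rho * P_s * abs(h1) ** 2 * alpha
        p_relay = harvested / (1 - alpha)
        r2 = (1 - alpha) * np.log2(1 + p_relay * abs(h2) ** 2 / noise)
        return min(r1, r2)

    alphas = np.linspace(0.05, 0.95, 91)
    rhos = np.linspace(0.0, 0.95, 96)
    best = max(((df_rate(a, r), a, r) for a in alphas for r in rhos))
    print(f"best rate {best[0]:.3f} bit/s/Hz at alpha={best[1]:.2f}, rho={best[2]:.2f}")
    ```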

  17. Use of C-Arm Cone Beam CT During Hepatic Radioembolization: Protocol Optimization for Extrahepatic Shunting and Parenchymal Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoven, Andor F. van den, E-mail: a.f.vandenhoven@umcutrecht.nl; Prince, Jip F.; Keizer, Bart de

    Purpose: To optimize a C-arm computed tomography (CT) protocol for radioembolization (RE), specifically for extrahepatic shunting and parenchymal enhancement. Materials and Methods: A prospective development study was performed per IDEAL recommendations. A literature-based protocol was applied in patients with unresectable and chemorefractory liver malignancies undergoing an angiography before radioembolization. Contrast and scan settings were adjusted stepwise and repeatedly reviewed in a consensus meeting. Afterwards, two independent raters analyzed all scans. A third rater evaluated the SPECT/CT scans as a reference standard for extrahepatic shunting and lack of target segment perfusion. Results: Fifty scans were obtained in 29 procedures. The first protocol, using a 6 s delay and 10 s scan, showed insufficient parenchymal enhancement. In the second protocol, the delay was determined by timing parenchymal enhancement on DSA power injection (median 8 s, range 4–10 s): enhancement improved, but breathing artifacts increased (from 0 to 27 %). Since the third protocol with a 5 s scan decremented subjective image quality, the second protocol was deemed optimal. Median CNR (range) was 1.7 (0.6–3.2), 2.2 (−1.4–4.0), and 2.1 (−0.3–3.0) for protocol 1, 2, and 3 (p = 0.80). Delineation of perfused segments was possible in 57, 73, and 44 % of scans (p = 0.13). In all C-arm CTs combined, the negative predictive value was 95 % for extrahepatic shunting and 83 % for lack of target segment perfusion. Conclusion: An optimized C-arm CT protocol was developed that can be used to detect extrahepatic shunts and non-perfusion of target segments during RE.

  18. Use of a channelized Hotelling observer to assess CT image quality and optimize dose reduction for iteratively reconstructed images.

    PubMed

    Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H

    2017-07-01

    The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks (three different contrast levels and seven different diameters) was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
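
    A channelized Hotelling observer itself is compact enough to sketch: project each image onto a small set of channels, build the Hotelling template from the channel-space mean difference and covariance, and score detectability from the resulting test statistics. The synthetic white-noise images, the Gaussian signal and the difference-of-Gaussian channels below are stand-ins assumed for the example, not the phantom data or channel set used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    SIZE = 32

    def gaussian_blob(sigma):
        y, x = np.mgrid[:SIZE, :SIZE] - (SIZE - 1) / 2.0
        return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

    # Difference-of-Gaussian channels (a common rotationally symmetric choice).
    sigmas = [1.0, 2.0, 4.0, 8.0]
    channels = np.stack([(gaussian_blob(s * 1.67) - gaussian_blob(s)).ravel()
                         for s in sigmas], axis=1)        # (pixels, n_channels)

    signal = 0.4 * gaussian_blob(2.5).ravel()             # low-contrast lesion
    def sample_images(n, with_signal):
        noise = rng.normal(0.0, 1.0, size=(n, SIZE * SIZE))   # white-noise background
        return noise + (signal if with_signal else 0.0)

    absent = sample_images(400, False) @ channels          # channel outputs, class 0
    present = sample_images(400, True) @ channels          # channel outputs, class 1

    S = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
    w = np.linalg.solve(S, present.mean(axis=0) - absent.mean(axis=0))  # template

    t0, t1 = absent @ w, present @ w
    d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
    auc = (t1[:, None] > t0[None, :]).mean()               # empirical AUC (~Az)
    print(f"CHO detectability d' = {d_prime:.2f}, AUC ~ {auc:.3f}")
    ```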

  19. Optimized quantum sensing with a single electron spin using real-time adaptive measurements.

    PubMed

    Bonato, C; Blok, M S; Dinani, H T; Berry, D W; Markham, M L; Twitchen, D J; Hanson, R

    2016-03-01

    Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  20. Optimized quantum sensing with a single electron spin using real-time adaptive measurements

    NASA Astrophysics Data System (ADS)

    Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.

    2016-03-01

    Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  1. Modelling optimal location for pre-hospital helicopter emergency medical services.

    PubMed

    Schuurman, Nadine; Bell, Nathaniel J; L'Heureux, Randy; Hameed, Syed M

    2009-05-09

    Increasing the range and scope of early activation/auto launch helicopter emergency medical services (HEMS) may alleviate unnecessary injury mortality that disproportionately affects rural populations. To date, attempts to develop a quantitative framework for the optimal location of HEMS facilities have been absent. Our analysis used five years of critical care data from tertiary health care facilities, spatial data on origin of transport and accurate road travel time catchments for tertiary centres. A location optimization model was developed to identify where the expansion of HEMS would cover the greatest population among those currently underserved. The protocol was developed using geographic information systems (GIS) to measure populations, distances and accessibility to services. Our model determined Royal Inland Hospital (RIH) was the optimal site for an expanded HEMS - based on denominator population, distance to services and historical usage patterns. GIS based protocols for location of emergency medical resources can provide supportive evidence for allocation decisions - especially when resources are limited. In this study, we were able to demonstrate conclusively that a logical choice exists for location of additional HEMS. This protocol could be extended to location analysis for other emergency and health services.

  2. Three-input majority function as the unique optimal function for the bias amplification using nonlocal boxes

    NASA Astrophysics Data System (ADS)

    Mori, Ryuhei

    2016-11-01

    Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3 + √6)/6 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2 + √2)/4 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for the trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious if another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3 + √6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to be nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.

  3. Rapid Design of Knowledge-Based Scoring Potentials for Enrichment of Near-Native Geometries in Protein-Protein Docking.

    PubMed

    Sasse, Alexander; de Vries, Sjoerd J; Schindler, Christina E M; de Beauchêne, Isaure Chauvot; Zacharias, Martin

    2017-01-01

    Protein-protein docking protocols aim to predict the structures of protein-protein complexes based on the structure of individual partners. Docking protocols usually include several steps of sampling, clustering, refinement and re-scoring. The scoring step is one of the bottlenecks in the performance of many state-of-the-art protocols. The performance of scoring functions depends on the quality of the generated structures and its coupling to the sampling algorithm. A tool kit, GRADSCOPT (GRid Accelerated Directly SCoring OPTimizing), was designed to allow rapid development and optimization of different knowledge-based scoring potentials for specific objectives in protein-protein docking. Different atomistic and coarse-grained potentials can be created by a grid-accelerated directly scoring dependent Monte-Carlo annealing or by a linear regression optimization. We demonstrate that the scoring functions generated by our approach are similar to or even outperform state-of-the-art scoring functions for predicting near-native solutions. Of additional importance, we find that potentials specifically trained to identify the native bound complex perform rather poorly on identifying acceptable or medium quality (near-native) solutions. In contrast, atomistic long-range contact potentials can increase the average fraction of near-native poses by up to a factor 2.5 in the best scored 1% decoys (compared to existing scoring), emphasizing the need of specific docking potentials for different steps in the docking protocol.

  4. Compliance with AAPM Practice Guideline 1.a: CT Protocol Management and Review — from the perspective of a university hospital

    PubMed Central

    Bour, Robert K.; Pozniak, Myron; Ranallo, Frank N.

    2015-01-01

    The purpose of this paper is to describe our experience with the AAPM Medical Physics Practice Guideline 1.a: “CT Protocol Management and Review Practice Guideline”. Specifically, we will share how our institution's quality management system addresses the suggestions within the AAPM practice report. We feel this paper is needed as it was beyond the scope of the AAPM practice guideline to provide specific details on fulfilling individual guidelines. Our hope is that other institutions will be able to emulate some of our practices and that this article would encourage other types of centers (e.g., community hospitals) to share their methodology for approaching CT protocol optimization and quality control. Our institution had a functioning CT protocol optimization process, albeit informal, since we began using CT. Recently, we made our protocol development and validation process compliant with a number of the ISO 9001:2008 clauses and this required us to formalize the roles of the members of our CT protocol optimization team. We rely heavily on PACS‐based IT solutions for acquiring radiologist feedback on the performance of our CT protocols and the performance of our CT scanners in terms of dose (scanner output) and the function of the automatic tube current modulation. Specific details on our quality management system covering both quality control and ongoing optimization have been provided. The roles of each CT protocol team member have been defined, and the critical role that IT solutions provides for the management of files and the monitoring of CT protocols has been reviewed. In addition, the invaluable role management provides by being a champion for the project has been explained; lack of a project champion will mitigate the efforts of a CT protocol optimization team. Meeting the guidelines set forth in the AAPM practice guideline was not inherently difficult, but did, in our case, require the cooperation of radiologists, technologists, physicists, IT, administrative staff, and hospital management. Some of the IT solutions presented in this paper are novel and currently unique to our institution. PACS number: 87.57.Q PMID:26103176

  5. Purified Dendritic Cell-Tumor Fusion Hybrids Supplemented with Non-Adherent Dendritic Cells Fraction Are Superior Activators of Antitumor Immunity

    PubMed Central

    Wang, Yucai; Liu, Yunyan; Zheng, Lianhe

    2014-01-01

    Background Strong evidence supports the DC-tumor fusion hybrid vaccination strategy, but the best fusion product components to use remains controversial. Fusion products contain DC-tumor fusion hybrids, unfused DCs and unfused tumor cells. Various fractions have been used in previous studies, including purified hybrids, the adherent cell fraction or the whole fusion mixture. The extent to which the hybrids themselves or other components are responsible for antitumor immunity or which components should be used to maximize the antitumor immunity remains unknown. Methods Patient-derived breast tumor cells and DCs were electro-fused and purified. The antitumor immune responses induced by the purified hybrids and the other components were compared. Results Except for DC-tumor hybrids, the non-adherent cell fraction containing mainly unfused DCs also contributed a lot in antitumor immunity. Purified hybrids supplemented with the non-adherent cell population elicited the most powerful antitumor immune response. After irradiation and electro-fusion, tumor cells underwent necrosis, and the unfused DCs phagocytosed the necrotic tumor cells or tumor debris, which resulted in significant DC maturation. This may be the immunogenicity mechanism of the non-adherent unfused DCs fraction. Conclusions The non-adherent cell fraction (containing mainly unfused DCs) from total DC/tumor fusion products had enhanced immunogenicity that resulted from apoptotic/necrotic tumor cell phagocytosis and increased DC maturation. Purified fusion hybrids supplemented with the non-adherent cell population enhanced the antitumor immune responses, avoiding unnecessary use of the tumor cell fraction, which has many drawbacks. Purified hybrids supplemented with the non-adherent cell fraction may represent a better approach to the DC-tumor fusion hybrid vaccination strategy. PMID:24466232

  6. Practical quantum appointment scheduling

    NASA Astrophysics Data System (ADS)

    Touchette, Dave; Lovitz, Benjamin; Lütkenhaus, Norbert

    2018-04-01

    We propose a protocol based on coherent states and linear optics operations for solving the appointment-scheduling problem. Our main protocol leaks strictly less information about each party's input than the optimal classical protocol, even when considering experimental errors. Along with the ability to generate constant-amplitude coherent states over two modes, this protocol requires the ability to transfer these modes back-and-forth between the two parties multiple times with very low losses. The implementation requirements are thus still challenging. Along the way, we develop tools to study quantum information cost of interactive protocols in the finite regime.

  7. A two-hop based adaptive routing protocol for real-time wireless sensor networks.

    PubMed

    Rachamalla, Sandhya; Kancherla, Anitha Sheela

    2016-01-01

    One of the most important and challenging issues in wireless sensor networks (WSNs) is to optimally manage the limited energy of nodes without degrading routing efficiency. In this paper, we propose an energy-efficient adaptive routing mechanism for WSNs, which saves node energy by discarding excessively delayed packets without degrading the real-time performance of the underlying routing protocol. It uses an adaptive transmission power algorithm, based on the attenuation of the wireless link, to improve energy efficiency. The proposed routing mechanism can be combined with any geographic routing protocol; its performance is evaluated by integrating it with the well-known two-hop based real-time routing protocol PATH, and the resulting protocol is the energy-efficient adaptive routing protocol (EE-ARP). EE-ARP performs well in terms of energy consumption, deadline miss ratio, packet drop and end-to-end delay.
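
    Two of the mechanisms mentioned above are easy to prototype: dropping packets whose remaining deadline can no longer be met (so no further energy is wasted on them) and scaling transmit power to the estimated link attenuation. The threshold model, the functions should_drop and tx_power_dbm, and the numbers below are illustrative assumptions, not the EE-ARP specification.

    ```python
    from dataclasses import dataclass

    RX_SENSITIVITY_DBM = -90.0     # receiver sensitivity
    LINK_MARGIN_DB = 5.0           # safety margin on top of sensitivity
    P_TX_MAX_DBM = 0.0             # hardware limit

    @dataclass
    class Packet:
        deadline_ms: float         # absolute deadline
        now_ms: float              # current time at the forwarding node
        remaining_hops: int
        per_hop_delay_ms: float    # estimated delay of one more hop

    def should_drop(pkt: Packet) -> bool:
        """Drop if even the estimated remaining path misses the deadline."""
        eta = pkt.now_ms + pkt.remaining_hops * pkt.per_hop_delay_ms
        return eta > pkt.deadline_ms

    def tx_power_dbm(path_loss_db: float) -> float:
        """Smallest power that reaches the next hop with a margin (capped)."""
        needed = RX_SENSITIVITY_DBM + LINK_MARGIN_DB + path_loss_db
        return min(needed, P_TX_MAX_DBM)

    pkt = Packet(deadline_ms=120.0, now_ms=80.0, remaining_hops=3, per_hop_delay_ms=18.0)
    print("drop packet:", should_drop(pkt))
    print("tx power for 82 dB path loss:", tx_power_dbm(82.0), "dBm")
    ```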

  8. Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.

    PubMed

    Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras

    2016-04-01

    There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection when hundreds of samples should be analyzed in a short period of time to assure their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a <4 h magnetic bead-based process with excellent yield and good repeatability. This article demonstrates the next level of this work by automating all steps of the optimized magnetic bead-based protocol from endoglycosidase digestion, through fluorophore labeling and cleanup with high-throughput sample processing in 96-well plate format, using an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid (<3 min) separation to accommodate the high-throughput processing of the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences. © 2015 Society for Laboratory Automation and Screening.

  9. An efficient protocol for the synthesis of highly sensitive indole imines utilizing green chemistry: optimization of reaction conditions.

    PubMed

    Nisar, Bushra; Rubab, Syeda Laila; Raza, Abdul Rauf; Tariq, Sobia; Sultan, Ayesha; Tahir, Muhammad Nawaz

    2018-04-11

    Novel and highly sensitive indole-based imines have been synthesized. Their synthesis has been compared employing a variety of protocols. Ultimately, a convenient, economical and high yielding set of conditions employing green chemistry have been designed for their synthesis.

  10. Optimization of the scan protocols for CT-based material extraction in small animal PET/CT studies

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Ching; Yu, Jhih-An; Yang, Bang-Hung; Wu, Tung-Hsin

    2013-12-01

    We investigated the effects of scan protocols on CT-based material extraction to minimize radiation dose while maintaining sufficient image information in small animal studies. The phantom simulation experiments were performed with the high dose (HD), medium dose (MD) and low dose (LD) protocols at 50, 70 and 80 kVp with varying mA s. The reconstructed CT images were segmented based on Hounsfield unit (HU)-physical density (ρ) calibration curves and the dual-energy CT-based (DECT) method. Compared to the (HU;ρ) method performed on CT images acquired with the 80 kVp HD protocol, a 2-fold improvement in segmentation accuracy and a 7.5-fold reduction in radiation dose were observed when the DECT method was performed on CT images acquired with the 50/80 kVp LD protocol, showing the possibility to reduce radiation dose while achieving high segmentation accuracy.

  11. Numerical simulation of the optimal two-mode attacks for two-way continuous-variable quantum cryptography in reverse reconciliation

    NASA Astrophysics Data System (ADS)

    Zhang, Yichen; Li, Zhengyu; Zhao, Yijia; Yu, Song; Guo, Hong

    2017-02-01

    We analyze the security of the two-way continuous-variable quantum key distribution protocol in reverse reconciliation against general two-mode attacks, which represent all accessible attacks at fixed channel parameters. Rather than against one specific attack model, the expressions for the secret key rates of the two-way protocol are derived against all accessible attack models. It is found that there is an optimal two-mode attack that minimizes the performance of the protocol in terms of both secret key rate and maximal transmission distance. We identify the optimal two-mode attack, give its specific attack model, and show the performance of the two-way protocol against it. Even under the optimal two-mode attack, the performance of the two-way protocol is still better than that of the corresponding one-way protocol, which shows the advantage of making double use of the quantum channel and the potential of long-distance secure communication using a two-way protocol.

  12. High-Throughput Screening Assay for Embryoid Body Differentiation of Human Embryonic Stem Cells

    PubMed Central

    Outten, Joel T.; Gadue, Paul; French, Deborah L.; Diamond, Scott L.

    2012-01-01

    Serum-free human pluripotent stem cell media offer the potential to develop reproducible clinically applicable differentiation strategies and protocols. The vast array of possible growth factor and cytokine combinations for media formulations makes differentiation protocol optimization both labor and cost-intensive. This unit describes a 96-well plate, 4-color flow cytometry-based screening assay to optimize pluripotent stem cell differentiation protocols. We provide conditions both to differentiate human embryonic stem cells (hESCs) to the three primary germ layers, ectoderm, endoderm, and mesoderm, and to utilize flow cytometry to distinguish between them. This assay exhibits low inter-well variability and can be utilized to efficiently screen a variety of media formulations, reducing cost, incubator space, and labor. Protocols can be adapted to a variety of differentiation stages and lineages. PMID:22415836

  13. Profiling optimization for big data transfer over dedicated channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, D.; Wu, Qishi; Rao, Nageswara S

    The transfer of big data is increasingly supported by dedicated channels in high-performance networks, where transport protocols play an important role in maximizing application-level throughput and link utilization. The performance of transport protocols largely depends on their control parameter settings, but it is prohibitively time consuming to conduct an exhaustive search in a large parameter space to find the best set of parameter values. We propose FastProf, a stochastic approximation-based transport profiler, to quickly determine the optimal operational zone of a given data transfer protocol/method over dedicated channels. We implement and test the proposed method using both emulations based on real-life performance measurements and experiments over physical connections with short (2 ms) and long (380 ms) delays. Both the emulation and experimental results show that FastProf significantly reduces the profiling overhead while achieving a level of end-to-end throughput performance comparable to the exhaustive search-based approach.
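
    The essence of a stochastic-approximation profiler is to climb a noisy throughput surface with far fewer measurements than an exhaustive sweep. The sketch below runs a Kiefer-Wolfowitz-style finite-difference ascent over a single continuous parameter; measure_throughput is a hypothetical stand-in for an actual transfer probe, and the decaying gain and perturbation schedules are generic choices, not FastProf's.

    ```python
    import random

    def measure_throughput(concurrency: float) -> float:
        """Hypothetical noisy probe: unknown unimodal throughput curve + noise."""
        true = 9.0 - 0.02 * (concurrency - 24.0) ** 2       # peak near 24 streams
        return max(0.0, true + random.gauss(0.0, 0.4))

    def profile(x0=4.0, iterations=40, lo=1.0, hi=64.0):
        x = x0
        for k in range(1, iterations + 1):
            a_k = 4.0 / k            # decaying step size
            c_k = 2.0 / k ** 0.25    # decaying finite-difference half-width
            g = (measure_throughput(x + c_k) - measure_throughput(x - c_k)) / (2 * c_k)
            x = min(hi, max(lo, x + a_k * g))   # gradient ascent, clipped to range
        return x

    random.seed(3)
    best = profile()
    print(f"estimated optimal concurrency ~ {best:.1f} streams "
          f"(probe cost: 80 measurements)")
    ```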

  14. Phase Transition in Protocols Minimizing Work Fluctuations

    NASA Astrophysics Data System (ADS)

    Solon, Alexandre P.; Horowitz, Jordan M.

    2018-05-01

    For two canonical examples of driven mesoscopic systems—a harmonically trapped Brownian particle and a quantum dot—we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.

  15. Optimization of a human IgG B-cell ELISpot assay for the analysis of vaccine-induced B-cell responses.

    PubMed

    Jahnmatz, Maja; Kesa, Gun; Netterlid, Eva; Buisman, Anne-Marie; Thorstensson, Rigmor; Ahlborg, Niklas

    2013-05-31

    B-cell responses after infection or vaccination are often measured as serum titers of antigen-specific antibodies. Since this does not address the aspect of memory B-cell activity, it may not give a complete picture of the B-cell response. Analysis of memory B cells by ELISpot is therefore an important complement to conventional serology. B-cell ELISpot was developed more than 25 years ago and many assay protocols/reagents would benefit from optimization. We therefore aimed at developing an optimized B-cell ELISpot for the analysis of vaccine-induced human IgG-secreting memory B cells. A protocol was developed based on new monoclonal antibodies to human IgG and biotin-avidin amplification to increase the sensitivity. After comparison of various compounds commonly used to in vitro-activate memory B cells for ELISpot analysis, the TLR agonist R848 plus interleukin (IL)-2 was selected as the most efficient activator combination. The new protocol was subsequently compared to an established protocol, previously used in vaccine studies, based on polyclonal antibodies without biotin avidin amplification and activation of memory B-cells using a mix of antigen, CpG, IL-2 and IL-10. The new protocol displayed significantly better detection sensitivity, shortened the incubation time needed for the activation of memory B cells and reduced the amount of antigen required for the assay. The functionality of the new protocol was confirmed by analyzing specific memory B cells to five different antigens, induced in a limited number of subjects vaccinated against tetanus, diphtheria and pertussis. The limited number of subjects did not allow for a direct comparison with other vaccine studies. Optimization of the B-cell ELISpot will facilitate an improved analysis of IgG-secreting B cells in vaccine studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Impact of uncertain head tissue conductivity in the optimization of transcranial direct current stimulation for an auditory target

    NASA Astrophysics Data System (ADS)

    Schmidt, Christian; Wagner, Sven; Burger, Martin; van Rienen, Ursula; Wolters, Carsten H.

    2015-08-01

    Objective. Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique to modify neural excitability. Using multi-array tDCS, we investigate the influence of inter-individually varying head tissue conductivity profiles on optimal electrode configurations for an auditory cortex stimulation. Approach. In order to quantify the uncertainty of the optimal electrode configurations, multi-variate generalized polynomial chaos expansions of the model solutions are used based on uncertain conductivity profiles of the compartments skin, skull, gray matter, and white matter. Stochastic measures, probability density functions, and sensitivity of the quantities of interest are investigated for each electrode and the current density at the target with the resulting stimulation protocols visualized on the head surface. Main results. We demonstrate that the optimized stimulation protocols are only comprised of a few active electrodes, with tolerable deviations in the stimulation amplitude of the anode. However, large deviations in the order of the uncertainty in the conductivity profiles could be noted in the stimulation protocol of the compensating cathodes. Regarding these main stimulation electrodes, the stimulation protocol was most sensitive to uncertainty in skull conductivity. Finally, the probability that the current density amplitude in the auditory cortex target region is supra-threshold was below 50%. Significance. The results suggest that an uncertain conductivity profile in computational models of tDCS can have a substantial influence on the prediction of optimal stimulation protocols for stimulation of the auditory cortex. The investigations carried out in this study present a possibility to predict the probability of providing a therapeutic effect with an optimized electrode system for future auditory clinical and experimental procedures of tDCS applications.
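
    In generalized polynomial chaos, each quantity of interest u (here, an electrode current or the current density at the target) is expanded in polynomials that are orthogonal with respect to the distribution of the uncertain conductivities ξ = (σ_skin, σ_skull, σ_GM, σ_WM); the generic truncated form is

    \[
      u(\boldsymbol{\xi}) \;\approx\; \sum_{k=0}^{P} c_k\,\Psi_k(\boldsymbol{\xi}),
      \qquad
      c_k = \frac{\langle u, \Psi_k\rangle}{\langle \Psi_k, \Psi_k\rangle},
    \]

    so that the stochastic measures follow directly from the coefficients, e.g. \(\mathbb{E}[u]=c_0\) and \(\operatorname{Var}[u]=\sum_{k\ge 1} c_k^2\,\langle\Psi_k,\Psi_k\rangle\), and Sobol-type sensitivity indices are obtained by grouping the terms that depend on a given conductivity.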

  17. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of setting the transmission rate in an ad hoc network consists of adjusting the power of each node so that the required signal-to-interference ratio (SIR) and the energy needed to transmit from one node to another are satisfied at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our proposal achieves an improvement of 10% compared with the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the combined problem of power, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs about 15% better than the CSMA-CDMA protocol without optimization. We therefore show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
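
    The abstract does not specify the chromosome encoding or fitness function; the minimal sketch below, with made-up channel gains, an assumed SIR threshold and a deliberately simplistic "power proportional to rate" model, only shows the generic GA loop (selection, crossover, mutation) used to pick per-node rates subject to an SIR-style constraint.

    ```python
    import random

    N_NODES, SIR_MIN, RATES = 6, 0.5, [1, 2, 4, 8]   # assumed values
    GAIN = [[random.uniform(0.1, 1.0) for _ in range(N_NODES)] for _ in range(N_NODES)]

    def fitness(rates):
        """Total rate, penalized when a node's SIR falls below SIR_MIN
        (transmit power is crudely taken as proportional to the chosen rate)."""
        total, penalty = sum(rates), 0.0
        for i in range(N_NODES):
            interference = sum(GAIN[j][i] * rates[j] for j in range(N_NODES) if j != i)
            sir = GAIN[i][i] * rates[i] / (1.0 + interference)
            if sir < SIR_MIN:
                penalty += SIR_MIN - sir
        return total - 10.0 * penalty

    def genetic_algorithm(pop_size=30, generations=100, p_mut=0.1):
        pop = [[random.choice(RATES) for _ in range(N_NODES)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]                 # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, N_NODES)           # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < p_mut:                  # mutation
                    child[random.randrange(N_NODES)] = random.choice(RATES)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print("best rate assignment:", genetic_algorithm())
    ```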

  18. Development of a protocol to optimize electric power consumption and life cycle environmental impacts for operation of wastewater treatment plant.

    PubMed

    Piao, Wenhua; Kim, Changwon; Cho, Sunja; Kim, Hyosoo; Kim, Minsoo; Kim, Yejin

    2016-12-01

    In wastewater treatment plants (WWTPs), the portion of operating costs related to electric power consumption is increasing. If electric power consumption were simply decreased, however, it would be difficult to comply with the effluent water quality requirements. In this study, a protocol was proposed to minimize environmental impacts as well as to optimize electric power consumption under conditions that still meet the effluent water quality standards. The protocol comprises a six-phase procedure and was tested using operating data from S-WWTP to prove its applicability. The 11 major operating variables were categorized into three groups using principal component analysis and K-means cluster analysis. Life cycle assessment (LCA) was conducted for each group to deduce the optimal operating conditions for each operating state. Then, employing mathematical modeling, six improvement plans to reduce electric power consumption were derived. The electric power consumption for each suggested plan was estimated using an artificial neural network, followed by a second round of LCA conducted on the plans. As a result, a set of improvement plans was derived for each group that simultaneously optimizes electric power consumption and life cycle environmental impact. Based on these test results, the WWTP operating management protocol presented in this study is deemed able to suggest optimal operating conditions under which power consumption is minimized together with life cycle environmental impact, while allowing the plant to meet water quality requirements.

  19. The Design of Finite State Machine for Asynchronous Replication Protocol

    NASA Astrophysics Data System (ADS)

    Wang, Yanlong; Li, Zhanhuai; Lin, Wei; Hei, Minglei; Hao, Jianhua

    Data replication is a key way to design a disaster tolerance system and to achieve reliability and availability. It is difficult for a replication protocol to deal with diverse and complex environments, which means that data is often less well replicated than it ought to be. To reduce data loss and to optimize replication protocols, we (1) present a finite state machine, (2) run it to manage an asynchronous replication protocol and (3) report a simple evaluation of the asynchronous replication protocol based on our state machine. Our evaluation shows that the state machine keeps the asynchronous replication protocol in the proper state to the largest possible extent under the various events that can occur. It can also help build replication-based disaster tolerance systems that ensure business continuity.
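
    The paper's actual states and transitions are not listed in this abstract; the sketch below is only an illustration of the table-driven style of finite state machine such a protocol could use, with invented states and events rather than the authors' design.

    ```python
    from enum import Enum, auto

    class State(Enum):
        IDLE = auto(); SYNCING = auto(); CONSISTENT = auto()
        DEGRADED = auto(); FAILOVER = auto()

    class Event(Enum):
        WRITE = auto(); ACK = auto(); LINK_DOWN = auto()
        LINK_UP = auto(); PRIMARY_FAIL = auto()

    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        (State.IDLE, Event.WRITE): State.SYNCING,
        (State.SYNCING, Event.ACK): State.CONSISTENT,
        (State.CONSISTENT, Event.WRITE): State.SYNCING,
        (State.SYNCING, Event.LINK_DOWN): State.DEGRADED,
        (State.CONSISTENT, Event.LINK_DOWN): State.DEGRADED,
        (State.DEGRADED, Event.LINK_UP): State.SYNCING,
        (State.DEGRADED, Event.PRIMARY_FAIL): State.FAILOVER,
    }

    class ReplicationFSM:
        def __init__(self):
            self.state = State.IDLE

        def handle(self, event: Event) -> State:
            # Unknown (state, event) pairs keep the current state, so unexpected
            # events can never drive the protocol into an undefined state.
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    fsm = ReplicationFSM()
    for ev in [Event.WRITE, Event.LINK_DOWN, Event.LINK_UP, Event.ACK]:
        print(ev.name, "->", fsm.handle(ev).name)
    ```

    Keeping the transitions in an explicit table is what makes it straightforward to argue that the protocol always remains in a well-defined state for every possible event, which is the property emphasized above.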

  20. Optimal diabatic dynamics of Majorana-based quantum gates

    NASA Astrophysics Data System (ADS)

    Rahmani, Armin; Seradjeh, Babak; Franz, Marcel

    2017-08-01

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. Using Pontryagin's maximum principle, we show that gates equivalent to perfect adiabatic braiding can be robustly implemented in finite time through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.

  1. Control systems and coordination protocols of the secretory pathway.

    PubMed

    Luini, Alberto; Mavelli, Gabriella; Jung, Juan; Cancino, Jorge

    2014-01-01

    Like other cellular modules, the secretory pathway and the Golgi complex are likely to be supervised by control systems that support homeostasis and optimal functionality under all conditions, including external and internal perturbations. Moreover, the secretory apparatus must be functionally connected with other cellular modules, such as energy metabolism and protein degradation, via specific rules of interaction, or "coordination protocols". These regulatory devices are of fundamental importance for optimal function; however, they are generally "hidden" at steady state. The molecular components and the architecture of the control systems and coordination protocols of the secretory pathway are beginning to emerge through studies based on the use of controlled transport-specific perturbations aimed specifically at the detection and analysis of these internal regulatory devices.

  2. Evidence-based recommendations for bowel cleansing before colonoscopy in children: a report from a national working group.

    PubMed

    Turner, D; Levine, A; Weiss, B; Hirsh, A; Shamir, R; Shaoul, R; Berkowitz, D; Bujanover, Y; Cohen, S; Eshach-Adiv, O; Jamal, Gera; Kori, M; Lerner, A; On, A; Rachman, L; Rosenbach, Y; Shamaly, H; Shteyer, E; Silbermintz, A; Yerushalmi, B

    2010-12-01

    There are no current recommendations for bowel cleansing before colonoscopy in children. The Israeli Society of Pediatric Gastroenterology and Nutrition (ISPGAN) established an iterative working group to formulate evidence-based guidelines for bowel cleansing in children prior to colonoscopy. Data were collected by systematic review of the literature and via a national-based survey of all endoscopy units in Israel. Based on the strength of evidence, the Committee reached consensus on six recommended protocols in children. Guidelines were finalized after an open audit of ISPGAN members. Data on 900 colonoscopies per year were accrued, which represents all annual pediatric colonoscopies performed in Israel. Based on the literature review, the national survey, and the open audit, several age-stratified pediatric cleansing protocols were proposed: two PEG-ELS protocols (polyethylene-glycol with electrolyte solution); Picolax-based protocol (sodium picosulphate with magnesium citrate); sodium phosphate protocol (only in children over the age of 12 years who are at low risk for renal damage); stimulant laxative-based protocol (e.g. bisacodyl); and a PEG 3350-based protocol. A population-based analysis estimated that the acute toxicity rate of oral sodium phosphate is at most 3/7320 colonoscopies (0.041%). Recommendations on diet and enema use are provided in relation to each proposed protocol. There is no ideal bowel cleansing regimen and, thus, various protocols are in use. We propose several evidence-based protocols to optimize bowel cleansing in children prior to colonoscopy and minimize adverse events. © Georg Thieme Verlag KG Stuttgart · New York.

  3. j5 DNA assembly design automation.

    PubMed

    Hillson, Nathan J

    2014-01-01

    Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers in constructing DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software tools integrated with j5 add significant value to the j5 design process through graphical user-interface enhancements and downstream liquid-handling robotic laboratory automation.

  4. Prevention of Osmotic Injury to Human Umbilical Vein Endothelial Cells for Biopreservation: A First Step Toward Biobanking of Endothelial Cells for Vascular Tissue Engineering.

    PubMed

    Niu, Dan; Zhao, Gang; Liu, Xiaoli; Zhou, Ping; Cao, Yunxia

    2016-03-01

    High-survival-rate cryopreservation of endothelial cells plays a critical role in vascular tissue engineering, while minimization of osmotic injury is the first step toward successful cryopreservation. We designed a low-cost, easy-to-use, microfluidics-based microperfusion chamber to investigate the osmotic responses of human umbilical vein endothelial cells (HUVECs) at different temperatures, and then optimized the protocols for using cryoprotective agents (CPAs) to minimize osmotic injuries and improve the processes before freezing and after thawing. The fundamental cryobiological parameters were measured using the microperfusion chamber, and the protocols optimized with these parameters were then confirmed by survival evaluation and cell proliferation experiments. It was revealed for the first time that HUVECs have an unusually small permeability coefficient for Me2SO. Even at the concentration well established for slow freezing of cells (1.5 M), one-step removal of CPAs from HUVECs might result in inevitable osmotic injuries, indicating that multiple-step removal is essential. Further experiments revealed that multistep removal of 1.5 M Me2SO at 25°C was the best protocol investigated, in good agreement with theory. These results should prove invaluable for the optimization of cryopreservation protocols for HUVECs.

  5. Integration of Molecular Dynamics Based Predictions into the Optimization of De Novo Protein Designs: Limitations and Benefits.

    PubMed

    Carvalho, Henrique F; Barbosa, Arménio J M; Roque, Ana C A; Iranzo, Olga; Branco, Ricardo J F

    2017-01-01

    Recent advances in de novo protein design have gained considerable insight from the intrinsic dynamics of proteins, through the integration of molecular dynamics simulation protocols into the state-of-the-art de novo protein design workflows used nowadays. With this protocol we illustrate how to set up and run a molecular dynamics simulation followed by a functional protein dynamics analysis. New users will be introduced to some useful open-source computational tools, including the GROMACS molecular dynamics simulation software package and ProDy for protein structural dynamics analysis.

  6. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  7. Single-photon quantum key distribution in the presence of loss

    NASA Astrophysics Data System (ADS)

    Curty, Marcos; Moroder, Tobias

    2007-05-01

    We investigate two-way and one-way single-photon quantum key distribution (QKD) protocols in the presence of loss introduced by the quantum channel. Our analysis is based on a simple precondition for secure QKD in each case. In particular, the legitimate users need to prove that there exists no separable state (in the case of two-way QKD), or that there exists no quantum state having a symmetric extension (one-way QKD), that is compatible with the available measurement results. We show that both criteria can be formulated as a convex optimization problem known as a semidefinite program, which can be efficiently solved. Moreover, we prove that the solution to the dual optimization corresponds to the evaluation of an optimal witness operator that belongs to the minimal verification set for the given two-way (or one-way) QKD protocol. A positive expectation value of this optimal witness operator indicates that no secret key can be distilled from the available measurement results. We apply this analysis to several well-known single-photon QKD protocols under losses.
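
    The precondition described above takes the generic shape of a semidefinite program; schematically (the concrete operator \(\mathcal{A}\) and data depend on the protocol and the observed statistics):

    \[
    \begin{aligned}
    \text{(primal)}\quad & \min_{X}\ \operatorname{Tr}(C X)
      \quad \text{s.t.}\ \ \mathcal{A}(X) = b,\ \ X \succeq 0,\\
    \text{(dual)}\quad & \max_{y}\ b^{\mathsf T} y
      \quad \text{s.t.}\ \ C - \mathcal{A}^{\dagger}(y) \succeq 0,
    \end{aligned}
    \]

    where the constraints \(\mathcal{A}(X)=b\) encode compatibility with the observed measurement results and the dual solution supplies the witness operator referred to in the abstract.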

  8. Advertisement-Based Energy Efficient Medium Access Protocols for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Ray, Surjya Sarathi

    One of the main challenges that prevents the large-scale deployment of Wireless Sensor Networks (WSNs) is providing the applications with the required quality of service (QoS) given the sensor nodes' limited energy supplies. WSNs are an important tool in supporting applications ranging from environmental and industrial monitoring, to battlefield surveillance and traffic control, among others. Most of these applications require sensors to function for long periods of time without human intervention and without battery replacement. Therefore, energy conservation is one of the main goals for protocols for WSNs. Energy conservation can be performed in different layers of the protocol stack. In particular, as the medium access control (MAC) layer can access and control the radio directly, large energy savings are possible through intelligent MAC protocol design. To maximize the network lifetime, MAC protocols for WSNs aim to minimize idle listening of the sensor nodes, packet collisions, and overhearing. Several approaches such as duty cycling and low power listening have been proposed at the MAC layer to achieve energy efficiency. In this thesis, I explore the possibility of further energy savings through the advertisement of data packets in the MAC layer. In the first part of my research, I propose Advertisement-MAC or ADV-MAC, a new MAC protocol for WSNs that utilizes the concept of advertising for data contention. This technique lets nodes listen dynamically to any desired transmission and sleep during transmissions not of interest. This minimizes the energy lost in idle listening and overhearing while maintaining an adaptive duty cycle to handle variable loads. Additionally, ADV-MAC enables energy efficient MAC-level multicasting. An analytical model for the packet delivery ratio and the energy consumption of the protocol is also proposed. The analytical model is verified with simulations and is used to choose an optimal value of the advertisement period. Simulations show that the optimized ADV-MAC provides substantial energy gains (50% to 70% less than other MAC protocols for WSNs such as T-MAC and S-MAC for the scenarios investigated) while faring as well as T-MAC in terms of packet delivery ratio and latency. Although ADV-MAC provides substantial energy gains over S-MAC and T-MAC, it is not optimal in terms of energy savings because contention is done twice: once in the Advertisement Period and once in the Data Period. In the next part of my research, the second contention in the Data Period is eliminated and the advantages of contention-based and TDMA-based protocols are combined to form Advertisement-based Time-Division Multiple Access (ATMA), a distributed TDMA-based MAC protocol for WSNs. ATMA utilizes the bursty nature of the traffic to prevent energy waste through advertisements and reservations for data slots. Extensive simulations and qualitative analysis show that with bursty traffic, ATMA outperforms contention-based protocols (S-MAC, T-MAC and ADV-MAC), a TDMA-based protocol (TRAMA) and hybrid protocols (Z-MAC and IEEE 802.15.4). ATMA provides energy reductions of up to 80%, while providing the best packet delivery ratio (close to 100%) and latency among all the investigated protocols. Simulations alone cannot reflect many of the challenges faced by real implementations of MAC protocols, such as clock-drift, synchronization, imperfect physical layers, and irregular interference from other transmissions.
Such issues may cripple a protocol that otherwise performs very well in software simulations. Hence, to validate my research, I conclude with a hardware implementation of the ATMA protocol on SORA (Software Radio), developed by Microsoft Research Asia. SORA is a reprogrammable Software Defined Radio (SDR) platform that satisfies the throughput and timing requirements of modern wireless protocols while utilizing the rich general purpose PC development environment. Experimental results obtained from the hardware implementation of ATMA closely mirror the simulation results obtained for a single hop network with 4 nodes.
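
    The thesis derives its energy figures from an analytical model that is not reproduced here; the toy calculation below only illustrates the qualitative argument, comparing a node that idles through a whole active period with one that sleeps through advertised transmissions not addressed to it (all power levels, durations and traffic fractions are invented).

    ```python
    # Invented radio parameters (typical orders of magnitude for WSN motes).
    P_RX_IDLE = 60e-3      # W, radio listening/idle
    P_SLEEP   = 3e-6       # W, radio asleep
    T_ACTIVE  = 0.5        # s, active portion of one duty cycle
    T_ADV     = 0.05       # s, advertisement period at the start of the cycle
    FRACTION_OF_INTEREST = 0.2   # fraction of advertised packets this node wants

    def energy_plain_duty_cycle():
        """Node stays in idle listening for the whole active period."""
        return P_RX_IDLE * T_ACTIVE

    def energy_advertisement_based():
        """Node listens to the short advertisement period, then sleeps unless
        one of the advertised packets is addressed to it."""
        listen = T_ADV + FRACTION_OF_INTEREST * (T_ACTIVE - T_ADV)
        sleep = (1.0 - FRACTION_OF_INTEREST) * (T_ACTIVE - T_ADV)
        return P_RX_IDLE * listen + P_SLEEP * sleep

    e0, e1 = energy_plain_duty_cycle(), energy_advertisement_based()
    print(f"per-cycle energy: {e0*1e3:.2f} mJ vs {e1*1e3:.2f} mJ "
          f"({100*(1 - e1/e0):.0f}% saved)")
    ```

    With these made-up numbers the advertisement-based node spends roughly 70% less energy per cycle, which is consistent in spirit with the savings reported above, though the real gains depend on traffic patterns and the full protocol model.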

  9. Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

    NASA Astrophysics Data System (ADS)

    Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam

    2017-11-01

    Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

  10. Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Carek, David A.

    2005-01-01

    This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
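
    The paper's exact relations are not reproduced in this abstract; under the usual independent-bit-error assumption, a packet of L bits carrying an H-bit header is delivered intact with probability (1 - p)^L, giving the efficiency eta(L) = ((L - H)/L)(1 - p)^L, and the optimal packet size can be located numerically, as in the sketch below (the 48-byte header is an arbitrary assumption).

    ```python
    def efficiency(L, H, p):
        """User-data fraction of raw link capacity for L-bit packets carrying an
        H-bit header over a link with independent bit error ratio p."""
        return (L - H) / L * (1.0 - p) ** L

    def optimal_packet_bits(H, p, l_max=200_000):
        """Numerically locate the packet size (in bits) that maximizes efficiency."""
        return max(range(H + 8, l_max, 8), key=lambda L: efficiency(L, H, p))

    H = 48 * 8                                  # assumed 48-byte header
    for ber in (1e-7, 1e-6, 1e-5):
        L = optimal_packet_bits(H, ber)
        print(f"BER={ber:.0e}: best packet = {L // 8} bytes, "
              f"efficiency = {efficiency(L, H, ber):.3f}")
    ```

    The familiar trade-off is visible in the two factors: larger packets dilute the fixed header cost but are more likely to be corrupted, so the optimum shrinks as the bit error ratio grows.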

  11. Experimental high-speed network

    NASA Astrophysics Data System (ADS)

    McNeill, Kevin M.; Klein, William P.; Vercillo, Richard; Alsafadi, Yasser H.; Parra, Miguel V.; Dallas, William J.

    1993-09-01

    Many existing local area networking protocols currently applied in medical imaging were originally designed for relatively low-speed, low-volume networking. These protocols utilize small packet sizes appropriate for text based communication. Local area networks of this type typically provide raw bandwidth under 125 MHz. These older network technologies are not optimized for the low delay, high data traffic environment of a totally digital radiology department. Some current implementations use point-to-point links when greater bandwidth is required. However, the use of point-to-point communications for a total digital radiology department network presents many disadvantages. This paper describes work on an experimental multi-access local area network called XFT. The work includes the protocol specification, and the design and implementation of network interface hardware and software. The protocol specifies the Physical and Data Link layers (OSI layers 1 & 2) for a fiber-optic based token ring providing a raw bandwidth of 500 MHz. The protocol design and implementation of the XFT interface hardware includes many features to optimize image transfer and provide flexibility for additional future enhancements which include: a modular hardware design supporting easy portability to a variety of host system buses, a versatile message buffer design providing 16 MB of memory, and the capability to extend the raw bandwidth of the network to 3.0 GHz.

  12. Outcomes of Optimized over Standard Protocol of Rabbit Antithymocyte Globulin for Severe Aplastic Anemia: A Single-Center Experience

    PubMed Central

    Ge, Meili; Shao, Yingqi; Huang, Jinbo; Huang, Zhendong; Zhang, Jing; Nie, Neng; Zheng, Yizhou

    2013-01-01

    Background Previous reports showed that the outcome of rabbit antithymocyte globulin (rATG) was not satisfactory as first-line therapy for severe aplastic anemia (SAA). We explored a modified schedule of administration of rATG. Design and Methods Outcomes of a cohort of 175 SAA patients, including 51 patients administered the standard protocol (3.55 mg/kg/d for 5 days) and 124 cases given the optimized protocol (1.97 mg/kg/d for 9 days) of rATG plus cyclosporine (CSA), were analyzed retrospectively. Results Of all 175 patients, response rates at 3 and 6 months were 36.6% and 56.0%, respectively. The 51 cases who received the standard protocol had poorer responses at 3 (25.5%) and 6 months (41.2%), whereas the 124 patients who received the optimized protocol had better responses at 3 (41.1%, P = 0.14) and 6 months (62.1%, P = 0.01). Higher incidences of infection (57.1% versus 37.9%, P = 0.02) and early mortality (17.9% versus 0.8%, P<0.001) occurred in patients who received the standard protocol compared with the optimized protocol. A 5-year overall survival advantage in favor of the optimized over the standard rATG protocol (76.0% versus 50.3%, P<0.001) was observed. By multivariate analysis, optimized protocol (RR = 2.21, P = 0.04), response at 3 months (RR = 10.31, P = 0.03) and shorter interval (<23 days) between diagnosis and initial dose of rATG (RR = 5.35, P = 0.002) were independent favorable predictors of overall survival. Conclusions The optimized rather than the standard rATG protocol in combination with CSA remained efficacious as a first-line immunosuppressive regimen for SAA. PMID:23554855

  13. Control of size and aspect ratio in hydroquinone-based synthesis of gold nanorods

    NASA Astrophysics Data System (ADS)

    Morasso, Carlo; Picciolini, Silvia; Schiumarini, Domitilla; Mehn, Dora; Ojea-Jiménez, Isaac; Zanchetta, Giuliano; Vanna, Renzo; Bedoni, Marzia; Prosperi, Davide; Gramatica, Furio

    2015-08-01

    In this article, we describe how it is possible to tune the size and the aspect ratio of gold nanorods obtained using a highly efficient protocol based on hydroquinone as a reducing agent, by varying the amounts of CTAB and silver ions present in the "seed-growth" solution. Our approach not only allows us to prepare nanorods with a four-fold increase in Au3+ reduction yield compared with the commonly used protocol based on ascorbic acid, but also allows a remarkable 50-60% reduction in the amount of CTAB needed. In fact, according to our findings, the concentration of CTAB present in the seed-growth solution does not linearly influence the final aspect ratio of the obtained nanorods, and an optimal concentration range between 30 and 50 mM has been identified as the one able to generate particles with more elongated shapes. For the optimized protocol, the effect of the concentration of Ag+ ions in the seed-growth solution and the stability of the obtained particles have also been investigated.

  14. Optimal and secure measurement protocols for quantum sensor networks

    NASA Astrophysics Data System (ADS)

    Eldredge, Zachary; Foss-Feig, Michael; Gross, Jonathan A.; Rolston, S. L.; Gorshkov, Alexey V.

    2018-04-01

    Studies of quantum metrology have shown that the use of many-body entangled states can lead to an enhancement in sensitivity when compared with unentangled states. In this paper, we quantify the metrological advantage of entanglement in a setting where the measured quantity is a linear function of parameters individually coupled to each qubit. We first generalize the Heisenberg limit to the measurement of nonlocal observables in a quantum network, deriving a bound based on the multiparameter quantum Fisher information. We then propose measurement protocols that can make use of Greenberger-Horne-Zeilinger (GHZ) states or spin-squeezed states and show that in the case of GHZ states the protocol is optimal, i.e., it saturates our bound. We also identify nanoscale magnetic resonance imaging as a promising setting for this technology.
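
    For a linear combination q = Σ_i α_i θ_i of locally coupled parameters, the relevant lower bound follows from the multiparameter quantum Cramér-Rao inequality, shown here only in its generic form (the paper's network-specific Heisenberg bound refines it):

    \[
      \operatorname{Var}(\hat q) \;\ge\; \boldsymbol{\alpha}^{\mathsf T} F_Q^{-1}\,\boldsymbol{\alpha},
    \]

    where \(F_Q\) is the quantum Fisher information matrix of the probe state. Entangled probes such as GHZ states can enlarge the Fisher information available along the direction \(\boldsymbol{\alpha}\), which is what allows the GHZ-based protocol described above to saturate the network bound while unentangled states cannot.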

  15. Template-based de novo design for type II kinase inhibitors and its extended application to acetylcholinesterase inhibitors.

    PubMed

    Su, Bo-Han; Huang, Yi-Syuan; Chang, Chia-Yun; Tu, Yi-Shu; Tseng, Yufeng J

    2013-10-31

    There is a compelling need to discover type II inhibitors targeting the unique DFG-out inactive kinase conformation, since they are likely to possess greater potency and selectivity relative to traditional type I inhibitors. Using a known inhibitor, such as a currently available and approved drug or inhibitor, as a template to design new drugs via computational de novo design is helpful when working with known ligand-receptor interactions. This study proposes a new template-based de novo design protocol to discover new inhibitors that preserve and also optimize the binding interactions of the type II kinase template. First, sorafenib (Nexavar) and nilotinib (Tasigna), two type II inhibitors with different ligand-receptor interactions, were selected as the template compounds. The five-step protocol can reassemble each drug from a large fragment library. Our procedure demonstrates that the selected template compounds can be successfully reassembled while the key ligand-receptor interactions are preserved. Furthermore, to demonstrate that the algorithm is able to construct more potent compounds, we considered kinase inhibitors and another protein dataset, acetylcholinesterase (AChE) inhibitors. The de novo optimization was initiated using template compounds with less than optimal activity: a series of aminoisoquinoline and TAK-285 compounds inhibiting type II kinases, and E2020 derivatives inhibiting AChE, respectively. Three compounds with greater potency than the template compound were discovered that were also included in the original congeneric series. This template-based lead optimization protocol with the fragment library can help to automatically design compounds with the preferred binding interactions of known inhibitors and to further optimize the compounds in the binding pockets.

  16. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    USGS Publications Warehouse

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  17. In Vitro Fertilization with Isolated, Single Gametes Results in Zygotic Embryogenesis and Fertile Maize Plants.

    PubMed Central

    Kranz, E; Lorz, H

    1993-01-01

    We demonstrate here the possibility of regenerating phenotypically normal, fertile maize plants via in vitro fertilization of isolated, single sperm and egg cells mediated by electrofusion. The technique leads to the highly efficient formation of polar zygotes, globular structures, proembryos, and transition-phase embryos and to the formation of plants from individually cultured fusion products. Regeneration of plants occurs via embryogenesis and occasionally by polyembryony and organogenesis. Flowering plants can be obtained within 100 days of gamete fusion. Regenerated plants were studied by karyological and morphological analyses, and the segregation of kernel color was determined. The hybrid nature of the plants was confirmed. PMID:12271084

  18. Generation of insulin-producing cells from human bone marrow-derived mesenchymal stem cells: comparison of three differentiation protocols.

    PubMed

    Gabr, Mahmoud M; Zakaria, Mahmoud M; Refaie, Ayman F; Khater, Sherry M; Ashamallah, Sylvia A; Ismail, Amani M; El-Badri, Nagwa; Ghoneim, Mohamed A

    2014-01-01

    Many protocols were utilized for directed differentiation of mesenchymal stem cells (MSCs) to form insulin-producing cells (IPCs). We compared the relative efficiency of three differentiation protocols. Human bone marrow-derived MSCs (HBM-MSCs) were obtained from three insulin-dependent type 2 diabetic patients. Differentiation into IPCs was carried out by three protocols: conophylline-based (one-step protocol), trichostatin-A-based (two-step protocol), and β-mercaptoethanol-based (three-step protocol). At the end of differentiation, cells were evaluated by immunolabeling for insulin production, expression of pancreatic endocrine genes, and release of insulin and c-peptide in response to increasing glucose concentrations. By immunolabeling, the proportion of generated IPCs was modest (≃3%) in all three protocols. All relevant pancreatic endocrine genes, insulin, glucagon, and somatostatin, were expressed. There was a stepwise increase in insulin and c-peptide release in response to glucose challenge, but the released amounts were low when compared with those of pancreatic islets. The yield of functional IPCs following directed differentiation of HBM-MSCs was modest and was comparable among the three tested protocols. Protocols for directed differentiation of MSCs need further optimization in order to be clinically meaningful. To this end, addition of an extracellular matrix and/or a suitable template should be attempted.

  19. Novel Multiplex Fluorescent PCR-Based Method for HLA Typing and Preimplantational Genetic Diagnosis of β-Thalassemia.

    PubMed

    Khosravi, Sharifeh; Salehi, Mansour; Ramezanzadeh, Mahboobeh; Mirzaei, Hamed; Salehi, Rasoul

    2016-05-01

    Thalassemia is curable by bone marrow transplantation; however, finding suitable donors with a defined HLA combination remains a major challenge. Cord blood stem cells with a preselected HLA type, obtained through preimplantation genetic diagnosis (PGD), have proved very useful in addressing the scarcity of HLA-matched bone marrow donors. A thalassemia-trait couple with an affected child was included in this study. We used informative STR markers at the HLA and beta-globin loci to develop a single-cell multiplex fluorescent PCR protocol. The protocol was extensively optimized on single lymphocytes isolated from the couple's peripheral blood. The optimized protocol was then applied to single blastomeres biopsied from day 3 cleavage-stage IVF embryos of the couple. Four IVF embryos were biopsied on day 3, and a single blastomere from each was provided for genetic diagnosis of combined β-thalassemia mutations and HLA typing. Of these, one embryo was diagnosed as homozygous normal for the thalassemia mutation and HLA-matched with the existing affected sibling. The optimized protocol worked well in a clinical PGD cycle for the selection of thalassemia-unaffected embryos with the desired HLA type. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.

  20. A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso

    2011-01-01

    Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908

  1. Design and Methodological Considerations of the Centers for Disease Control and Prevention Urologic and Renal Protocol for the Newborn and Young Child with Spina Bifida

    PubMed Central

    Routh, Jonathan C.; Cheng, Earl Y.; Austin, J. Christopher; Baum, Michelle A.; Gargollo, Patricio C.; Grady, Richard W.; Herron, Adrienne R.; Kim, Steven S.; King, Shelly J.; Koh, Chester J.; Paramsothy, Pangaja; Raman, Lisa; Schechter, Michael S.; Smith, Kathryn A.; Tanaka, Stacy T.; Thibadeau, Judy K.; Walker, William O.; Wallis, M. Chad; Wiener, John S.; Joseph, David B.

    2016-01-01

    Purpose Care of children with spina bifida has significantly advanced in the last half century, resulting in gains in longevity and quality of life for affected children and caregivers. Bladder dysfunction is the norm in patients with spina bifida and may result in infection, renal scarring and chronic kidney disease. However, the optimal urological management for spina bifida related bladder dysfunction is unknown. Materials and Methods In 2012 the Centers for Disease Control and Prevention convened a working group composed of pediatric urologists, nephrologists, epidemiologists, methodologists, community advocates and Centers for Disease Control and Prevention personnel to develop a protocol to optimize urological care of children with spina bifida from the newborn period through age 5 years. Results An iterative quality improvement protocol was selected. In this model participating institutions agree to prospectively treat all newborns with spina bifida using a single consensus based protocol. During the 5-year study period outcomes will be routinely assessed and the protocol adjusted as needed to optimize patient and process outcomes. Primary study outcomes include urinary tract infections, renal scarring, renal function and bladder characteristics. The protocol specifies the timing and use of testing (eg ultrasonography, urodynamics) and interventions (eg intermittent catheterization, prophylactic antibiotics, antimuscarinic medications). Starting in 2014 the Centers for Disease Control and Prevention began funding 9 study sites to implement and evaluate the protocol. Conclusions The Centers for Disease Control and Prevention Urologic and Renal Protocol for the Newborn and Young Child with Spina Bifida began accruing patients in 2015. Assessment in the first 5 years will focus on urinary tract infections, renal function, renal scarring and clinical process improvements. PMID:27475969

  2. Design and Methodological Considerations of the Centers for Disease Control and Prevention Urologic and Renal Protocol for the Newborn and Young Child with Spina Bifida.

    PubMed

    Routh, Jonathan C; Cheng, Earl Y; Austin, J Christopher; Baum, Michelle A; Gargollo, Patricio C; Grady, Richard W; Herron, Adrienne R; Kim, Steven S; King, Shelly J; Koh, Chester J; Paramsothy, Pangaja; Raman, Lisa; Schechter, Michael S; Smith, Kathryn A; Tanaka, Stacy T; Thibadeau, Judy K; Walker, William O; Wallis, M Chad; Wiener, John S; Joseph, David B

    2016-12-01

    Care of children with spina bifida has significantly advanced in the last half century, resulting in gains in longevity and quality of life for affected children and caregivers. Bladder dysfunction is the norm in patients with spina bifida and may result in infection, renal scarring and chronic kidney disease. However, the optimal urological management for spina bifida related bladder dysfunction is unknown. In 2012 the Centers for Disease Control and Prevention convened a working group composed of pediatric urologists, nephrologists, epidemiologists, methodologists, community advocates and Centers for Disease Control and Prevention personnel to develop a protocol to optimize urological care of children with spina bifida from the newborn period through age 5 years. An iterative quality improvement protocol was selected. In this model participating institutions agree to prospectively treat all newborns with spina bifida using a single consensus based protocol. During the 5-year study period outcomes will be routinely assessed and the protocol adjusted as needed to optimize patient and process outcomes. Primary study outcomes include urinary tract infections, renal scarring, renal function and bladder characteristics. The protocol specifies the timing and use of testing (eg ultrasonography, urodynamics) and interventions (eg intermittent catheterization, prophylactic antibiotics, antimuscarinic medications). Starting in 2014 the Centers for Disease Control and Prevention began funding 9 study sites to implement and evaluate the protocol. The Centers for Disease Control and Prevention Urologic and Renal Protocol for the Newborn and Young Child with Spina Bifida began accruing patients in 2015. Assessment in the first 5 years will focus on urinary tract infections, renal function, renal scarring and clinical process improvements. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  3. Time-saving design of experiment protocol for optimization of LC-MS data processing in metabolomic approaches.

    PubMed

    Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine

    2013-08-06

    We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches, including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default setting, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter settings and the threshold method improved the reliability index by about 9.5 times for a standard mixture and 14.5 times for human urine data, and required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
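
    The specific XCMS parameters and ranges tuned in the study are not listed here; the sketch below only shows how a central composite design can be laid out in coded units and decoded to real parameter ranges, with placeholder factor names (a peak-width setting and a noise threshold) standing in for whichever parameters survive the screening step.

    ```python
    import itertools
    import numpy as np

    def central_composite(k, alpha=1.0, n_center=3):
        """Central composite design in coded units for k factors: 2**k factorial
        corners, 2*k axial points at +/-alpha, and n_center center points.
        alpha=1.0 gives a face-centred design; alpha=(2**k)**0.25 a rotatable one."""
        corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
        centers = np.zeros((n_center, k))
        return np.vstack([corners, axial, centers])

    def decode(design, lows, highs):
        """Map coded levels in [-1, 1] back to the real parameter ranges."""
        lows, highs = np.asarray(lows, float), np.asarray(highs, float)
        return (design + 1.0) / 2.0 * (highs - lows) + lows

    # Placeholder factors, e.g. an XCMS peak-width setting and a noise threshold.
    runs = decode(central_composite(k=2), lows=[5, 100], highs=[30, 5000])
    for peak_width, noise_threshold in runs:
        print(f"peak_width={peak_width:6.1f}   noise_threshold={noise_threshold:7.1f}")
    ```

    Each row of the design is one processing run; fitting a quadratic response surface of the reliability index over these runs is what allows the optimum to be located with far fewer evaluations than a full grid search.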

  4. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    PubMed

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  5. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    PubMed

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to reinforce the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
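
    SensorAnt's exact quality function is not reproduced in this abstract; the sketch below shows one plausible way to score a candidate route from minimum residual battery, hop count and average energy, and to reinforce pheromone accordingly (the weights, evaporation rate and node names are all assumptions).

    ```python
    def route_quality(residual_energies, avg_network_energy, w=(0.5, 0.3, 0.2)):
        """Score a candidate route (list of per-node residual battery levels in [0,1]).
        Higher is better: favor routes whose weakest node is still healthy, with
        few hops, and whose average energy stays close to the network average."""
        hops = len(residual_energies)
        min_e = min(residual_energies)
        avg_e = sum(residual_energies) / hops
        return (w[0] * min_e
                + w[1] * (1.0 / hops)
                + w[2] * (1.0 - abs(avg_e - avg_network_energy)))

    def reinforce(pheromone, route, quality, evaporation=0.1):
        """Standard ACO update: evaporate everywhere, deposit along the used route."""
        for link in pheromone:
            pheromone[link] *= (1.0 - evaporation)
        for link in zip(route[:-1], route[1:]):
            pheromone[link] = pheromone.get(link, 1.0) + quality
        return pheromone

    route = ["n1", "n4", "n7", "sink"]
    energies = [0.9, 0.6, 0.75]            # residual battery of the relay nodes
    q = route_quality(energies, avg_network_energy=0.7)
    print(q, reinforce({}, route, q))
    ```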

  6. CHARMM-GUI Input Generator for NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM Simulations Using the CHARMM36 Additive Force Field

    DOE PAGES

    Lee, Jumin; Cheng, Xi; Swails, Jason M.; ...

    2015-11-12

    Here we report that proper treatment of nonbonded interactions is essential for the accuracy of molecular dynamics (MD) simulations, especially in studies of lipid bilayers. The use of the CHARMM36 force field (C36 FF) in different MD simulation programs can result in disagreements with published simulations performed with CHARMM due to differences in the protocols used to treat the long-range and 1-4 nonbonded interactions. In this study, we systematically test the use of the C36 lipid FF in NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM. A wide range of Lennard-Jones (LJ) cutoff schemes and integrator algorithms were tested to find the optimal simulation protocol to best match bilayer properties of six lipids with varying acyl chain saturation and head groups. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer were used to obtain the optimal protocol for each program. MD simulations with all programs were found to reasonably match the DPPC bilayer properties (surface area per lipid, chain order parameters, and area compressibility modulus) obtained using the standard protocol used in CHARMM as well as from experiments. The optimal simulation protocol was then applied to the other five lipid simulations and resulted in excellent agreement between results from most simulation programs as well as with experimental data. AMBER compared least favorably with the expected membrane properties, which appears to be due to its use of the hard-truncation in the LJ potential versus a force-based switching function used to smooth the LJ potential as it approaches the cutoff distance. The optimal simulation protocol for each program has been implemented in CHARMM-GUI. This protocol is expected to be applicable to the remainder of the additive C36 FF including the proteins, nucleic acids, carbohydrates, and small molecules.

  7. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, K; Aldoohan, S; Collier, J

    Purpose: To study image optimization and radiation dose reduction in a pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and to measure effective dose based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, which has been optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and of the radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The added beam-hardening filters shape the energy spectrum, which reduces the dose by 27% with no noticeable change in low-contrast detectability. Conclusion: Effective dose depends strongly on CTDIvol, which in turn depends strongly on the beam-hardening filter. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of image quality.
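
    The dose quantities in this abstract are linked by the standard CT dosimetry relations (the conversion coefficient k is age- and body-region-specific and is indicated only schematically here):

    \[
      \mathrm{DLP} \;=\; \mathrm{CTDI_{vol}} \times L_{\text{scan}},
      \qquad
      E \;\approx\; k \times \mathrm{DLP},
    \]

    so a 27% reduction in CTDIvol at a fixed scan length translates into an approximately 27% reduction in effective dose for the same conversion coefficient.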

  8. CHARMM-GUI Input Generator for NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM Simulations Using the CHARMM36 Additive Force Field.

    PubMed

    Lee, Jumin; Cheng, Xi; Swails, Jason M; Yeom, Min Sun; Eastman, Peter K; Lemkul, Justin A; Wei, Shuai; Buckner, Joshua; Jeong, Jong Cheol; Qi, Yifei; Jo, Sunhwan; Pande, Vijay S; Case, David A; Brooks, Charles L; MacKerell, Alexander D; Klauda, Jeffery B; Im, Wonpil

    2016-01-12

    Proper treatment of nonbonded interactions is essential for the accuracy of molecular dynamics (MD) simulations, especially in studies of lipid bilayers. The use of the CHARMM36 force field (C36 FF) in different MD simulation programs can result in disagreements with published simulations performed with CHARMM due to differences in the protocols used to treat the long-range and 1-4 nonbonded interactions. In this study, we systematically test the use of the C36 lipid FF in NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM. A wide range of Lennard-Jones (LJ) cutoff schemes and integrator algorithms were tested to find the optimal simulation protocol to best match bilayer properties of six lipids with varying acyl chain saturation and head groups. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer were used to obtain the optimal protocol for each program. MD simulations with all programs were found to reasonably match the DPPC bilayer properties (surface area per lipid, chain order parameters, and area compressibility modulus) obtained using the standard protocol used in CHARMM as well as from experiments. The optimal simulation protocol was then applied to the other five lipid simulations and resulted in excellent agreement between results from most simulation programs as well as with experimental data. AMBER compared least favorably with the expected membrane properties, which appears to be due to its use of the hard-truncation in the LJ potential versus a force-based switching function used to smooth the LJ potential as it approaches the cutoff distance. The optimal simulation protocol for each program has been implemented in CHARMM-GUI. This protocol is expected to be applicable to the remainder of the additive C36 FF including the proteins, nucleic acids, carbohydrates, and small molecules.

  9. Optimal molecular profiling of tissue and tissue components: defining the best processing and microdissection methods for biomedical applications.

    PubMed

    Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Su, Gloria H; Emmert-Buck, Michael R

    2005-02-01

    Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This article reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies, and provide detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing, and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high quality, appropriately anatomically tagged scientific results. The lack of optimized protocols is a source of inefficiency in current life science research. Improvement in this area will significantly increase life science quality and productivity. The article is divided into introduction, materials, protocols, and notes sections. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. To get the greatest benefit from this article, readers are advised to read through the entire article first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread entries in each section pertaining to each of these single protocols.

  10. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
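    Of the optimization methods listed, simulated annealing is the easiest to condense into a few lines. The sketch below is a generic, illustrative implementation on a toy one-dimensional objective standing in for a measured sensor value; it is not the authors' multithreaded code, and the cooling schedule and step size are arbitrary choices.

    # Minimal simulated-annealing sketch (illustrative only, not the authors' code).
    import math, random

    def objective(x):
        """Toy response surface; in the paper this would be a measured sensor value."""
        return (x - 2.0) ** 2 + 0.5 * math.sin(5 * x)

    def simulated_annealing(x0, T0=1.0, cooling=0.95, steps=500, step_size=0.5):
        x, best = x0, x0
        T = T0
        for _ in range(steps):
            candidate = x + random.uniform(-step_size, step_size)
            delta = objective(candidate) - objective(x)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / T):
                x = candidate
                if objective(x) < objective(best):
                    best = x
            T *= cooling          # geometric cooling schedule
        return best

    random.seed(1)
    print(f"best x ~ {simulated_annealing(x0=-3.0):.3f}")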

  11. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model-based control techniques to enhance the safe operation of lithium-ion batteries. An overview of the contributions that address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first-principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight into battery charging control using electrochemistry models. Directly implementing full-order, complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible. This chapter develops an approach for obtaining optimal charge control schemes while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced-order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum-time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and insight into battery design for fast charging is provided.
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
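    The reference-governor idea of Chapter 2 can be caricatured with a much simpler model than the electrochemical PDEs used in the dissertation. In the hedged sketch below, a requested charge current is scaled back, via bisection over a prediction horizon, so that a lumped two-state (core/surface) thermal model of the kind mentioned in Chapter 5 never exceeds a temperature bound; all parameter values are invented for illustration.

    # Hedged sketch of a reference-governor-style constraint enforcer on a
    # two-state thermal model. All parameters (R, Rc, Ru, Cc, Cs, T_max) are
    # made up; the dissertation uses electrochemical models and formal RG theory.
    import numpy as np

    R, Rc, Ru = 0.01, 1.9, 3.0      # ohm, K/W, K/W   (hypothetical)
    Cc, Cs = 60.0, 4.0              # J/K             (hypothetical)
    T_amb, T_max, dt = 25.0, 40.0, 1.0

    def step_thermal(Tc, Ts, I):
        """One Euler step of the lumped core/surface thermal model."""
        dTc = (I**2 * R + (Ts - Tc) / Rc) / Cc
        dTs = ((Tc - Ts) / Rc + (T_amb - Ts) / Ru) / Cs
        return Tc + dt * dTc, Ts + dt * dTs

    def governed_current(Tc, Ts, I_req, horizon=120):
        """Largest scaling of the requested current whose predicted core
        temperature stays below T_max over the horizon (bisection on the scale)."""
        lo, hi = 0.0, 1.0
        for _ in range(20):
            mid = 0.5 * (lo + hi)
            tc, ts, ok = Tc, Ts, True
            for _ in range(horizon):
                tc, ts = step_thermal(tc, ts, mid * I_req)
                if tc > T_max:
                    ok = False
                    break
            lo, hi = (mid, hi) if ok else (lo, mid)
        return lo * I_req

    Tc = Ts = 25.0
    for k in range(5):
        I = governed_current(Tc, Ts, I_req=50.0)
        Tc, Ts = step_thermal(Tc, Ts, I)
        print(f"t={k:2d}s  applied current = {I:5.1f} A  core T = {Tc:5.2f} C")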

  12. Evaluation of parameters affecting switchgrass tissue culture: toward a consolidated procedure for Agrobacterium-mediated transformation of switchgrass (Panicum virgatum)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chien-Yuan; Donohoe, Bryon S.; Ahuja, Neha

    Switchgrass (Panicum virgatum), a robust perennial C4-type grass, has been evaluated and designated as a model bioenergy crop by the U.S. DOE and USDA. Conventional breeding of switchgrass for biomass is difficult because the species is self-incompatible. Therefore, direct genetic modification of switchgrass has been considered the more effective approach to tailor switchgrass with traits of interest. Successful transformations have demonstrated increased biomass yields, reduced cell wall recalcitrance, and enhanced saccharification efficiency. Several tissue culture protocols have been previously described to produce transgenic switchgrass lines using different nutrient-based media, co-cultivation approaches, and antibiotic strengths for selection. After evaluating the published protocols, we consolidated these approaches and optimized the process to develop a more efficient protocol for producing transgenic switchgrass. First, seed sterilization was optimized, which led to a 20% increase in the yield of induced calluses. Second, we selected an N6 macronutrient/B5 micronutrient (NB)-based medium for callus induction from mature seeds of the Alamo cultivar, and chose a Murashige and Skoog-based medium to regenerate both Type I and Type II calluses. Third, Agrobacterium-mediated transformation was adopted, resulting in 50-100% positive regenerated transformants after three rounds (2 weeks/round) of antibiotic selection. Genomic DNA PCR, RT-PCR, Southern blotting, visualization of the red fluorescent protein, and histochemical β-glucuronidase (GUS) staining were conducted to confirm positive switchgrass transformants. The optimized methods developed here provide an improved strategy to promote the production and selection of callus and the generation of transgenic switchgrass lines. The process for switchgrass transformation has been evaluated and consolidated to devise an improved approach for transgenic switchgrass production. With the optimization of the seed sterilization, callus induction, and regeneration steps, a reliable and effective protocol is established to facilitate switchgrass engineering.

  13. Evaluation of parameters affecting switchgrass tissue culture: toward a consolidated procedure for Agrobacterium-mediated transformation of switchgrass (Panicum virgatum)

    DOE PAGES

    Lin, Chien-Yuan; Donohoe, Bryon S.; Ahuja, Neha; ...

    2017-12-19

    Switchgrass (Panicum virgatum), a robust perennial C4-type grass, has been evaluated and designated as a model bioenergy crop by the U.S. DOE and USDA. Conventional breeding of switchgrass for biomass is difficult because the species is self-incompatible. Therefore, direct genetic modification of switchgrass has been considered the more effective approach to tailor switchgrass with traits of interest. Successful transformations have demonstrated increased biomass yields, reduced cell wall recalcitrance, and enhanced saccharification efficiency. Several tissue culture protocols have been previously described to produce transgenic switchgrass lines using different nutrient-based media, co-cultivation approaches, and antibiotic strengths for selection. After evaluating the published protocols, we consolidated these approaches and optimized the process to develop a more efficient protocol for producing transgenic switchgrass. First, seed sterilization was optimized, which led to a 20% increase in the yield of induced calluses. Second, we selected an N6 macronutrient/B5 micronutrient (NB)-based medium for callus induction from mature seeds of the Alamo cultivar, and chose a Murashige and Skoog-based medium to regenerate both Type I and Type II calluses. Third, Agrobacterium-mediated transformation was adopted, resulting in 50-100% positive regenerated transformants after three rounds (2 weeks/round) of antibiotic selection. Genomic DNA PCR, RT-PCR, Southern blotting, visualization of the red fluorescent protein, and histochemical β-glucuronidase (GUS) staining were conducted to confirm positive switchgrass transformants. The optimized methods developed here provide an improved strategy to promote the production and selection of callus and the generation of transgenic switchgrass lines. The process for switchgrass transformation has been evaluated and consolidated to devise an improved approach for transgenic switchgrass production. With the optimization of the seed sterilization, callus induction, and regeneration steps, a reliable and effective protocol is established to facilitate switchgrass engineering.

  14. Determining the optimal load for jump squats: a review of methods and calculations.

    PubMed

    Dugan, Eric L; Doyle, Tim L A; Humphries, Brendan; Hasson, Christopher J; Newton, Robert U

    2004-08-01

    There has been an increasing volume of research focused on the load that elicits maximum power output during jump squats. Because of a lack of standardization for data collection and analysis protocols, results of much of this research are contradictory. The purpose of this paper is to examine why differing methods of data collection and analysis can lead to conflicting results for maximum power and associated optimal load. Six topics relevant to measurement and reporting of maximum power and optimal load are addressed: (a) data collection equipment, (b) inclusion or exclusion of body weight force in calculations of power, (c) free weight versus Smith machine jump squats, (d) reporting of average versus peak power, (e) reporting of load intensity, and (f) instructions given to athletes/ participants. Based on this information, a standardized protocol for data collection and reporting of jump squat power and optimal load is presented.
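    Two of the methodological choices discussed, (b) inclusion or exclusion of the body-weight force and (d) peak versus average power, can be made concrete with a short calculation on a synthetic force trace. The numbers below are invented and the force profile is idealized; the point is only to show how the same trial yields different power values under different conventions.

    # Illustrative sketch (not from the paper) of how methodological choices change
    # the reported jump-squat power, computed from a synthetic force-plate trace.
    import numpy as np

    g, m, dt = 9.81, 90.0, 0.001                     # body+bar mass in kg (example)
    t = np.arange(0.0, 0.4, dt)
    # Synthetic vertical ground-reaction force for a propulsive phase (N):
    F = m * g + 1200.0 * np.sin(np.pi * t / 0.4)

    a = (F - m * g) / m                              # system acceleration
    v = np.cumsum(a) * dt                            # velocity by numerical integration

    P_total = F * v                                  # power including body-weight force
    P_net = (F - m * g) * v                          # power excluding body-weight force

    print(f"peak P (incl. BW) = {P_total.max():7.1f} W, mean = {P_total.mean():7.1f} W")
    print(f"peak P (excl. BW) = {P_net.max():7.1f} W, mean = {P_net.mean():7.1f} W")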

  15. Cross-layer protocols optimized for real-time multimedia services in energy-constrained mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2003-07-01

    Mobile ad hoc networking (MANET) supports self-organizing, mobile infrastructures and enables an autonomous network of mobile nodes that can operate without a wired backbone. Ad hoc networks are characterized by multihop, wireless connectivity via packet radios and by the need for efficient dynamic protocols. All routers are mobile and can establish connectivity with other nodes only when they are within transmission range. Importantly, ad hoc wireless nodes are resource-constrained, having limited processing, memory, and battery capacity. Delivery of high quality-of-service (QoS), real-time multimedia services from Internet-based applications over a MANET is a challenge not yet achieved by proposed Internet Engineering Task Force (IETF) ad hoc network protocols in terms of standard performance metrics such as end-to-end throughput, packet error rate, and delay. In the distributed operations of route discovery and maintenance, strong interaction occurs across MANET protocol layers, in particular, the physical, media access control (MAC), network, and application layers. The QoS requirements are specified for the service classes by the application layer. The cross-layer design must also satisfy the battery-limited energy constraints by minimizing the distributed power consumption at the nodes and of selected routes. Interactions across the layers are modeled in terms of the set of concatenated design parameters including associated energy costs. Functional dependencies of the QoS metrics are described in terms of the concatenated control parameters. New cross-layer designs are sought that optimize layer interdependencies to achieve the "best" QoS available in an energy-constrained, time-varying network. The protocol design, based on a reactive MANET protocol, adapts the provisioned QoS to dynamic network conditions and residual energy capacities. The cross-layer optimization is based on stochastic dynamic programming conditions derived from time-dependent models of MANET packet flows. Regulation of network behavior is modeled by the optimal control of the conditional rates of multivariate point processes (MVPPs); these rates depend on the concatenated control parameters through a change of probability measure. The MVPP models capture behavior of many service applications, e.g., voice, video and the self-similar behavior of Internet data sessions. Performance verification of the cross-layer protocols, derived from the dynamic programming conditions, can be achieved by embedding the conditions in a reactive routing protocol for MANETs, in a simulation environment, such as the wireless extension of ns-2. A canonical MANET scenario consists of a distributed collection of battery-powered laptops or hand-held terminals, capable of hosting multimedia applications. Simulation details and performance tradeoffs, not presented, remain for a sequel to the paper.

  16. A rapid and efficient SDS-based RNA isolation protocol from different tissues of coffee.

    PubMed

    Huded, Arun Kumar C; Jingade, Pavankumar; Mishra, Manoj Kumar

    2018-03-01

    Isolation of high-quality RNA from coffee is challenging because of high level of polysaccharides, polyphenols and other secondary metabolites. In the present study, a rapid and efficient RNA extraction protocol from different tissues of coffee was optimized. Sufficiently high quality and quantity (225.6-454.8 µg/g) of RNA was obtained by using the optimized protocol. The presence of two distinct bands of 28S rRNA and 18S rRNA in agarose gel proved the intactness of the RNA samples. The average spectrophotometric values of the isolated RNA ranged from 1.96 to 2.02 (A260/280) and 1.95 to 2.14 (A260/230), indicating the high quality of RNA devoid of polyphenols, polysaccharides and protein contamination. In the optimized protocol, addition of PVPP to the extraction buffer and a brief incubation of samples at 65 °C and subsequent purification with potassium acetate resulted in good-quality RNA isolation. The suitability of RNA for downstream processing was confirmed by PCR amplification with cytochrome c oxidase gene-specific primers. The amplification of a single 392 bp fragment using cDNA and 1.5 kb fragment using genomic DNA samples confirmed the absence of DNA contamination. The present protocol is rapid and yielded good quality and quantity of RNA suitable for functional genomics studies.

  17. Reducing radiation dose to the female breast during conventional and dedicated breast computed tomography

    NASA Astrophysics Data System (ADS)

    Rupcich, Franco John

    The purpose of this study was to quantify the effectiveness of techniques intended to reduce dose to the breast during CT coronary angiography (CTCA) scans with respect to task-based image quality, and to evaluate the effectiveness of optimal energy weighting in improving contrast-to-noise ratio (CNR), and thus the potential for reducing breast dose, during energy-resolved dedicated breast CT. A database quantifying organ dose for several radiosensitive organs irradiated during CTCA, including the breast, was generated using Monte Carlo simulations. This database facilitates estimation of organ-specific dose deposited during CTCA protocols using arbitrary x-ray spectra or tube-current modulation schemes without the need to run Monte Carlo simulations. The database was used to estimate breast dose for simulated CT images acquired for a reference protocol and five protocols intended to reduce breast dose. For each protocol, the performance of two tasks (detection of signals with unknown locations) was compared over a range of breast dose levels using a task-based, signal-detectability metric: the estimator of the area under the exponential free-response relative operating characteristic curve, AFE. For large-diameter/medium-contrast signals, when maintaining equivalent AFE, the 80 kV partial, 80 kV, 120 kV partial, and 120 kV tube-current modulated protocols reduced breast dose by 85%, 81%, 18%, and 6%, respectively, while the shielded protocol increased breast dose by 68%. Results for the small-diameter/high-contrast signal followed similar trends, but with smaller magnitude of the percent changes in dose. The 80 kV protocols demonstrated the greatest reduction to breast dose, however, the subsequent increase in noise may be clinically unacceptable. Tube output for these protocols can be adjusted to achieve more desirable noise levels with lesser dose reduction. The improvement in CNR of optimally projection-based and image-based weighted images relative to photon-counting was investigated for six different energy bin combinations using a bench-top energy-resolving CT system with a cadmium zinc telluride (CZT) detector. The non-ideal spectral response reduced the CNR for the projection-based weighted images, while image-based weighting improved CNR for five out of the six investigated bin combinations, despite this non-ideal response, indicating potential for image-based weighting to reduce breast dose during dedicated breast CT.
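    The image-based weighting referred to here can be illustrated with the standard result that, for a weighted sum of independent energy-bin images, weights proportional to each bin's contrast divided by its noise variance maximize CNR. The bin contrasts and noise values in the sketch below are invented, not measurements from the CZT bench-top system.

    # Illustrative sketch of CNR-optimal image-based energy weighting. The bin
    # contrasts and noise values are made up, not data from the study.
    import numpy as np

    contrast = np.array([40.0, 25.0, 12.0])      # contrast per energy bin (example)
    sigma = np.array([8.0, 6.0, 5.0])            # noise std per bin (example)

    def cnr_of_weights(w):
        """CNR of the weighted-sum image for independent bin images."""
        return (w @ contrast) / np.sqrt(((w * sigma) ** 2).sum())

    w_pc = np.ones(3)                            # photon counting: equal weights
    w_opt = contrast / sigma**2                  # CNR-optimal image-based weights

    print(f"photon-counting CNR = {cnr_of_weights(w_pc):.2f}")
    print(f"optimal-weight  CNR = {cnr_of_weights(w_opt):.2f}")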

  18. Directed differentiation of embryonic stem cells using a bead-based combinatorial screening method.

    PubMed

    Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen

    2014-01-01

    We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported.

  19. Bellman Ford algorithm - in Routing Information Protocol (RIP)

    NASA Astrophysics Data System (ADS)

    Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah

    2018-04-01

    A large-scale network needs routing that can handle a large number of users, and one solution is to use a routing protocol. There are two types of routing, static and dynamic. Static routes are entered manually by the network administrator, while dynamic routes are formed automatically from the existing network. Dynamic routing is efficient for large networks because routes are formed automatically. The Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path through the network by leveraging the cost of each link; with the Bellman-Ford algorithm, RIP can thus optimize paths in existing networks.
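    The relaxation step at the heart of the Bellman-Ford algorithm, on which RIP's distance-vector behaviour rests, fits in a few lines. The sketch below is the textbook form on a toy graph, not a RIP implementation (RIP additionally caps the hop-count metric at 15 and exchanges distance vectors between neighbours).

    # Minimal Bellman-Ford sketch (textbook form, not a RIP implementation).
    def bellman_ford(vertices, edges, source):
        """edges: list of (u, v, cost). Returns dict of shortest costs from source."""
        dist = {v: float("inf") for v in vertices}
        dist[source] = 0
        for _ in range(len(vertices) - 1):       # at most |V|-1 relaxation rounds
            for u, v, cost in edges:
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
        # Negative cycles cannot occur with hop-count metrics, so no extra check here.
        return dist

    edges = [("A", "B", 1), ("B", "C", 1), ("A", "C", 3), ("C", "D", 1), ("B", "D", 4)]
    print(bellman_ford({"A", "B", "C", "D"}, edges, "A"))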

  20. An optimized immunohistochemistry protocol for detecting the guidance cue Netrin-1 in neural tissue.

    PubMed

    Salameh, Samer; Nouel, Dominique; Flores, Cecilia; Hoops, Daniel

    2018-01-01

    Netrin-1, an axon guidance protein, is difficult to detect using immunohistochemistry. We performed a multi-step, blinded, and controlled protocol optimization procedure to establish an efficient and effective fluorescent immunohistochemistry protocol for characterizing Netrin-1 expression. Coronal mouse brain sections were used to test numerous antigen retrieval methods and combinations thereof in order to optimize the stain quality of a commercially available Netrin-1 antibody. Stain quality was evaluated by experienced neuroanatomists for two criteria: signal intensity and signal-to-noise ratio. After five rounds of testing protocol variants, we established a modified immunohistochemistry protocol that produced a Netrin-1 signal with good signal intensity and a high signal-to-noise ratio. The key protocol modifications are as follows: • Use phosphate buffer (PB) as the blocking solution solvent. • Use 1% sodium dodecyl sulfate (SDS) treatment for antigen retrieval. The original protocol was optimized for use with the Netrin-1 antibody produced by Novus Biologicals. However, we subsequently further modified the protocol to work with the antibody produced by Abcam. The Abcam protocol uses PBS as the blocking solution solvent and adds a citrate buffer antigen retrieval step.

  1. Nasal irrigation: From empiricism to evidence-based medicine. A review.

    PubMed

    Bastier, P-L; Lechot, A; Bordenave, L; Durand, M; de Gabory, L

    2015-11-01

    Nasal irrigation plays a non-negligible role in the treatment of numerous sinonasal pathologies and postoperative care. There is, however, a wide variety of protocols. The present review of the evidence-based literature sought objective arguments for optimization and efficacy. It emerged that large-volume low-pressure nasal douche optimizes the distribution and cleansing power of the irrigation solution in the nasal cavity. Ionic composition and pH also influence mucociliary clearance and epithelium trophicity. Seawater is less rich in sodium ions and richer in bicarbonates, potassium, calcium and magnesium than is isotonic normal saline, while alkaline pH and elevated calcium concentration optimized ciliary motility in vitro. Bicarbonates reduce secretion viscosity. Potassium and magnesium promote healing and limit local inflammation. These results show that the efficacy of nasal irrigation is multifactorial. Large-volume low-pressure nasal irrigation using undiluted seawater seems, in the present state of knowledge, to be the most effective protocol. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  2. Triage and optimization: A new paradigm in the treatment of massive pulmonary embolism.

    PubMed

    Pasrija, Chetan; Shah, Aakash; George, Praveen; Kronfli, Anthony; Raithel, Maxwell; Boulos, Francesca; Ghoreishi, Mehrdad; Bittle, Gregory J; Mazzeffi, Michael A; Rubinson, Lewis; Gammie, James S; Griffith, Bartley P; Kon, Zachary N

    2018-04-07

    Massive pulmonary embolism (PE) remains a highly fatal condition. Although venoarterial extracorporeal membrane oxygenation (VA-ECMO) and surgical pulmonary embolectomy in the management of massive PE have been reported previously, the outcomes remain less than ideal. We hypothesized that the institution of a protocolized approach of triage and optimization using VA-ECMO would result in improved outcomes compared with historical surgical management. All patients with a massive PE referred to the cardiac surgery service between 2010 and 2017 were retrospectively reviewed. Patients were stratified by treatment strategy: historical control versus the protocolized approach. In the historical control group, the primary intervention was surgical pulmonary embolectomy. In the protocol approach group, patients were treated based on an algorithmic approach using VA-ECMO. The primary outcome was 1-year survival. A total of 56 patients (control, n = 27; protocol, n = 29) were identified. All 27 patients in the historical control group underwent surgical pulmonary embolectomy, whereas 2 of 29 patients in the protocol approach group were deemed appropriate for direct surgical pulmonary embolectomy. The remaining 27 patients were placed on VA-ECMO. In the protocol approach group, 15 of 29 patients were treated with anticoagulation alone and 14 patients ultimately required surgical pulmonary embolectomy. One-year survival was significantly lower in the historical control group compared with the protocol approach group (73% vs 96%; P = .02), with no deaths occurring after surgical pulmonary embolectomy in the protocol approach group. A protocolized strategy involving the aggressive institution of VA-ECMO appears to be an effective method to triage and optimize patients with massive PE to recovery or intervention. Implementation of this strategy rather than an aggressive surgical approach may reduce the mortality associated with massive PE. Copyright © 2018 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  3. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  4. Assisted closed-loop optimization of SSVEP-BCI efficiency

    PubMed Central

    Fernandez-Vargas, Jacobo; Pfaff, Hanns U.; Rodríguez, Francisco B.; Varona, Pablo

    2012-01-01

    We designed a novel assisted closed-loop optimization protocol to improve the efficiency of brain-computer interfaces (BCI) based on steady state visually evoked potentials (SSVEP). In traditional paradigms, the control over the BCI-performance completely depends on the subjects' ability to learn from the given feedback cues. By contrast, in the proposed protocol both the subject and the machine share information and control over the BCI goal. Generally, the innovative assistance consists in the delivery of online information together with the online adaptation of BCI stimuli properties. In our case, this adaptive optimization process is realized by (1) a closed-loop search for the best set of SSVEP flicker frequencies and (2) feedback of actual SSVEP magnitudes to both the subject and the machine. These closed-loop interactions between subject and machine are evaluated in real-time by continuous measurement of their efficiencies, which are used as online criteria to adapt the BCI control parameters. The proposed protocol aims to compensate for variability in possibly unknown subjects' state and trait dimensions. In a study with N = 18 subjects, we found significant evidence that our protocol outperformed classic SSVEP-BCI control paradigms. Evidence is presented that it takes indeed into account interindividual variabilities: e.g., under the new protocol, baseline resting state EEG measures predict subjects' BCI performances. This paper illustrates the promising potential of assisted closed-loop protocols in BCI systems. Probably their applicability might be expanded to innovative uses, e.g., as possible new diagnostic/therapeutic tools for clinical contexts and as new paradigms for basic research. PMID:23443214

  5. Assisted closed-loop optimization of SSVEP-BCI efficiency.

    PubMed

    Fernandez-Vargas, Jacobo; Pfaff, Hanns U; Rodríguez, Francisco B; Varona, Pablo

    2013-01-01

    We designed a novel assisted closed-loop optimization protocol to improve the efficiency of brain-computer interfaces (BCI) based on steady state visually evoked potentials (SSVEP). In traditional paradigms, the control over the BCI-performance completely depends on the subjects' ability to learn from the given feedback cues. By contrast, in the proposed protocol both the subject and the machine share information and control over the BCI goal. Generally, the innovative assistance consists in the delivery of online information together with the online adaptation of BCI stimuli properties. In our case, this adaptive optimization process is realized by (1) a closed-loop search for the best set of SSVEP flicker frequencies and (2) feedback of actual SSVEP magnitudes to both the subject and the machine. These closed-loop interactions between subject and machine are evaluated in real-time by continuous measurement of their efficiencies, which are used as online criteria to adapt the BCI control parameters. The proposed protocol aims to compensate for variability in possibly unknown subjects' state and trait dimensions. In a study with N = 18 subjects, we found significant evidence that our protocol outperformed classic SSVEP-BCI control paradigms. Evidence is presented that it takes indeed into account interindividual variabilities: e.g., under the new protocol, baseline resting state EEG measures predict subjects' BCI performances. This paper illustrates the promising potential of assisted closed-loop protocols in BCI systems. Probably their applicability might be expanded to innovative uses, e.g., as possible new diagnostic/therapeutic tools for clinical contexts and as new paradigms for basic research.

  6. In Situ Chemical Oxidation for Groundwater Remediation: Site-Specific Engineering & Technology Application

    DTIC Science & Technology

    2010-10-01

    Performing organization: Colorado School of Mines, 1500 Illinois St., Golden, CO 80401. Only fragments of this record are recoverable: they refer to an overall ISCO protocol flow diagram and note that laboratory studies may be used to select optimal chemistry parameters to maximize oxidant performance, given the complexity of these oxidants' chemistry and implementation.

  7. Optimization of the primary recovery of human interferon alpha2b from Escherichia coli inclusion bodies.

    PubMed

    Valente, C A; Monteiro, G A; Cabral, J M S; Fevereiro, M; Prazeres, D M F

    2006-01-01

    The human interferon alpha2b (hu-IFNalpha2b) gene was cloned in Escherichia coli JM109(DE3) and the recombinant protein was expressed as cytoplasmic inclusion bodies (IB). The present work discusses the recovery of hu-IFNalpha2b IB from the E. coli cells. An optimized protocol is proposed based on the sequential evaluation of recovery steps and parameters: (i) cell disruption, (ii) IB recovery and separation from cell debris, (iii) IB washing, and (iv) IB solubilization. Parameters such as hu-IFNalpha2b purity and recovery yield were measured after each step. The optimized recovery protocol yielded 60% of hu-IFNalpha2b with a purity of up to 80%. The protein was renatured at high concentration after recovery and it was found to display biological activity.

  8. Directed Differentiation of Embryonic Stem Cells Using a Bead-Based Combinatorial Screening Method

    PubMed Central

    Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J.; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen

    2014-01-01

    We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported. PMID:25251366

  9. Security of two-state and four-state practical quantum bit-commitment protocols

    NASA Astrophysics Data System (ADS)

    Loura, Ricardo; Arsenović, Dušan; Paunković, Nikola; Popović, Duška B.; Prvanović, Slobodan

    2016-12-01

    We study cheating strategies against a practical four-state quantum bit-commitment protocol [A. Danan and L. Vaidman, Quant. Info. Proc. 11, 769 (2012)], 10.1007/s11128-011-0284-4 and its two-state variant [R. Loura et al., Phys. Rev. A 89, 052336 (2014)], 10.1103/PhysRevA.89.052336 when the underlying quantum channels are noisy and the cheating party is constrained to using single-qubit measurements only. We show that simply inferring the transmitted photons' states by using the Breidbart basis, optimal for ambiguous (minimum-error) state discrimination, does not directly produce an optimal cheating strategy for this bit-commitment protocol. We introduce a strategy, based on certain postmeasurement processes and show it to have better chances at cheating than the direct approach. We also study to what extent sending forged geographical coordinates helps a dishonest party in breaking the binding security requirement. Finally, we investigate the impact of imperfect single-photon sources in the protocols. Our study shows that, in terms of the resources used, the four-state protocol is advantageous over the two-state version. The analysis performed can be straightforwardly generalized to any finite-qubit measurement, with the same qualitative results.

  10. An Energy Balanced and Lifetime Extended Routing Protocol for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Lu, Luxi

    2018-05-17

    Energy limitation is an adverse problem in designing routing protocols for underwater sensor networks (UWSNs). To prolong the network lifetime with limited battery power, an energy balanced and efficient routing protocol, called energy balanced and lifetime extended routing protocol (EBLE), is proposed in this paper. The proposed EBLE not only balances traffic loads according to the residual energy, but also optimizes data transmissions by selecting low-cost paths. Two phases are operated in the EBLE data transmission process: (1) candidate forwarding set selection phase and (2) data transmission phase. In candidate forwarding set selection phase, nodes update candidate forwarding nodes by broadcasting the position and residual energy level information. The cost value of available nodes is calculated and stored in each sensor node. Then in data transmission phase, high residual energy and relatively low-cost paths are selected based on the cost function and residual energy level information. We also introduce detailed analysis of optimal energy consumption in UWSNs. Numerical simulation results on a variety of node distributions and data load distributions prove that EBLE outperforms other routing protocols (BTM, BEAR and direct transmission) in terms of network lifetime and energy efficiency.
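    The candidate-forwarding idea described in the abstract can be sketched with a toy cost function. EBLE's published cost function is not reproduced here; the formula below, which trades off progress toward the sink against a residual-energy penalty, is a hypothetical stand-in used only to show how a node would rank its candidate set.

    # Hedged illustration of cost-based next-hop selection. The cost function is
    # a hypothetical stand-in, not EBLE's published formula.
    import math

    def distance(a, b):
        return math.dist(a, b)

    def cost(node, neighbour, sink, residual_energy, full_energy, alpha=0.5):
        """Lower is better: small remaining distance to the sink, high residual energy."""
        progress = distance(neighbour, sink) / distance(node, sink)
        energy_penalty = 1.0 - residual_energy / full_energy
        return alpha * progress + (1 - alpha) * energy_penalty

    node, sink = (0.0, 0.0), (100.0, 0.0)
    candidates = {                       # neighbour position, residual energy (J)
        "n1": ((30.0, 10.0), 40.0),
        "n2": ((25.0, -5.0), 90.0),
        "n3": ((10.0,  0.0), 95.0),
    }
    full = 100.0
    best = min(candidates, key=lambda k: cost(node, candidates[k][0], sink,
                                              candidates[k][1], full))
    print("selected next hop:", best)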

  11. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  12. A comprehensive study on the relationship between the image quality and imaging dose in low-dose cone beam CT

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.

    2012-04-01

    While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4) The clinically acceptable lowest imaging dose level is task dependent. In our study, 72.8 mAs is a safe dose level for visualizing low-contrast objects, while 12.2 total mAs is sufficient for detecting high-contrast objects of diameter greater than 3 mm.

  13. An Adaptive OFDMA-Based MAC Protocol for Underwater Acoustic Wireless Sensor Networks

    PubMed Central

    Khalil, Issa M.; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols. PMID:23012517
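    The optimal power-loading problem mentioned in the abstract is, in its generic form, a water-filling problem over parallel flat-fading sub-channels. The sketch below implements textbook water-filling with made-up channel gains; it omits the underwater-acoustic specifics and the subcarrier-selection logic of the actual protocol.

    # Generic water-filling sketch for power loading across parallel sub-channels;
    # illustrative only, not the authors' underwater-specific derivation.
    import numpy as np

    def water_filling(gains, total_power, noise=1.0):
        """Allocate total_power over sub-channels with power gains `gains`
        to maximize sum log2(1 + g*p/noise). Returns per-channel powers."""
        inv = noise / np.asarray(gains, dtype=float)   # "floor heights" noise/g
        order = np.argsort(inv)
        inv_sorted = inv[order]
        n = len(inv_sorted)
        # Find the water level using the largest active set that keeps powers >= 0.
        for k in range(n, 0, -1):
            level = (total_power + inv_sorted[:k].sum()) / k
            if level > inv_sorted[k - 1]:
                break
        p_sorted = np.maximum(level - inv_sorted, 0.0)
        p = np.empty(n)
        p[order] = p_sorted
        return p

    gains = [2.0, 1.0, 0.3, 0.05]                      # example sub-channel gains
    p = water_filling(gains, total_power=4.0)
    print("power allocation:", np.round(p, 3), " sum =", round(p.sum(), 3))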

  14. An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks.

    PubMed

    Khalil, Issa M; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols.

  15. A Secure Routing Protocol for Wireless Sensor Networks Considering Secure Data Aggregation.

    PubMed

    Rahayu, Triana Mugia; Lee, Sang-Gon; Lee, Hoon-Jae

    2015-06-26

    The commonly unattended and hostile deployments of WSNs and their resource-constrained sensor devices have led to an increasing demand for secure energy-efficient protocols. Routing and data aggregation receive the most attention since they are among the daily network routines. With the awareness of such demand, we found that so far there has been no work that lays out a secure routing protocol as the foundation for a secure data aggregation protocol. We argue that the secure routing role would be rendered useless if the data aggregation scheme built on it is not secure. Conversely, the secure data aggregation protocol needs a secure underlying routing protocol as its foundation in order to be effectively optimal. As an attempt for the solution, we devise an energy-aware protocol based on LEACH and ESPDA that combines secure routing protocol and secure data aggregation protocol. We then evaluate its security effectiveness and its energy-efficiency aspects, knowing that there are always trade-off between both.
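    The protocol is built on LEACH, whose core mechanism is a randomized, rotating cluster-head election. The sketch below reproduces that election threshold only; the secure-routing and ESPDA-style secure-aggregation layers described in the abstract are not modelled.

    # Sketch of LEACH's randomized cluster-head election (base mechanism only).
    import random

    def leach_threshold(p, r, was_ch_recently):
        """LEACH threshold T(n): probability that an eligible node becomes a
        cluster head in round r, with desired cluster-head fraction p."""
        if was_ch_recently:                  # already served in the last 1/p rounds
            return 0.0
        return p / (1 - p * (r % int(1 / p)))

    random.seed(7)
    p, rounds, n_nodes = 0.1, 3, 20
    last_ch_round = {i: -10**9 for i in range(n_nodes)}
    for r in range(rounds):
        heads = []
        for node in range(n_nodes):
            recently = (r - last_ch_round[node]) < int(1 / p)
            if random.random() < leach_threshold(p, r, recently):
                heads.append(node)
                last_ch_round[node] = r
        print(f"round {r}: cluster heads = {heads}")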

  16. A Secure Routing Protocol for Wireless Sensor Networks Considering Secure Data Aggregation

    PubMed Central

    Rahayu, Triana Mugia; Lee, Sang-Gon; Lee, Hoon-Jae

    2015-01-01

    The commonly unattended and hostile deployments of WSNs and their resource-constrained sensor devices have led to an increasing demand for secure energy-efficient protocols. Routing and data aggregation receive the most attention since they are among the daily network routines. With the awareness of such demand, we found that so far there has been no work that lays out a secure routing protocol as the foundation for a secure data aggregation protocol. We argue that the secure routing role would be rendered useless if the data aggregation scheme built on it is not secure. Conversely, the secure data aggregation protocol needs a secure underlying routing protocol as its foundation in order to be effectively optimal. As an attempt for the solution, we devise an energy-aware protocol based on LEACH and ESPDA that combines secure routing protocol and secure data aggregation protocol. We then evaluate its security effectiveness and its energy-efficiency aspects, knowing that there are always trade-off between both. PMID:26131669

  17. Deanne W. Sammond | NREL

    Science.gov Websites

    Interactions," Journal of Biomolecular Structure & Dynamics (2009) "Structure-Based Protocol for from left to right with several dots of multiple colors. "Cellulase Linkers Are Optimized Based on the Sequence and Structure of a Protein-Binding Peptide," Journal of the American Chemical

  18. Cure Cycle Design Methodology for Fabricating Reactive Resin Matrix Fiber Reinforced Composites: A Protocol for Producing Void-free Quality Laminates

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2014-01-01

    For the fabrication of resin matrix fiber reinforced composite laminates, a workable cure cycle (i.e., temperature and pressure profiles as a function of processing time) is needed and is critical for achieving void-free laminate consolidation. Design of such a cure cycle is not trivial, especially when dealing with reactive matrix resins. An empirical "trial and error" approach has been used as common practice in the composite industry. Such an approach is not only costly, but also ineffective at establishing the optimal processing conditions for a specific resin/fiber composite system. In this report, a rational "processing science" based approach is established, and a universal cure cycle design protocol is proposed. Following this protocol, a workable and optimal cure cycle can be readily and rationally designed for most reactive resin systems in a cost effective way. This design protocol has been validated through experimental studies of several reactive polyimide composites for a wide spectrum of usage that has been documented in the previous publications.

  19. A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks

    PubMed Central

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Zou, Jianbin

    2016-01-01

    Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel causes many challenges for the design of reliable underwater data transport protocol. In this paper, we propose an energy efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the throughput and energy consumption tradeoff. The algorithm can adaptively change the code rate and can be insensitive to the environment change. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay. PMID:27618044

  20. Design of optimal hyperthermia protocols for prostate cancer by controlling HSP expression through computer modeling (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Rylander, Marissa N.; Feng, Yusheng; Diller, Kenneth; Bass, J.

    2005-04-01

    Heat shock proteins (HSP) are critical components of a complex defense mechanism essential for preserving cell survival under adverse environmental conditions. It is inevitable that hyperthermia will enhance tumor tissue viability, due to HSP expression in regions where temperatures are insufficient to coagulate proteins, and would likely increase the probability of cancer recurrence. Although hyperthermia therapy is commonly used in conjunction with radiotherapy, chemotherapy, and gene therapy to increase therapeutic effectiveness, the efficacy of these therapies can be substantially hindered due to HSP expression when hyperthermia is applied prior to these procedures. Therefore, in planning hyperthermia protocols, prediction of the HSP response of the tumor must be incorporated into the treatment plan to optimize the thermal dose delivery and permit prediction of overall tissue response. In this paper, we present a highly accurate, adaptive, finite element tumor model capable of predicting the HSP expression distribution and tissue damage region based on measured cellular data when hyperthermia protocols are specified. Cubic spline representations of HSP27 and HSP70, and Arrhenius damage models were integrated into the finite element model to enable prediction of the HSP expression and damage distribution in the tissue following laser heating. Application of the model can enable optimized treatment planning by controlling of the tissue response to therapy based on accurate prediction of the HSP expression and cell damage distribution.
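    The Arrhenius damage model integrated into the finite element model accumulates a damage parameter Omega(t) as the integral of A*exp(-Ea/(R*T(tau))) over the temperature history. The sketch below evaluates this integral for a step heating profile; the frequency factor A and activation energy Ea are placeholder values rather than the tissue-specific constants used in the paper.

    # Hedged sketch of the Arrhenius thermal-damage integral. A and Ea below are
    # placeholder constants, not the tissue-specific values used in the paper.
    import numpy as np

    A = 3.1e98          # 1/s        (hypothetical frequency factor)
    Ea = 6.28e5         # J/mol      (hypothetical activation energy)
    R = 8.314           # J/(mol*K)

    def arrhenius_damage(temps_K, dt):
        """Cumulative damage parameter Omega for a sampled temperature history."""
        rates = A * np.exp(-Ea / (R * np.asarray(temps_K)))
        return np.cumsum(rates) * dt

    t = np.arange(0, 600, 1.0)                       # 10-minute exposure, 1 s steps
    T = 310.0 + 10.0 * (t > 60)                      # step from 37 C to 47 C at t=60 s
    omega = arrhenius_damage(T, dt=1.0)
    print(f"Omega at end of heating = {omega[-1]:.3f}  (Omega=1 ~ 63% cell death)")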

  1. Protein structure modeling for CASP10 by multiple layers of global optimization.

    PubMed

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  2. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.

  3. Squeezed-state quantum key distribution with a Rindler observer

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Shi, Ronghua; Guo, Ying

    2018-03-01

    Lengthening the maximum transmission distance of quantum key distribution plays a vital role in quantum information processing. In this paper, we propose a directional squeezed-state protocol with signals detected by a Rindler observer in the relativistic quantum field framework. We derive an analytical solution to the transmission problem of squeezed states from the inertial sender to the accelerated receiver. The variance of the involved signal mode is closer to optimality than that of the coherent-state-based protocol. Simulation results show that the proposed protocol has better performance than the coherent-state counterpart especially in terms of the maximal transmission distance.

  4. Suppression of work fluctuations by optimal control: An approach based on Jarzynski's equality

    NASA Astrophysics Data System (ADS)

    Xiao, Gaoyang; Gong, Jiangbin

    2014-11-01

    Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, aspects of work fluctuations will be an important factor in designing nanoscale heat engines. In this work, an optimal control approach directly exploiting Jarzynski's equality is proposed to effectively suppress the fluctuations in the work statistics, for systems (initially at thermal equilibrium) subject to a work protocol but isolated from a bath during the protocol. The control strategy is to minimize the deviations of individual values of e^{-βW} from their ensemble average given by e^{-βΔF}, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. It is further shown that even when the system Hamiltonian is not fully known, it is still possible to suppress work fluctuations through a feedback loop, by refining the control target function on the fly through Jarzynski's equality itself. Numerical experiments are based on linear and nonlinear parametric oscillators. Optimal control results for linear parametric oscillators are also benchmarked with early results based on shortcuts to adiabaticity.
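
    A minimal numerical sketch of the control target described above: given sampled work values W from repeated runs of a protocol, the cost penalizes deviations of individual e^{-βW} values from e^{-βΔF}. The toy "work model" below is a hypothetical stand-in for the parametric-oscillator dynamics studied in the paper.

```python
# Sketch of the Jarzynski-based cost used to suppress work fluctuations:
# C = < ( exp(-beta*W) - exp(-beta*dF) )^2 >  over realizations of the protocol.
# The work model is a hypothetical stand-in for real parametric-oscillator dynamics.
import numpy as np

rng = np.random.default_rng(0)
beta, dF = 1.0, 0.5

def sample_work(control_param, n=20000):
    # Hypothetical model: mean dissipated work and its spread both shrink
    # as the control parameter approaches 1 (a stand-in for an optimized ramp).
    spread = 0.5 * abs(1.0 - control_param) + 0.01
    return dF + rng.normal(loc=spread, scale=spread, size=n)

def jarzynski_cost(control_param):
    w = sample_work(control_param)
    return np.mean((np.exp(-beta * w) - np.exp(-beta * dF)) ** 2)

for c in (0.2, 0.6, 1.0):
    print(c, jarzynski_cost(c))   # the cost shrinks as the protocol approaches the optimum
```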

  5. Optimization and experimental validation of a thermal cycle that maximizes entropy coefficient fisher identifiability for lithium iron phosphate cells

    NASA Astrophysics Data System (ADS)

    Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam

    2016-03-01

    This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
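
    A schematic sketch of the identifiability optimization described above: the applied current profile is parameterized by a few Fourier coefficients, a scalar Fisher information for the entropy coefficient is computed from the sensitivity of a lumped thermal model, and the coefficients maximizing that information are selected. The thermal and electrical constants below are hypothetical placeholders, not the LFP cell parameters from the study.

```python
# Schematic Fisher-identifiability optimization of a thermal cycle.  A
# Fourier-parameterized current profile drives a toy lumped thermal model; we
# maximize the Fisher information of the entropy coefficient dU/dT.  All model
# constants are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 10.0, 7200.0                 # 2 h experiment, 10 s steps
t = np.arange(0.0, horizon, dt)
R_th, C_th, T_amb, R_int = 3.0, 50.0, 298.15, 0.02   # hypothetical thermal/electrical params
sigma_T = 0.05                                        # temperature-sensor noise, K

def current_profile(coeffs):
    # Fourier parameterization of the applied current (A).
    w = 2.0 * np.pi / horizon
    return sum(a * np.sin((k + 1) * w * t) for k, a in enumerate(coeffs))

def simulate_temperature(coeffs, dUdT):
    """Euler integration of a lumped thermal model with Joule + entropic heat."""
    I = current_profile(coeffs)
    T = np.empty_like(t); T[0] = T_amb
    for k in range(1, len(t)):
        q = I[k-1]**2 * R_int - I[k-1] * T[k-1] * dUdT      # irreversible + reversible heat
        T[k] = T[k-1] + dt * (q - (T[k-1] - T_amb) / R_th) / C_th
    return T

def neg_fisher_info(coeffs, dUdT=1e-4):
    # Sensitivity dT/d(dUdT) by central differences; scalar Fisher information.
    eps = 1e-6
    s = (simulate_temperature(coeffs, dUdT + eps) - simulate_temperature(coeffs, dUdT - eps)) / (2 * eps)
    return -np.sum(s**2) / sigma_T**2

res = minimize(neg_fisher_info, x0=[5.0, 0.0, 0.0], method="Nelder-Mead", options={"maxiter": 100})
print("optimized Fourier coefficients:", res.x)
```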

  6. Discrete Particle Swarm Optimization Routing Protocol for Wireless Sensor Networks with Multiple Mobile Sinks.

    PubMed

    Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming

    2016-07-14

    Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent changes of the paths between source nodes and the sinks caused by sink mobility introduce significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build optimal routing paths. However, conventional PSO is insufficient for solving discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario, and the particle updating rule is reconsidered based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to better positions quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption.
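
    The sketch below is a heavily simplified, hypothetical discrete-PSO routing loop in the spirit of the GMDPSO idea: a particle's position is a next-hop assignment for every node, the velocity update is replaced by probabilistic adoption of the personal-best and global-best assignments, and the fitness is a toy hop-count cost. The topology, fitness, and update probabilities are illustrative, not the paper's MWSN model.

```python
# Simplified discrete-PSO routing sketch: position = next-hop choice per node,
# "velocity" = probabilistic adoption of personal/global best choices.
# Topology, fitness, and probabilities are toy placeholders.
import random

SINK = 8
NEIGHBORS = {0: [1, 2], 1: [3, SINK], 2: [3, 4], 3: [SINK, 5],
             4: [5], 5: [SINK, 6], 6: [7, SINK], 7: [SINK]}

def random_position():
    return {n: random.choice(NEIGHBORS[n]) for n in NEIGHBORS}

def fitness(position):
    """Toy cost: total hops over all node-to-sink paths (inf if a loop occurs)."""
    total = 0
    for src in NEIGHBORS:
        node, hops = src, 0
        while node != SINK:
            node = position[node]
            hops += 1
            if hops > len(NEIGHBORS):      # loop detected
                return float("inf")
        total += hops
    return total

def update(position, pbest, gbest, w=0.3, c1=0.3, c2=0.3):
    new = {}
    for n in NEIGHBORS:
        r = random.random()
        if r < c2:            new[n] = gbest[n]                     # follow global best
        elif r < c1 + c2:     new[n] = pbest[n]                     # follow personal best
        elif r < w + c1 + c2: new[n] = position[n]                  # inertia: keep current hop
        else:                 new[n] = random.choice(NEIGHBORS[n])  # random exploration
    return new

swarm = [random_position() for _ in range(20)]
pbests = list(swarm)
gbest = min(swarm, key=fitness)
for _ in range(50):
    swarm = [update(p, pb, gbest) for p, pb in zip(swarm, pbests)]
    pbests = [min(pair, key=fitness) for pair in zip(swarm, pbests)]
    gbest = min(pbests + [gbest], key=fitness)
print("best routing cost:", fitness(gbest), gbest)
```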

  7. Broken symmetry in a two-qubit quantum control landscape

    NASA Astrophysics Data System (ADS)

    Bukov, Marin; Day, Alexandre G. R.; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj; Sels, Dries

    2018-05-01

    We analyze the physics of optimal protocols to prepare a target state with high fidelity in a symmetrically coupled two-qubit system. By varying the protocol duration, we find a discontinuous phase transition, which is characterized by a spontaneous breaking of a Z2 symmetry in the functional form of the optimal protocol, and occurs below the quantum speed limit. We study this phase in detail and demonstrate that even though high-fidelity protocols are degenerate with respect to their fidelity, they lead to final states of different entanglement entropy shared between the qubits. Consequently, while globally both optimal protocols are equally far away from the target state, one is locally closer than the other. An approximate variational mean-field theory which captures the physics of the different phases is developed.

  8. Design of durability test protocol for vehicular fuel cell systems operated in power-follow mode based on statistical results of on-road data

    NASA Astrophysics Data System (ADS)

    Xu, Liangfei; Reimer, Uwe; Li, Jianqiu; Huang, Haiyan; Hu, Zunyan; Jiang, Hongliang; Janßen, Holger; Ouyang, Minggao; Lehnert, Werner

    2018-02-01

    City buses using polymer electrolyte membrane (PEM) fuel cells are considered to be the most likely fuel cell vehicles to be commercialized in China. The technical specifications of the fuel cell systems (FCSs) these buses are equipped with will differ based on the powertrain configurations and vehicle control strategies, but can generally be classified into the power-follow and soft-run modes. Each mode imposes different levels of electrochemical stress on the fuel cells. Evaluating the aging behavior of fuel cell stacks under the conditions encountered in fuel cell buses requires new durability test protocols based on statistical results obtained during actual driving tests. In this study, we propose a systematic design method for fuel cell durability test protocols that correspond to the power-follow mode based on three parameters for different fuel cell load ranges. The powertrain configurations and control strategy are described herein, followed by a presentation of the statistical data for the duty cycles of FCSs in one city bus in the demonstration project. Assessment protocols are presented based on the statistical results using mathematical optimization methods, and are compared to existing protocols with respect to common factors, such as time at open circuit voltage and root-mean-square power.

  9. Generation and customization of biosynthetic excitable tissues for electrophysiological studies and cell-based therapies.

    PubMed

    Nguyen, Hung X; Kirkton, Robert D; Bursac, Nenad

    2018-05-01

    We describe a two-stage protocol to generate electrically excitable and actively conducting cell networks with stable and customizable electrophysiological phenotypes. Using this method, we have engineered monoclonally derived excitable tissues as a robust and reproducible platform to investigate how specific ion channels and mutations affect action potential (AP) shape and conduction. In the first stage of the protocol, we combine computational modeling, site-directed mutagenesis, and electrophysiological techniques to derive optimal sets of mammalian and/or prokaryotic ion channels that produce specific AP shape and conduction characteristics. In the second stage of the protocol, selected ion channels are stably expressed in unexcitable human cells by means of viral or nonviral delivery, followed by flow cytometry or antibiotic selection to purify the desired phenotype. This protocol can be used with traditional heterologous expression systems or primary excitable cells, and application of this method to primary fibroblasts may enable an alternative approach to cardiac cell therapy. Compared with existing methods, this protocol generates a well-defined, relatively homogeneous electrophysiological phenotype of excitable cells that facilitates experimental and computational studies of AP conduction and can decrease arrhythmogenic risk upon cell transplantation. Although basic cell culture and molecular biology techniques are sufficient to generate excitable tissues using the described protocol, experience with patch-clamp techniques is required to characterize and optimize derived cell populations.

  10. Design of freeze-drying processes for pharmaceuticals: practical advice.

    PubMed

    Tang, Xiaolin; Pikal, Michael J

    2004-02-01

    Design of freeze-drying processes is often approached with a "trial and error" experimental plan or, worse yet, the protocol used in the first laboratory run is adopted without further attempts at optimization. Consequently, commercial freeze-drying processes are often neither robust nor efficient. It is our thesis that design of an "optimized" freeze-drying process is not particularly difficult for most products, as long as some simple rules based on well-accepted scientific principles are followed. It is the purpose of this review to discuss the scientific foundations of the freeze-drying process design and then to consolidate these principles into a set of guidelines for rational process design and optimization. General advice is given concerning common stability issues with proteins, but unusual and difficult stability issues are beyond the scope of this review. Control of ice nucleation and crystallization during the freezing step is discussed, and the impact of freezing on the rest of the process and final product quality is reviewed. Representative freezing protocols are presented. The significance of the collapse temperature and the thermal transition, denoted Tg', are discussed, and procedures for the selection of the "target product temperature" for primary drying are presented. Furthermore, guidelines are given for selection of the optimal shelf temperature and chamber pressure settings required to achieve the target product temperature without thermal and/or mass transfer overload of the freeze dryer. Finally, guidelines and "rules" for optimization of secondary drying and representative secondary drying protocols are presented.

  11. Automated electrotransformation of Escherichia coli on a digital microfluidic platform using bioactivated magnetic beads.

    PubMed

    Moore, J A; Nemat-Gorgani, M; Madison, A C; Sandahl, M A; Punnamaraju, S; Eckhardt, A E; Pollack, M G; Vigneault, F; Church, G M; Fair, R B; Horowitz, M A; Griffin, P B

    2017-01-01

    This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dielectric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols.

  12. Automated electrotransformation of Escherichia coli on a digital microfluidic platform using bioactivated magnetic beads

    PubMed Central

    Moore, J. A.; Nemat-Gorgani, M.; Madison, A. C.; Punnamaraju, S.; Eckhardt, A. E.; Pollack, M. G.; Church, G. M.; Fair, R. B.; Horowitz, M. A.; Griffin, P. B.

    2017-01-01

    This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dielectric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols. PMID:28191268

  13. A toxicity cost function approach to optimal CPA equilibration in tissues.

    PubMed

    Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali

    2018-02-01

    There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10⁻⁶ cm²/s, 1.2 × 10⁻⁶ cm²/s and 1.3 × 10⁻⁶ cm²/s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
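
    As a minimal sketch of the modeling approach in this record (under stated, hypothetical assumptions), the code below diffuses CPA into a 1-D tissue slab with a stepwise boundary concentration and accumulates a cumulative toxicity cost over depth and time. The diffusivity is of the order measured in the record; the toxicity exponent, slab thickness, and step schedules are illustrative only.

```python
# 1-D CPA equilibration sketch: explicit finite-difference diffusion into a slab
# plus a cumulative toxicity cost.  Diffusivity is of the measured order
# (~1e-6 cm^2/s); the toxicity exponent and step schedules are hypothetical.
import numpy as np

D = 1.0e-6                 # CPA diffusivity, cm^2/s
L, nx = 0.1, 51            # 1 mm slab, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D       # stable explicit time step
alpha = 1.6                # hypothetical toxicity exponent

def run_protocol(steps):
    """steps: list of (boundary concentration [normalized], duration [s])."""
    c, toxicity = np.zeros(nx), 0.0
    for c_bound, duration in steps:
        for _ in range(int(duration / dt)):
            c[0] = c_bound                        # CPA bath at the tissue surface
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = c[-2]                         # no-flux inner boundary
            toxicity += dt * np.mean(c**alpha)    # cumulative toxicity cost
    return c, toxicity

single_step = [(1.0, 3600.0)]
multi_step  = [(0.25, 900.0), (0.5, 900.0), (0.75, 900.0), (1.0, 900.0)]
for name, steps in [("single-step", single_step), ("multi-step", multi_step)]:
    c, tox = run_protocol(steps)
    print(f"{name}: core concentration {c[-1]:.2f}, toxicity cost {tox:.1f}")
```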

  14. Moderate reagent mixing on an orbital shaker reduces the incubation time of enzyme-linked immunosorbent assay.

    PubMed

    Kumar, Saroj; Ahirwar, Rajesh; Rehman, Ishita; Nahar, Pradip

    2017-07-01

    Rapid diagnostic tests can be developed using ELISA for detection of diseases in emergency conditions. Conventional ELISA takes 1-2 days, making it unsuitable for rapid diagnostics. Here, we report the effect of reagent mixing by shaking or vortexing on ELISA assay time. A 48-min ELISA protocol involving 12-min incubations with reagent mixing at 750 rpm at every step was optimized. By contrast, a time-optimized control ELISA performed without mixing required 8 h to produce similar results, a time gain of about 7 h with the developed protocol. Collectively, the findings support the development of ELISA-based rapid diagnostics. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Visibility-Based Hypothesis Testing Using Higher-Order Optical Interference

    NASA Astrophysics Data System (ADS)

    Jachura, Michał; Jarzyna, Marcin; Lipka, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2018-03-01

    Many quantum information protocols rely on optical interference to compare data sets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate the binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.

  16. Efficiency Improvement in a Busy Radiology Practice: Determination of Musculoskeletal Magnetic Resonance Imaging Protocol Using Deep-Learning Convolutional Neural Networks.

    PubMed

    Lee, Young Han

    2018-04-04

    The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural network (CNN) classifier based on short-text classification and to evaluate the agreement between protocols determined by the CNN and those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analysis: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The classification targets were routine versus tumor protocols, and the inputs were word combinations of the referring department, body region, contrast use (or not), gender, and age. A CNN embedded-vector classifier was used with Word2Vec Google News vectors. The test set was evaluated with each classification model, and the results were output as routine or tumor protocols. The CNN determinations were evaluated using receiver operating characteristic (ROC) curves, with radiologist-confirmed protocols as the reference standard. The optimal cut-off value for protocol determination between routine and tumor protocols was 0.5067, with a sensitivity of 92.10%, a specificity of 95.76%, and an area under the curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correctly assigned for pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and its applications could be extended to radiologic tasks other than image interpretation, improving the work performance of radiologists.
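
    A schematic sketch of the kind of short-text CNN classifier this record describes: referral metadata is tokenized, embedded, passed through a 1-D convolution, and scored as routine versus tumor protocol. The tiny vocabulary, layer sizes, and example strings are hypothetical, and the pretrained Word2Vec Google News embeddings used in the study are omitted here for brevity.

```python
# Schematic Keras sketch of a short-text CNN protocol classifier (routine vs.
# tumor MRI protocol).  Layer sizes, vocabulary, and referral strings are
# hypothetical; the published model also initialized embeddings from Word2Vec.
import numpy as np
import tensorflow as tf

texts = ["orthopedics knee no-contrast m 54",      # hypothetical referral metadata
         "oncology pelvis contrast f 61",
         "rheumatology wrist no-contrast f 38",
         "oncology femur contrast m 12"]
labels = np.array([0, 1, 0, 1])                    # 0 = routine, 1 = tumor protocol

vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=8)
vectorizer.adapt(texts)
x = vectorizer(tf.constant(texts))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vectorizer.vocabulary_size(), output_dim=32),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(tumor protocol)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(x, labels, epochs=20, verbose=0)
print(model.predict(x, verbose=0).ravel())            # scores to threshold at a chosen cut-off
```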

  17. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs

    PubMed Central

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-01-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges such as low bandwidth, long propagation delay, high bit error rate, high deployment cost, and irregular topological structure. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects network performance. Avoiding void hole creation yields better coverage of an area, lower energy consumption in the network, and higher throughput. For this purpose, this paper focuses on minimizing the void hole probability, particularly in locally sparse regions. The two-hop adaptive hop-by-hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid void holes with the help of two-hop neighbor node information. The other protocol, quality forwarding adaptive hop-by-hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function. QF-AHH-VBF improves network goodput because of optimal forwarder selection, and aims to reduce the void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is performed. Simulation results show that by adopting these mechanisms, a significant reduction in end-to-end delay and better throughput are achieved in the network. PMID:28763014

  18. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs.

    PubMed

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-08-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges such as low bandwidth, long propagation delay, high bit error rate, high deployment cost, and irregular topological structure. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects network performance. Avoiding void hole creation yields better coverage of an area, lower energy consumption in the network, and higher throughput. For this purpose, this paper focuses on minimizing the void hole probability, particularly in locally sparse regions. The two-hop adaptive hop-by-hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid void holes with the help of two-hop neighbor node information. The other protocol, quality forwarding adaptive hop-by-hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function. QF-AHH-VBF improves network goodput because of optimal forwarder selection, and aims to reduce the void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is performed. Simulation results show that by adopting these mechanisms, a significant reduction in end-to-end delay and better throughput are achieved in the network.

  19. Optimization of coronary attenuation in coronary computed tomography angiography using diluted contrast material.

    PubMed

    Kawaguchi, Naoto; Kurata, Akira; Kido, Teruhito; Nishiyama, Yoshiko; Kido, Tomoyuki; Miyagawa, Masao; Ogimoto, Akiyoshi; Mochizuki, Teruhito

    2014-01-01

    The purpose of this study was to evaluate a personalized protocol with diluted contrast material (CM) for coronary computed tomography angiography (CTA). One hundred patients with suspected coronary artery disease underwent retrospective electrocardiogram-gated coronary CTA on a 256-slice multidetector-row CT scanner. In the diluted CM protocol (n=50), the optimal scan timing and CM dilution rate were determined from the timing bolus scan, with 20% CM dilution (5 ml/s for 10 s) considered suitable to achieve the target arterial attenuation of 350 Hounsfield units (HU). In the body weight (BW)-adjusted protocol (n=50, 222 mg iodine/kg), only the optimal scan timing was determined from the timing bolus scan. The injection rate and volume in the timing bolus scan and the real scan were identical between the 2 protocols. We compared the means and variations in coronary attenuation between the 2 protocols. Coronary attenuation (mean±SD) in the diluted CM and BW-adjusted protocols was 346.1±23.9 HU and 298.8±45.2 HU, respectively. The diluted CM protocol provided significantly higher coronary attenuation and lower variance than the BW-adjusted protocol (P<0.05 for each). The diluted CM protocol facilitates more uniform attenuation on coronary CTA in comparison with the BW-adjusted protocol.
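
    The per-patient dilution step can be illustrated with simple arithmetic: if arterial enhancement is assumed to scale roughly linearly with iodine delivery, the enhancement measured in the timing bolus at a known test dilution is rescaled to hit the 350 HU target. The function and example numbers below are an illustrative sketch, not patient data from the study.

```python
# Back-of-the-envelope dilution choice from a timing-bolus measurement, assuming
# enhancement scales roughly linearly with iodine delivery.  Numbers are illustrative.
def choose_dilution(test_enhancement_hu, test_dilution_pct, target_hu=350.0,
                    unenhanced_hu=40.0):
    """Return the contrast-material dilution (%) predicted to reach target_hu."""
    gain_per_pct = (test_enhancement_hu - unenhanced_hu) / test_dilution_pct
    return (target_hu - unenhanced_hu) / gain_per_pct

# Hypothetical timing bolus with 20% CM measured 290 HU in the aorta:
print(f"use about {choose_dilution(290.0, 20.0):.0f}% contrast for the real scan")
```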

  20. An Efficient Framework Model for Optimizing Routing Performance in VANETs.

    PubMed

    Al-Kharasani, Nori M; Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala; Hanapi, Zurina Mohd

    2018-02-15

    Routing in Vehicular Ad hoc Networks (VANETs) is complicated by their highly dynamic mobility. The efficiency of a routing protocol is influenced by a number of factors, such as network density, bandwidth constraints, traffic load, and mobility patterns, which result in frequent changes in the network topology. Therefore, Quality of Service (QoS) support is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of network resources to reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a simulation stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted costs of the factors into a single value, and an optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED).
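
    As a minimal illustration of the framework's function stage (under hypothetical metric names and weights), the sketch below collapses several normalized performance metrics from one simulated configuration into a single weighted cost that the optimization stage can compare across candidate configurations.

```python
# Toy weighted-cost aggregation for the function stage: normalized metrics from
# one simulated configuration collapse into one scalar for the optimizer.
# Metric names and weights are hypothetical.
def weighted_cost(metrics, weights):
    """metrics/weights: dicts keyed by metric name; lower cost is better."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * metrics[k] for k in weights)

# One candidate routing configuration (metrics normalized to [0, 1], with the
# delivery ratio inverted so that lower is uniformly better).
candidate = {"1 - PDR": 0.12, "NRL": 0.35, "PL": 0.10, "E2ED": 0.22}
weights   = {"1 - PDR": 0.4,  "NRL": 0.2,  "PL": 0.2,  "E2ED": 0.2}
print("aggregated cost:", weighted_cost(candidate, weights))
```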

  1. Optimization of oligonucleotide arrays and RNA amplification protocols for analysis of transcript structure and alternative splicing.

    PubMed

    Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M

    2003-01-01

    Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing.

  2. Optimization of oligonucleotide arrays and RNA amplification protocols for analysis of transcript structure and alternative splicing

    PubMed Central

    Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M

    2003-01-01

    Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing. PMID:14519201

  3. Collective attacks and unconditional security in continuous variable quantum key distribution.

    PubMed

    Grosshans, Frédéric

    2005-01-21

    We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christandl et al. to obtain unconditionally secure rates for these protocols.

  4. Droplet-based pyrosequencing using digital microfluidics.

    PubMed

    Boles, Deborah J; Benton, Jonathan L; Siew, Germaine J; Levy, Miriam H; Thwar, Prasanna K; Sandahl, Melissa A; Rouse, Jeremy L; Perkins, Lisa C; Sudarsan, Arjun P; Jalili, Roxana; Pamula, Vamsee K; Srinivasan, Vijay; Fair, Richard B; Griffin, Peter B; Eckhardt, Allen E; Pollack, Michael G

    2011-11-15

    The feasibility of implementing pyrosequencing chemistry within droplets using electrowetting-based digital microfluidics is reported. An array of electrodes patterned on a printed-circuit board was used to control the formation, transportation, merging, mixing, and splitting of submicroliter-sized droplets contained within an oil-filled chamber. A three-enzyme pyrosequencing protocol was implemented in which individual droplets contained enzymes, deoxyribonucleotide triphosphates (dNTPs), and DNA templates. The DNA templates were anchored to magnetic beads which enabled them to be thoroughly washed between nucleotide additions. Reagents and protocols were optimized to maximize signal over background, linearity of response, cycle efficiency, and wash efficiency. As an initial demonstration of feasibility, a portion of a 229 bp Candida parapsilosis template was sequenced using both a de novo protocol and a resequencing protocol. The resequencing protocol generated over 60 bp of sequence with 100% sequence accuracy based on raw pyrogram levels. Excellent linearity was observed for all of the homopolymers (two, three, or four nucleotides) contained in the C. parapsilosis sequence. With improvements in microfluidic design it is expected that longer reads, higher throughput, and improved process integration (i.e., "sample-to-sequence" capability) could eventually be achieved using this low-cost platform.

  5. Droplet-Based Pyrosequencing Using Digital Microfluidics

    PubMed Central

    Boles, Deborah J.; Benton, Jonathan L.; Siew, Germaine J.; Levy, Miriam H.; Thwar, Prasanna K.; Sandahl, Melissa A.; Rouse, Jeremy L.; Perkins, Lisa C.; Sudarsan, Arjun P.; Jalili, Roxana; Pamula, Vamsee K.; Srinivasan, Vijay; Fair, Richard B.; Griffin, Peter B.; Eckhardt, Allen E.; Pollack, Michael G.

    2013-01-01

    The feasibility of implementing pyrosequencing chemistry within droplets using electrowetting-based digital microfluidics is reported. An array of electrodes patterned on a printed-circuit board was used to control the formation, transportation, merging, mixing, and splitting of submicroliter-sized droplets contained within an oil-filled chamber. A three-enzyme pyrosequencing protocol was implemented in which individual droplets contained enzymes, deoxyribonucleotide triphosphates (dNTPs), and DNA templates. The DNA templates were anchored to magnetic beads which enabled them to be thoroughly washed between nucleotide additions. Reagents and protocols were optimized to maximize signal over background, linearity of response, cycle efficiency, and wash efficiency. As an initial demonstration of feasibility, a portion of a 229 bp Candida parapsilosis template was sequenced using both a de novo protocol and a resequencing protocol. The resequencing protocol generated over 60 bp of sequence with 100% sequence accuracy based on raw pyrogram levels. Excellent linearity was observed for all of the homopolymers (two, three, or four nucleotides) contained in the C. parapsilosis sequence. With improvements in microfluidic design it is expected that longer reads, higher throughput, and improved process integration (i.e., “sample-to-sequence” capability) could eventually be achieved using this low-cost platform. PMID:21932784

  6. AFLP Variation in Populations of Podisus maculiventris

    USDA-ARS?s Scientific Manuscript database

    We are developing methods to reduce costs of mass producing beneficial insect species for biological control programs. One of our methods entails selecting beneficials for optimal production traits. Currently we are selecting for increased fecundity. Selection protocols, whether based on phenotyp...

  7. Optimal port-based teleportation

    NASA Astrophysics Data System (ADS)

    Mozrzymas, Marek; Studziński, Michał; Strelchuk, Sergii; Horodecki, Michał

    2018-05-01

    The deterministic port-based teleportation (dPBT) protocol is a scheme where a quantum state is guaranteed to be transferred to another system without unitary correction. We characterise the best achievable performance of dPBT when both the resource state and the measurement are optimised. Surprisingly, the best possible fidelity for an arbitrary number of ports and dimension of the teleported state is given by the largest eigenvalue of a particular matrix, the Teleportation Matrix. It encodes the relationship between a certain set of Young diagrams and emerges as the optimal solution to the relevant semidefinite programme.

  8. Frozen embryo transfer: a review on the optimal endometrial preparation and timing.

    PubMed

    Mackens, S; Santos-Ribeiro, S; van de Vijver, A; Racca, A; Van Landuyt, L; Tournaye, H; Blockeel, C

    2017-11-01

    What is the optimal endometrial preparation protocol for a frozen embryo transfer (FET)? Although the optimal endometrial preparation protocol for FET needs further research and is yet to be determined, we propose a standardized timing strategy based on the current available evidence which could assist in the harmonization and comparability of clinic practice and future trials. Amid a continuous increase in the number of FET cycles, determining the optimal endometrial preparation protocol has become paramount to maximize ART success. In current daily practice, different FET preparation methods and timing strategies are used. This is a review of the current literature on FET preparation methods, with special attention to the timing of the embryo transfer. Literature on the topic was retrieved in PubMed and references from relevant articles were investigated until June 2017. The number of high quality randomized controlled trials (RCTs) is scarce and, hence, the evidence for the best protocol for FET is poor. Future research should compare both the pregnancy and neonatal outcomes between HRT and true natural cycle (NC) FET. In terms of embryo transfer timing, we propose to start progesterone intake on the theoretical day of oocyte retrieval in HRT and to perform blastocyst transfer at hCG + 7 or LH + 6 in modified or true NC, respectively. As only a few high quality RCTs on the optimal preparation for FET are available in the existing literature, no definitive conclusion for benefit of one protocol over the other can be drawn so far. Caution when using HRT for FET is warranted since the rate of early pregnancy loss is alarmingly high in some reports. S.M. is funded by the Research Fund of Flanders (FWO). H.T. and C.B. report grants from Merck, Goodlife, Besins and Abbott during the conduct of the study. Not applicable. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  9. Somatic hybrid plants of Nicotiana x sanderae (+) N. debneyi with fungal resistance to Peronospora tabacina.

    PubMed

    Patel, Deval; Power, J Brian; Anthony, Paul; Badakshi, Farah; Pat Heslop-Harrison, J S; Davey, Michael R

    2011-10-01

    The genus Nicotiana includes diploid and tetraploid species, with complementary ecological, agronomic and commercial characteristics. The species are of economic value for tobacco, as ornamentals, and for secondary plant-product biosynthesis. They show substantial differences in disease resistance because of their range of secondary products. In the last decade, sexual hybridization and transgenic technologies have tended to eclipse protoplast fusion for gene transfer. Somatic hybridization was exploited in the present investigation to generate a new hybrid combination involving two sexually incompatible tetraploid species. The somatic hybrid plants were characterized using molecular, molecular cytogenetic and phenotypic approaches. Mesophyll protoplasts of the wild fungus-resistant species N. debneyi (2n = 4x = 48) were electrofused with those of the ornamental interspecific sexual hybrid N. × sanderae (2n = 2x = 18). From 1570 protoplast-derived cell colonies selected manually in five experiments, 580 tissues were sub-cultured to shoot regeneration medium. Regenerated plants were transferred to the glasshouse and screened for their morphology, chromosomal composition and disease resistance. Eighty-nine regenerated plants flowered; five were confirmed as somatic hybrids by their intermediate morphology compared with parental plants, cytological constitution and DNA-marker analysis. Somatic hybrid plants had chromosome complements of 60 or 62. Chromosomes were identified to parental genomes by genomic in situ hybridization and included all 18 chromosomes from N. × sanderae, and 42 or 44 chromosomes from N. debneyi. Four or six chromosomes of one ancestral genome of N. debneyi were eliminated during culture of electrofusion-treated protoplasts and plant regeneration. Both chloroplasts and mitochondria of the somatic hybrid plants were probably derived from N. debneyi. All somatic hybrid plants were fertile. In contrast to parental plants of N. × sanderae, the seed progeny of somatic hybrid plants were resistant to infection by Peronospora tabacina, a trait introgressed from the wild parent, N. debneyi. Sexual incompatibility between N. × sanderae and N. debneyi was circumvented by somatic hybridization involving protoplast fusion. Asymmetrical nuclear hybridity was seen in the hybrids with loss of chromosomes, although importantly, somatic hybrids were fertile and stable. Expression of fungal resistance makes these somatic hybrids extremely valuable germplasm in future breeding programmes in ornamental tobacco.

  10. Generation of Oligodendrogenic Spinal Neural Progenitor Cells From Human Induced Pluripotent Stem Cells.

    PubMed

    Khazaei, Mohamad; Ahuja, Christopher S; Fehlings, Michael G

    2017-08-14

    This unit describes protocols for the efficient generation of oligodendrogenic neural progenitor cells (o-NPCs) from human induced pluripotent stem cells (hiPSCs). Specifically, detailed methods are provided for the maintenance and differentiation of hiPSCs, human induced pluripotent stem cell-derived neural progenitor cells (hiPS-NPCs), and human induced pluripotent stem cell-oligodendrogenic neural progenitor cells (hiPSC-o-NPCs) with the final products being suitable for in vitro experimentation or in vivo transplantation. Throughout, cell exposure to growth factors and patterning morphogens has been optimized for both concentration and timing, based on the literature and empirical experience, resulting in a robust and highly efficient protocol. Using this derivation procedure, it is possible to obtain millions of oligodendrogenic-NPCs within 40 days of initial cell plating which is substantially shorter than other protocols for similar cell types. This protocol has also been optimized to use translationally relevant human iPSCs as the parent cell line. The resultant cells have been extensively characterized both in vitro and in vivo and express key markers of an oligodendrogenic lineage. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley and Sons, Inc.

  11. How to make optimal use of maximal multipartite entanglement in clock synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Changliang; Hofmann, Holger F.

    2014-12-04

    We introduce a multi-party quantum clock synchronization protocol that makes optimal use of the maximal multipartite entanglement of GHZ-type states. The measurement statistics of the protocol are analyzed and the efficiency is evaluated.

  12. A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang

    2017-08-08

    Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and jointly decodes packets using encoded packets received from multiple potential nodes across the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations of the underwater environment in Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
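
    A toy illustration of the network-coding idea NCRP builds on: a relay XORs two equal-length packets bound for the sink, and a receiver that already holds either original recovers the other from the single coded transmission. The fixed-length payloads are a hypothetical simplification of the protocol's actual encoding.

```python
# Toy XOR network coding: one coded packet replaces two, and a receiver holding
# either original can recover the other.  Equal-length payloads are assumed.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"SENSOR-A temp=14.2 "
p2 = b"SENSOR-B depth=230m"        # padded to the same length as p1
coded = xor_bytes(p1, p2)          # what the relay transmits

# The sink overheard p1 earlier, so one coded transmission also yields p2:
recovered = xor_bytes(coded, p1)
assert recovered == p2
print(recovered.decode())
```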

  13. SU-F-I-46: Optimizing Dose Reduction in Adult Head CT Protocols While Maintaining Image Quality in Postmortem Head Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnharski, I; Carranza, C; Quails, N

    Purpose: To optimize adult head CT protocol by reducing dose to an appropriate level while providing CT images of diagnostic quality. Methods: Five cadavers were scanned from the skull base to the vertex using a routine adult head CT protocol (120 kVp, 270 mA, 0.75 s rotation, 0.5 mm × 32 detectors, 70.8 mGy CTDIvol) followed by seven reduced-dose protocols with varying combinations of reduced tube current, reduced rotation time, and increased detectors with CTDIvol ranging from 38.2 to 65.6 mGy. Organ doses were directly measured with 21 OSL dosimeters placed on the surface and implanted in the head by a neurosurgeon. Two neuroradiologists assessed grey-white matter differentiation, fluid space, ventricular size, midline shift, brain mass, edema, ischemia, and skull fractures on a three point scale: (1) Unacceptable, (2) Borderline Acceptable, and (3) Acceptable. Results: For the standard scan, doses to the skin, lens of the eye, salivary glands, thyroid, and brain were 37.55 mGy, 49.65 mGy, 40.67 mGy, 4.63 mGy, and 27.33 mGy, respectively. Two cadavers had cerebral edema due to changing dynamics of postmortem effects, causing the grey-white matter differentiation to appear less distinct. Two cadavers with preserved grey-white matter received acceptable scores for all image quality features for the protocol with a CTDIvol of 57.3 mGy, allowing organ dose savings ranging from 34% to 45%. One cadaver allowed for greater dose reduction for the protocol with a CTDIvol of 42 mGy. Conclusion: Efforts to optimize scan protocol should consider both dose and clinical image quality. This is made possible with postmortem subjects, whose brains are similar to patients, allowing for an investigation of ideal scan parameters. Radiologists at our institution accepted scan protocols acquired with lower scan parameters, with CTDIvol values closer to the American College of Radiology’s (ACR) Achievable Dose level of 57 mGy.

  14. SPOT: Optimization Tool for Network Adaptable Security

    NASA Astrophysics Data System (ADS)

    Ksiezopolski, Bogdan; Szalachowski, Pawel; Kotulski, Zbigniew

    Recently we have observed the growth of intelligent applications, especially mobile ones, collectively called e-anything. Implementing these applications requires guaranteeing the security requirements of the cryptographic protocols they use. Traditionally, the protocols have been configured with the strongest possible security mechanisms. Unfortunately, when an application is used on mobile devices, the strongest protection can lead to a denial of service for them. The solution to this problem is to introduce quality-of-protection models that scale the protection level depending on the actual threat level. In this article we introduce an application which manages the protection level of processes in a mobile environment. The Security Protocol Optimizing Tool (SPOT) optimizes a cryptographic protocol and selects the protocol version appropriate to the actual threat level. The architecture of SPOT is presented, with a detailed description of the included modules.

  15. Comparison of different tissue clearing methods and 3D imaging techniques for visualization of GFP-expressing mouse embryos and embryonic hearts.

    PubMed

    Kolesová, Hana; Čapek, Martin; Radochová, Barbora; Janáček, Jiří; Sedmera, David

    2016-08-01

    Our goal was to find an optimal tissue clearing protocol for whole-mount imaging of embryonic and adult hearts and whole embryos of transgenic mice that would preserve green fluorescent protein (GFP) fluorescence and permit comparison of different currently available 3D imaging modalities. We tested various published organic solvent- or water-based clearing protocols intended to preserve GFP fluorescence in the central nervous system: the tetrahydrofuran dehydration and dibenzyl ether (DBE) protocol, SCALE, CLARITY, and CUBIC, and evaluated their ability to render hearts and whole embryos transparent. The DBE clearing protocol did not preserve GFP fluorescence; in addition, DBE caused considerable tissue-shrinking artifacts compared to the gold-standard BABB protocol. The CLARITY method considerably improved tissue transparency at later stages, but also decreased GFP fluorescence intensity. The SCALE clearing resulted in sufficient tissue transparency up to ED12.5; at later stages the useful depth of imaging was limited by tissue light scattering. The best method for the cardiac specimens proved to be the CUBIC protocol, which preserved GFP fluorescence well and cleared the specimens sufficiently even at the adult stages. In addition, CUBIC decolorized the blood and myocardium by removing tissue iron. Good 3D renderings of whole fetal hearts and embryos were obtained with optical projection tomography and selective plane illumination microscopy, although at resolutions lower than with a confocal microscope. Comparison of five tissue clearing protocols and three imaging methods for studying GFP mouse embryos and hearts shows that the optimal method depends on the stage and the level of detail required.

  16. Sensitivity regularization of the Cramér-Rao lower bound to minimize B1 nonuniformity effects in quantitative magnetization transfer imaging.

    PubMed

    Boudreau, Mathieu; Pike, G Bruce

    2018-05-07

    To develop and validate a regularization approach for optimizing the B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNR and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs, and σF did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for improving the robustness of qMT parameters, particularly the pool-size ratio (F), to errors in auxiliary measurements (e.g., B1). Because protocols optimized with this method are predicted to have substantially less B1 sensitivity, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.
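
    A schematic numpy sketch of the regularized design criterion described above: for a candidate protocol, the CRLB of the parameter of interest is computed from the Jacobian of a signal model, and a term proportional to the squared B1 sensitivity of that parameter's least-squares estimate is added before protocols are compared. The toy two-parameter signal model and all numbers are hypothetical stand-ins for the full qMT model.

```python
# Sketch of a B1-regularized CRLB design criterion.  A toy two-parameter model
# stands in for the full qMT signal model: cost = CRLB of the first parameter
# (the "F" analogue) + lambda * (its B1 sensitivity)^2.  All values are hypothetical.
import numpy as np

theta = np.array([0.15, 2.0])        # ["F"-like ratio, relaxation-like rate]
sigma, lam = 0.01, 5e-4              # noise level and regularization weight

def signal(theta, x, b1=1.0):
    # Hypothetical saturation-style response modulated by B1.
    f, k = theta
    return 1.0 - f * (1.0 - np.exp(-k * (b1 * x) ** 2))

def design_cost(x):
    eps = 1e-6
    # Jacobian with respect to the model parameters (central differences).
    J = np.column_stack([
        (signal(theta + eps * e, x) - signal(theta - eps * e, x)) / (2 * eps)
        for e in np.eye(2)])
    crlb = sigma**2 * np.linalg.inv(J.T @ J)              # parameter covariance lower bound
    # Sensitivity of the least-squares estimate of theta to a B1 error.
    dy_db1 = (signal(theta, x, 1.0 + eps) - signal(theta, x, 1.0 - eps)) / (2 * eps)
    dtheta_db1 = np.linalg.solve(J.T @ J, J.T @ dy_db1)
    return crlb[0, 0] + lam * dtheta_db1[0] ** 2          # CRLB(F) + B1-sensitivity penalty

uniform   = np.linspace(0.1, 1.0, 10)                     # 10-point "uniform" protocol
candidate = np.concatenate([np.full(5, 0.3), np.full(5, 0.9)])
print("uniform protocol cost:  ", design_cost(uniform))
print("candidate protocol cost:", design_cost(candidate))
```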

  17. Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Jian; Kim, Minho; Peters, Jorg

    2009-12-15

    Purpose: Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in intensity-based rigid 2D-3D registration using a variation of Skerl's similarity measure evaluation protocol. Methods: The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties, an approach that is objective and independent of the optimization method. The protocol variation improves the quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of the digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient. Results: Both the phantom and the patient studies showed that 2D-3D registration using the GPU-based DRR calculation yielded better robustness, while providing similar accuracy compared to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function value change near the global maximum requires a stricter termination condition for an optimization method. The phantom study using MV imaging indicated that PI, GD, and GC have the best accuracy, while NCC and NMI have the best robustness. The clinical study using MV imaging showed that NCC and NMI have the best robustness. Conclusions: The authors evaluated the performance of seven similarity measures for use in 2D-3D image registration using the variation of Skerl's similarity measure evaluation protocol. The generalized methodology can be used to select the best similarity measures, determine the optimal or near optimal choice of parameters, and choose the appropriate registration strategy for the end user's specific registration applications in medical imaging.
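
    For reference, the sketch below implements one of the evaluated similarity measures, normalized cross correlation, between a DRR and an acquired 2-D image, here represented as plain numpy arrays. The test images are synthetic placeholders.

```python
# Minimal numpy implementation of normalized cross correlation (NCC), one of the
# evaluated similarity measures, between a DRR and an acquired 2-D image.
# The test images below are synthetic placeholders.
import numpy as np

def ncc(drr, image):
    """Normalized cross correlation in [-1, 1]; 1 means a perfect linear match."""
    a = drr - drr.mean()
    b = image - image.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

rng = np.random.default_rng(1)
drr = rng.random((128, 128))
well_aligned = 0.8 * drr + 0.1 + 0.02 * rng.random((128, 128))   # linearly related + noise
misaligned   = np.roll(drr, 15, axis=1)                          # shifted copy
print("aligned NCC:", ncc(drr, well_aligned))
print("shifted NCC:", ncc(drr, misaligned))
```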

  18. Discrete Particle Swarm Optimization Routing Protocol for Wireless Sensor Networks with Multiple Mobile Sinks

    PubMed Central

    Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming

    2016-01-01

    Mobile sinks can achieve load-balancing and energy-consumption balancing across the wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization algorithm (PSO) to build the optimal routing paths. However, the conventional PSO is insufficient to solve discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to find better positions quickly. Furthermore, searching history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves the robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption. PMID:27428971
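
    The greedy forwarding rule that the GMDPSO search strategy builds on can be sketched as follows; node coordinates and the Euclidean distance metric are illustrative assumptions, not details taken from the paper:

    ```python
    import math

    def greedy_next_hop(current, neighbors, sink):
        """Classic greedy geographic forwarding: pick the neighbor closest to the
        sink, or return None if no neighbor is closer than the current node
        (a local minimum, where a recovery strategy would take over)."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        best = min(neighbors, key=lambda n: dist(n, sink), default=None)
        if best is not None and dist(best, sink) < dist(current, sink):
            return best
        return None
    ```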

  19. Isolation and purification of rabbit mesenchymal stem cells using an optimized protocol.

    PubMed

    Lin, Chunbo; Shen, Maorong; Chen, Weiping; Li, Xiaofeng; Luo, Daoming; Cai, Jinhong; Yang, Yuan

    2015-11-01

    Mesenchymal stem cells were first isolated and grown in vitro by Friedenstein over 40 yr ago; however, their isolation remains challenging as they lack unique markers for identification and are present in very small quantities in mesenchymal tissues and bone marrow. Using whole marrow samples, common methods for mesenchymal stem cell isolation are the adhesion method and density gradient fractionation. The whole marrow sample adhesion method still results in the nonspecific isolation of mononuclear cells, and activation and/or potential loss of target cells. Density gradient fractionation methods are complicated, and may result in contamination with toxic substances that affect cell viability. In the present study, we developed an optimized protocol for the isolation and purification of mesenchymal stem cells based on the principles of hypotonic lysis and natural sedimentation.

  20. An update on the use of massive transfusion protocols in obstetrics.

    PubMed

    Pacheco, Luis D; Saade, George R; Costantine, Maged M; Clark, Steven L; Hankins, Gary D V

    2016-03-01

    Obstetrical hemorrhage remains a leading cause of maternal mortality worldwide. New concepts involving the pathophysiology of hemorrhage have been described and include early activation of both the protein C and fibrinolytic pathways. New strategies in hemorrhage treatment include the use of hemostatic resuscitation, although the optimal ratio to administer the various blood products is still unknown. Massive transfusion protocols involve the early utilization of blood products and limit the traditional approach of early massive crystalloid-based resuscitation. The evidence behind hemostatic resuscitation has changed in the last few years, and debate is ongoing regarding optimal transfusion strategies. The use of tranexamic acid, fibrinogen concentrates, and prothrombin complex concentrates has emerged as new potential alternative treatment strategies with improved safety profiles. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Optimization of protocol design: a path to efficient, lower cost clinical trial execution

    PubMed Central

    Malikova, Marina A

    2016-01-01

    Managing clinical trials requires strategic planning and efficient execution. In order to achieve a timely delivery of important clinical trials’ outcomes, it is useful to establish standardized trial management guidelines and develop a robust scoring methodology for evaluation of study protocol complexity. This review will explore the challenges clinical teams face in developing protocols to ensure that the right patients are enrolled and the right data are collected to demonstrate that a drug is safe and efficacious, while managing study costs and study complexity based on a proposed comprehensive scoring model. Key factors to consider when developing protocols and techniques to minimize complexity will be discussed. A methodology to identify processes at the planning phase, together with approaches to increase fiscal return and mitigate fiscal compliance risk for clinical trials, will be addressed. PMID:28031939

  2. Peptide-MHC Class I Tetramers Can Fail To Detect Relevant Functional T Cell Clonotypes and Underestimate Antigen-Reactive T Cell Populations.

    PubMed

    Rius, Cristina; Attaf, Meriem; Tungatt, Katie; Bianchi, Valentina; Legut, Mateusz; Bovay, Amandine; Donia, Marco; Thor Straten, Per; Peakman, Mark; Svane, Inge Marie; Ott, Sascha; Connor, Tom; Szomolay, Barbara; Dolton, Garry; Sewell, Andrew K

    2018-04-01

    Peptide-MHC (pMHC) multimers, usually used as streptavidin-based tetramers, have transformed the study of Ag-specific T cells by allowing direct detection, phenotyping, and enumeration within polyclonal T cell populations. These reagents are now a standard part of the immunology toolkit and have been used in many thousands of published studies. Unfortunately, the TCR-affinity threshold required for staining with standard pMHC multimer protocols is higher than that required for efficient T cell activation. This discrepancy makes it possible for pMHC multimer staining to miss fully functional T cells, especially where low-affinity TCRs predominate, such as in MHC class II-restricted responses or those directed against self-antigens. Several recent, somewhat alarming, reports indicate that pMHC staining might fail to detect the majority of functional T cells and have prompted suggestions that T cell immunology has become biased toward the type of cells amenable to detection with multimeric pMHC. We use several viral- and tumor-specific pMHC reagents to compare populations of human T cells stained by standard pMHC protocols and optimized protocols that we have developed. Our results confirm that optimized protocols recover greater populations of T cells that include fully functional T cell clonotypes that cannot be stained by regular pMHC-staining protocols. These results highlight the importance of using optimized procedures that include the use of protein kinase inhibitor and Ab cross-linking during staining to maximize the recovery of Ag-specific T cells and serve to further highlight that many previous quantifications of T cell responses with pMHC reagents are likely to have considerably underestimated the size of the relevant populations. Copyright © 2018 The Authors.

  3. Towards an improved LAI collection protocol via simulated field-based PAR sensing

    DOE PAGES

    Yao, Wei; Van Leeuwen, Martin; Romanczyk, Paul; ...

    2016-07-14

    In support of NASA's next-generation spectrometer, the Hyperspectral Infrared Imager (HyspIRI), we are working towards assessing sub-pixel vegetation structure from imaging spectroscopy data. Of particular interest is Leaf Area Index (LAI), which is an informative, yet notoriously challenging parameter to efficiently measure in situ. While photosynthetically-active radiation (PAR) sensors have been validated for measuring crop LAI, there is limited literature on the efficacy of PAR-based LAI measurement in the forest environment. This study (i) validates PAR-based LAI measurement in forest environments, and (ii) proposes a suitable collection protocol, which balances efficiency with measurement variation, e.g., due to sun flecks and various-sized canopy gaps. A synthetic PAR sensor model was developed in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and used to validate LAI measurement based on first principles and explicitly known leaf geometry. Simulated collection parameters were adjusted to empirically identify optimal collection protocols. These collection protocols were then validated in the field by correlating PAR-based LAI measurement with the normalized difference vegetation index (NDVI) extracted from the “classic” Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) data (R2 = 0.61). The results indicate that our proposed collection protocol is suitable for measuring the LAI of sparse forest (LAI < 3–5 m2/m2).
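
    PAR-based LAI retrieval of the kind simulated here is commonly based on inverting the Beer-Lambert law for canopy light transmittance; the sketch below uses an assumed extinction coefficient k = 0.5, which is an illustrative value and not one taken from the study:

    ```python
    import math

    def lai_from_par(par_below: float, par_above: float, k: float = 0.5) -> float:
        """Estimate LAI from below- and above-canopy PAR via Beer-Lambert:
        transmittance = exp(-k * LAI), so LAI = -ln(transmittance) / k."""
        transmittance = par_below / par_above
        return -math.log(transmittance) / k

    # Example: 10% transmittance with k = 0.5 gives LAI of roughly 4.6 m2/m2
    print(round(lai_from_par(100.0, 1000.0), 2))
    ```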

  4. A market-based optimization approach to sensor and resource management

    NASA Astrophysics Data System (ADS)

    Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.

    2006-05-01

    Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.

  5. Potential Projective Material on the Rorschach: Comparing Comprehensive System Protocols to Their Modeled R-Optimized Administration Counterparts.

    PubMed

    Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de

    2016-01-01

    Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences in the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.

  6. Accounting for patient size in the optimization of dose and image quality of pelvis cone beam CT protocols on the Varian OBI system.

    PubMed

    Wood, Tim J; Moore, Craig S; Horsfield, Carl J; Saunderson, John R; Beavis, Andrew W

    2015-01-01

    The purpose of this study was to develop size-based radiotherapy kilovoltage cone beam CT (CBCT) protocols for the pelvis. Image noise was measured in an elliptical phantom of varying size for a range of exposure factors. Based on a previously defined "small pelvis" reference patient and CBCT protocol, appropriate exposure factors for small, medium, large and extra-large patients were derived which approximate the image noise behaviour observed on a Philips CT scanner (Philips Medical Systems, Best, Netherlands) with automatic exposure control (AEC). Selection criteria, based on maximum tube current-time product per rotation selected during the radiotherapy treatment planning scan, were derived based on an audit of patient size. It has been demonstrated that 110 kVp yields acceptable image noise for reduced patient dose in pelvic CBCT scans of small, medium and large patients, when compared with manufacturer's default settings (125 kVp). Conversely, extra-large patients require increased exposure factors to give acceptable images. 57% of patients in the local population now receive much lower radiation doses, whereas 13% require higher doses (but now yield acceptable images). The implementation of size-based exposure protocols has significantly reduced radiation dose to the majority of patients with no negative impact on image quality. Increased doses are required on the largest patients to give adequate image quality. The development of size-based CBCT protocols that use the planning CT scan (with AEC) to determine which protocol is appropriate ensures adequate image quality whilst minimizing patient radiation dose.
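
    The size-based selection described above amounts to a simple lookup keyed on the maximum tube current-time product per rotation chosen by the planning scan's AEC; the thresholds and exposure settings in the sketch below are placeholders for illustration, not the published values:

    ```python
    def select_cbct_protocol(max_planning_mas: float) -> dict:
        """Map the maximum mAs per rotation from the AEC planning CT to a
        size-based pelvis CBCT protocol. All numbers are illustrative placeholders."""
        if max_planning_mas < 150:
            return {"size": "small", "kVp": 110, "mAs": 400}
        elif max_planning_mas < 250:
            return {"size": "medium", "kVp": 110, "mAs": 640}
        elif max_planning_mas < 350:
            return {"size": "large", "kVp": 110, "mAs": 1040}
        else:
            return {"size": "extra-large", "kVp": 125, "mAs": 1680}
    ```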

  7. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE PAGES

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...

    2017-05-18

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
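
    A bang-bang protocol of the kind found optimal here can be represented simply as a sequence of square pulses; the two-level (0/1) parametrization and the helper below are illustrative assumptions rather than the authors' construction:

    ```python
    from typing import List, Tuple

    def bang_bang_schedule(switch_times: List[float], total_time: float,
                           start_high: bool = True) -> List[Tuple[float, int]]:
        """Build a square-pulse (bang-bang) control schedule: the control sits at
        one of its two extreme values (1 or 0) and toggles at each switch time.
        Returns a list of (pulse duration, control level) pairs."""
        times = [0.0] + sorted(switch_times) + [total_time]
        level = 1 if start_high else 0
        schedule = []
        for t0, t1 in zip(times[:-1], times[1:]):
            schedule.append((t1 - t0, level))
            level = 1 - level
        return schedule

    # Example: three switches within a total evolution time of 1.0
    print(bang_bang_schedule([0.2, 0.5, 0.9], 1.0))
    ```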

  8. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  9. Aggregating quantum repeaters for the quantum internet

    NASA Astrophysics Data System (ADS)

    Azuma, Koji; Kato, Go

    2017-09-01

    The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.

  10. Network-Friendly Gossiping

    NASA Astrophysics Data System (ADS)

    Serbu, Sabina; Rivière, Étienne; Felber, Pascal

    The emergence of large-scale distributed applications based on many-to-many communication models, e.g., broadcast and decentralized group communication, has an important impact on the underlying layers, notably the Internet routing infrastructure. To make an effective use of network resources, protocols should both limit the stress (number of messages) on each infrastructure entity like routers and links, and balance as much as possible the load in the network. Most protocols use application-level metrics such as delays to improve efficiency of content dissemination or routing, but the extent to which such application-centric optimizations help reduce and balance the load imposed on the infrastructure is unclear. In this paper, we elaborate on the design of such network-friendly protocols and associated metrics. More specifically, we investigate random-based gossip dissemination. We propose and evaluate different ways of making this representative protocol network-friendly while keeping its desirable properties (robustness and low delays). Simulations of the proposed methods using synthetic and real network topologies demonstrate and compare their abilities to reduce and balance the load while keeping good performance.

  11. Optimization and evaluation of T7 based RNA linear amplification protocols for cDNA microarray analysis

    PubMed Central

    Zhao, Hongjuan; Hastie, Trevor; Whitfield, Michael L; Børresen-Dale, Anne-Lise; Jeffrey, Stefanie S

    2002-01-01

    Background T7 based linear amplification of RNA is used to obtain sufficient antisense RNA for microarray expression profiling. We optimized and systematically evaluated the fidelity and reproducibility of different amplification protocols using total RNA obtained from primary human breast carcinomas and high-density cDNA microarrays. Results Using an optimized protocol, the average correlation coefficient of gene expression of 11,123 cDNA clones between amplified and unamplified samples is 0.82 (0.85 when a virtual array was created using repeatedly amplified samples to minimize experimental variation). Less than 4% of genes show changes in expression level by 2-fold or greater after amplification compared to unamplified samples. Most changes due to amplification are not systematic both within one tumor sample and between different tumors. Amplification appears to dampen the variation of gene expression for some genes when compared to unamplified poly(A)+ RNA. The reproducibility between repeatedly amplified samples is 0.97 when performed on the same day, but drops to 0.90 when performed weeks apart. The fidelity and reproducibility of amplification is not affected by decreasing the amount of input total RNA in the 0.3–3 micrograms range. Adding template-switching primer, DNA ligase, or column purification of double-stranded cDNA does not improve the fidelity of amplification. The correlation coefficient between amplified and unamplified samples is higher when total RNA is used as template for both experimental and reference RNA amplification. Conclusion T7 based linear amplification reproducibly generates amplified RNA that closely approximates original sample for gene expression profiling using cDNA microarrays. PMID:12445333

  12. QoS and energy aware cooperative routing protocol for wildfire monitoring wireless sensor networks.

    PubMed

    Maalej, Mohamed; Cherif, Sofiane; Besbes, Hichem

    2013-01-01

    Wireless sensor networks (WSN) are presented as a proper solution for wildfire monitoring. However, this application requires a design of WSN taking into account the network lifetime and the shadowing effect generated by the trees in the forest environment. Cooperative communication is a promising solution for WSN which uses, at each hop, the resources of multiple nodes to transmit its data. Thus, by sharing resources between nodes, the transmission quality is enhanced. In this paper, we use the technique of reinforcement learning by opponent modeling to optimize a cooperative communication protocol based on RSSI and node energy consumption in a competitive context (RSSI/energy-CC), that is, an energy- and quality-of-service-aware cooperative communication routing protocol. Simulation results show that the proposed algorithm performs well in terms of network lifetime, packet delay, and energy consumption.

  13. Coherence in quantum estimation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  14. Self-adaptive trust based ABR protocol for MANETs using Q-learning.

    PubMed

    Kumar, Anitha Vijaya; Jeyapal, Akilandeswari

    2014-01-01

    Mobile ad hoc networks (MANETs) are a collection of mobile nodes with a dynamic topology. MANETs work under scalable conditions for many applications and pose different security challenges. Due to the nomadic nature of nodes, detecting misbehaviour is a complex problem. Nodes also share routing information among the neighbours in order to find the route to the destination. This requires nodes to trust each other. Thus we can state that trust is a key concept in secure routing mechanisms. A number of cryptographic protection techniques based on trust have been proposed. Q-learning is a recently used technique to achieve adaptive trust in MANETs. In comparison with other machine learning and computational intelligence techniques, Q-learning achieves optimal results. Our work focuses on computing a score using Q-learning to weigh the trust of a particular node over the associativity based routing (ABR) protocol. Thus a secure and stable route is calculated as a weighted average of the trust values of the nodes in the route, and associativity ticks ensure the stability of the route. Simulation results show that the Q-learning based trust ABR protocol improves packet delivery ratio by 27% and reduces the route selection time by 40% over the ABR protocol without trust calculation.
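
    A minimal sketch of the tabular Q-learning update that such a trust score could rest on; the state/action encoding, learning rate, and reward definition are assumptions for illustration, not the authors' exact scheme:

    ```python
    def q_update(q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        """Tabular Q-learning update:
        Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a)).
        q is a dict mapping (state, action) -> value; the learned value can then
        be combined with associativity ticks to rank candidate routes."""
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        return q[(state, action)]
    ```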

  15. Self-Adaptive Trust Based ABR Protocol for MANETs Using Q-Learning

    PubMed Central

    Jeyapal, Akilandeswari

    2014-01-01

    Mobile ad hoc networks (MANETs) are a collection of mobile nodes with a dynamic topology. MANETs work under scalable conditions for many applications and pose different security challenges. Due to the nomadic nature of nodes, detecting misbehaviour is a complex problem. Nodes also share routing information among the neighbours in order to find the route to the destination. This requires nodes to trust each other. Thus we can state that trust is a key concept in secure routing mechanisms. A number of cryptographic protection techniques based on trust have been proposed. Q-learning is a recently used technique to achieve adaptive trust in MANETs. In comparison with other machine learning and computational intelligence techniques, Q-learning achieves optimal results. Our work focuses on computing a score using Q-learning to weigh the trust of a particular node over the associativity based routing (ABR) protocol. Thus a secure and stable route is calculated as a weighted average of the trust values of the nodes in the route, and associativity ticks ensure the stability of the route. Simulation results show that the Q-learning based trust ABR protocol improves packet delivery ratio by 27% and reduces the route selection time by 40% over the ABR protocol without trust calculation. PMID:25254243

  16. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.

  17. Optimizing radiotherapy protocols using computer automata to model tumour cell death as a function of oxygen diffusion processes.

    PubMed

    Paul-Gilloteaux, Perrine; Potiron, Vincent; Delpon, Grégory; Supiot, Stéphane; Chiavassa, Sophie; Paris, François; Costes, Sylvain V

    2017-05-23

    The concept of hypofractionation is gaining momentum in radiation oncology centres, enabled by recent advances in radiotherapy apparatus. The gain of efficacy of this innovative treatment must be defined. We present a computer model based on translational murine data for in silico testing and optimization of various radiotherapy protocols with respect to tumour resistance and the microenvironment heterogeneity. This model combines automata approaches with image processing algorithms to simulate the cellular response of tumours exposed to ionizing radiation, modelling the alteration of oxygen permeabilization in blood vessels against repeated doses, and introducing mitotic catastrophe (as opposed to arbitrary delayed cell-death) as a means of modelling radiation-induced cell death. Published data describing cell death in vitro as well as tumour oxygenation in vivo are used to inform parameters. Our model is validated by comparing simulations to in vivo data obtained from the radiation treatment of mice transplanted with human prostate tumours. We then predict the efficacy of untested hypofractionation protocols, hypothesizing that tumour control can be optimized by adjusting daily radiation dosage as a function of the degree of hypoxia in the tumour environment. Further biological refinement of this tool will permit the rapid development of more sophisticated strategies for radiotherapy.

  18. Evaluation of a 15-week CHOP protocol for the treatment of canine multicentric lymphoma.

    PubMed

    Burton, J H; Garrett-Mayer, E; Thamm, D H

    2013-12-01

    Dose intense CHOP protocols have been shown to improve outcome for people with non-Hodgkin's lymphoma, but evaluation of dose intense CHOP protocols for canine lymphoma is currently limited. The hypothesis of this retrospective study was that a 15-week dose intense CHOP protocol would have shorter treatment duration with similar efficacy to other doxorubicin-based multidrug protocols. Thirty-one client owned dogs with multicentric lymphoma were treated with a 15-week CHOP chemotherapy protocol with an overall response rate of 100% and a median progression-free interval (PFI) of 140 days [95% confidence interval (CI) 91-335 days]. Dogs that had two or more treatment delays had significantly prolonged PFI and overall survival in multivariate analysis. Dose intensity did not correlate with patient outcome. Dogs experiencing multiple treatment delays secondary to adverse events may receive their individual maximally tolerated dose while dogs with no adverse events may be underdosed. Future studies should focus on individual patient dose optimization. © 2012 Blackwell Publishing Ltd.

  19. Optimal molecular profiling of tissue and tissue components: defining the best processing and microdissection methods for biomedical applications.

    PubMed

    Rodriguez-Canales, Jaime; Hanson, Jeffrey C; Hipp, Jason D; Balis, Ulysses J; Tangrea, Michael A; Emmert-Buck, Michael R; Bova, G Steven

    2013-01-01

    Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This updated chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies and provide detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high-quality, appropriately anatomically tagged scientific results. Improvement in this area will significantly increase life science quality and productivity. The chapter is divided into introduction, materials, protocols, and notes subheadings. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. To get the greatest benefit from this chapter, readers are advised to read through the entire chapter first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread entries in each section pertaining to each of these single protocols.

  20. Pediatric Burn Reconstruction: Focus on Evidence.

    PubMed

    Fisher, Mark

    2017-10-01

    In this article, the author surveys the best available evidence to guide decision-making in pediatric burn reconstruction. Evidence-based protocols are examined in the context of optimizing form and function in children who have sustained burn injury. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Carb-3 is the superior anti-CD15 monoclonal antibody for immunohistochemistry.

    PubMed

    Røge, Rasmus; Nielsen, Søren; Vyberg, Mogens

    2014-07-01

    Immunohistochemical detection of CD15 is important in the diagnosis of Hodgkin lymphoma and may play a role in the classification of renal cell tumors (RCTs). In the NordiQC external quality assessment scheme, 4 CD15 tests, each with 71 to 121 participating laboratories, showed that 24% to 50% of the stains were insufficient. This was mainly because of very low primary antibody (Ab) concentration and insufficient heat-induced epitope retrieval, whereas the Ab clone performance seemed of little importance. The purpose of this study was to evaluate the performance of the most commonly used CD15 Abs on the basis of vendor-recommended and in-house optimized protocols. Multitissue blocks with 199 specimens including various malignant lymphomas, RCTs, and normal tissues were stained with 3 different concentrated (conc) CD15 Ab clones, Carb-3, MMA, and BY87, according to predetermined in-house optimized protocols on 2 automated immunostaining platforms. Carb-3 and MMA were also applied in ready-to-use (RTU) formats utilized according to vendor protocols. The extent and intensity of staining were determined using the H-score method. With an in-house optimized protocol, clone Carb-3-conc gave the highest H-scores in Hodgkin lymphoma, RCTs, and normal kidney tissue. Clones Carb-3-RTU and MMA-conc gave slightly lower scores, whereas clones MMA-RTU and BY87-conc gave the lowest scores and a large proportion of false-negative reactions. For all concentrated Abs, in-house optimized protocols resulted in increased sensitivity and improved overall staining results compared with vendor-recommended protocols. The importance of Ab selection and protocol optimization in immunohistochemical laboratories is emphasized.
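
    The H-score referenced above weights the percentage of positive cells by staining intensity (graded 0-3), giving a score between 0 and 300; a minimal sketch of that standard calculation:

    ```python
    def h_score(pct_by_intensity: dict) -> float:
        """H-score = sum over intensities 1..3 of (intensity * % of cells staining
        at that intensity); ranges from 0 (no staining) to 300 (all cells strong)."""
        return sum(i * pct_by_intensity.get(i, 0.0) for i in (1, 2, 3))

    # Example: 10% weak, 20% moderate, 30% strong staining -> H-score of 140
    print(h_score({1: 10, 2: 20, 3: 30}))
    ```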

  2. Optimizing otoacoustic emission protocols for a UNHS program.

    PubMed

    Hatzopoulos, S; Petruccelli, J; Ciorba, A; Martini, A

    2009-01-01

    To identify the optimal test protocol to screen for hearing problems in newborns, an evaluation of three distortion product otoacoustic emission (DPOAE) protocols was conducted in neonates from a well-baby nursery (WBN) and from a neonatal intensive care unit (NICU), and compared with the performance in newborns of a more standard protocol based on transient-evoked OAEs (TEOAEs). The DPOAE protocols used asymmetrical stimulus intensities (L1 > L2) with a frequency ratio of 1.22, in the following format: (P1) L1 = 60, L2 = 50 dB SPL; (P2) L1 = 65, L2 = 55 dB SPL; and (P3) L1 = 75, L2 = 65 dB SPL. Linear TEOAE responses, evoked by click stimuli of 75 dB peSPL, were used as controls of normal cochlear function. Five frequencies at 1.5, 2.0, 3.0, 4.0 and 5.0 kHz were tested with a common commercially available macro-based software subroutine (Otodynamics Corp, ILO-92). The project evaluated the responses from 1200 WBN infants (average age 48 h) and 350 low-birth-weight NICU infants, all randomly selected. Statistical analyses comparing the signal-to-noise ratios (S/N) at the predefined f2 frequencies indicated that the P1 and P2 DPOAE protocols generated similar responses. Significant S/N differences were observed in the P3 to P2 dataset comparisons. DPOAE scoring criteria were estimated from the P3 dataset using one-sided, distribution-free confidence intervals. The scoring criteria for a 'pass' were estimated as a minimum S/N of 6.0, 7.0 and 6.0 dB at 2.0, 3.0 and 4.0 kHz, respectively. In terms of feasibility, the P3 protocol generated responses in 98% of the WBN and 94.8% of the NICU infants. All three DPOAE protocols demonstrated shorter time-recording requirements than the standard TEOAE test. The false-positive and false-negative rates for the NICU infants were estimated as 0.0028 and 0.003%, respectively.
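
    The derived scoring rule can be written directly as a pass/refer check on the per-frequency S/N values; the sketch below assumes that all three frequencies must meet their criterion for a 'pass', which the abstract implies but does not state explicitly:

    ```python
    def dpoae_pass(snr_db: dict) -> bool:
        """Return True ('pass') if the S/N at 2, 3 and 4 kHz meets the criteria
        estimated from the P3 dataset (6.0, 7.0 and 6.0 dB, respectively)."""
        criteria = {2.0: 6.0, 3.0: 7.0, 4.0: 6.0}
        return all(snr_db.get(f, float("-inf")) >= c for f, c in criteria.items())

    # Example: S/N of 8.0, 7.5 and 6.0 dB at 2, 3 and 4 kHz is a 'pass'
    print(dpoae_pass({2.0: 8.0, 3.0: 7.5, 4.0: 6.0}))
    ```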

  3. Distributed Wireless Power Transfer With Energy Feedback

    NASA Astrophysics Data System (ADS)

    Lee, Seunghyun; Zhang, Rui

    2017-04-01

    Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
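
    A minimal sketch of the sequential phase-adaptation idea: each energy transmitter in turn tries a small set of candidate transmit phases while the others hold theirs fixed, and the receiver's scalar energy feedback selects the phase that maximized received power. The candidate-set size and the narrowband complex-gain channel model are assumptions for illustration:

    ```python
    import numpy as np

    def received_power(channels, phases):
        """Power of the coherently combined signal at the energy receiver."""
        return abs(np.sum(np.asarray(channels) * np.exp(1j * np.asarray(phases)))) ** 2

    def sequential_phase_training(channels, num_phases=8):
        """Each ET sequentially tests `num_phases` candidate phases; the phase
        yielding the highest received power (the energy feedback) is kept."""
        n = len(channels)
        phases = np.zeros(n)
        candidates = 2 * np.pi * np.arange(num_phases) / num_phases
        for i in range(n):
            trial = phases.copy()
            best_p, best_val = 0.0, -1.0
            for p in candidates:
                trial[i] = p
                val = received_power(channels, trial)
                if val > best_val:
                    best_p, best_val = p, val
            phases[i] = best_p
        return phases

    # Example: 4 ETs with random complex channel gains
    rng = np.random.default_rng(0)
    h = rng.normal(size=4) + 1j * rng.normal(size=4)
    print(sequential_phase_training(h))
    ```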

  4. Analysis of power management and system latency in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Oswald, Matthew T.; Rohwer, Judd A.; Forman, Michael A.

    2004-08-01

    Successful power management in a wireless sensor network requires optimization of the protocols which affect energy-consumption on each node and the aggregate effects across the larger network. System optimization for a given deployment scenario requires an analysis and trade-off of desired node and network features with their associated costs. The sleep protocol for an energy-efficient wireless sensor network for event detection, target classification, and target tracking developed at Sandia National Laboratories is presented. The dynamic source routing (DSR) algorithm is chosen to reduce network maintenance overhead, while providing a self-configuring and self-healing network architecture. A method for determining the optimal sleep time is developed and presented, providing reference data which spans several orders of magnitude. Message timing diagrams show that a node in a five-node cluster employing an optimal cyclic single-radio sleep protocol consumes 3% more energy and incurs 16 s more latency than nodes employing the more complex dual-radio STEM protocol.

  5. A Systems Approach to Designing Effective Clinical Trials Using Simulations

    PubMed Central

    Fusaro, Vincent A.; Patil, Prasad; Chi, Chih-Lin; Contant, Charles F.; Tonellato, Peter J.

    2013-01-01

    Background Pharmacogenetics in warfarin clinical trials has failed to show a significant benefit compared to standard clinical therapy. This study demonstrates a computational framework to systematically evaluate pre-clinical trial design of target population, pharmacogenetic algorithms, and dosing protocols to optimize primary outcomes. Methods and Results We programmatically created an end-to-end framework that systematically evaluates warfarin clinical trial designs. The framework includes options to create a patient population, multiple dosing strategies including genetic-based and non-genetic clinical-based, multiple dose adjustment protocols, pharmacokinetic/pharmacodynamic (PK/PD) modeling and international normalized ratio (INR) prediction, as well as various types of outcome measures. We validated the framework by conducting 1,000 simulations of the CoumaGen clinical trial primary endpoints. The simulation predicted a mean time in therapeutic range (TTR) of 70.6% and 72.2% (P = 0.47) in the standard and pharmacogenetic arms, respectively. Then, we evaluated another dosing protocol under the same original conditions and found a significant difference in TTR between the pharmacogenetic and standard arms (78.8% vs. 73.8%, respectively; P = 0.0065). Conclusions We demonstrate that this simulation framework is useful in the pre-clinical assessment phase to study and evaluate design options and provide evidence to optimize the clinical trial for patient efficacy and reduced risk. PMID:23261867
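
    The primary endpoint being simulated, time in therapeutic range (TTR), can be computed from a series of INR measurements; the sketch below uses simple day-counting rather than the Rosendaal interpolation such frameworks often use, which is an assumption for illustration:

    ```python
    def time_in_therapeutic_range(inr_by_day, low=2.0, high=3.0):
        """Fraction of measured days with INR inside the therapeutic window."""
        if not inr_by_day:
            return 0.0
        in_range = sum(1 for inr in inr_by_day if low <= inr <= high)
        return in_range / len(inr_by_day)

    # Example: 3 of 5 daily INR values fall inside [2.0, 3.0] -> TTR = 0.6
    print(time_in_therapeutic_range([1.8, 2.2, 2.6, 3.1, 2.9]))
    ```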

  6. Continuous-variable quantum key distribution with a leakage from state preparation

    NASA Astrophysics Data System (ADS)

    Derkach, Ivan; Usenko, Vladyslav C.; Filip, Radim

    2017-12-01

    We address side-channel leakage in a trusted preparation station of continuous-variable quantum key distribution with coherent and squeezed states. We consider two different scenarios: multimode Gaussian modulation, directly accessible to an eavesdropper, or side-channel loss of the signal states prior to the modulation stage. We show the negative impact of excessive modulation on both the coherent- and squeezed-state protocols. The impact is more pronounced for squeezed-state protocols and may require optimization of squeezing in the case of noisy quantum channels. Further, we demonstrate that the coherent-state protocol is immune to side-channel signal state leakage prior to modulation, while the squeezed-state protocol is vulnerable to such attacks, becoming more sensitive to the noise in the channel. In the general case of noisy quantum channels, the signal squeezing can be optimized to provide the best performance of the protocol in the presence of side-channel leakage prior to modulation. Our results demonstrate that leakage from the trusted source in continuous-variable quantum key distribution should not be underestimated, and that squeezing must be optimized for squeezed-state protocols to retain their advantage over coherent-state protocols.

  7. Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass.

    PubMed

    Paris, Michael T; Lafleur, Benoit; Dubin, Joel A; Mourtzakis, Marina

    2017-10-01

    Ultrasound is a non-invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four-site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole-body reference methods. Our primary objectives were to (i) compare the four-site protocol's ability to predict appendicular lean tissue mass from dual-energy X-ray absorptiometry; (ii) optimize the predictability of the four-site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. This observational cross-sectional study recruited 96 university and community dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole-body dual-energy X-ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine-site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four-site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. The four-site protocol was strongly associated (R2 = 0.72) with appendicular lean tissue mass, but Bland-Altman analysis displayed wide limits of agreement (-5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four-site protocol, improved the association (R2 = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (-3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89). The four-site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass. This optimized protocol can accurately identify low lean tissue mass, while still being easily applied at the bedside. © 2017 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
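
    In form, the optimized prediction model described above is a multiple linear regression of DXA appendicular lean tissue mass on ultrasound muscle thicknesses plus age and sex; the sketch below mirrors that predictor set, but the fitting routine and any coefficients it produces are illustrative, not the published equation:

    ```python
    import numpy as np

    def fit_lean_tissue_model(thick_quad, thick_arm, age, sex, alm_dxa):
        """Ordinary least squares fit of appendicular lean mass (kg, from DXA) on
        quadriceps thickness, anterior upper-arm thickness, age, and sex (0/1).
        Returns [intercept, b_quad, b_arm, b_age, b_sex]."""
        X = np.column_stack([np.ones(len(age)), thick_quad, thick_arm, age, sex])
        coeffs, *_ = np.linalg.lstsq(X, alm_dxa, rcond=None)
        return coeffs
    ```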

  8. Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass

    PubMed Central

    Paris, Michael T.; Lafleur, Benoit; Dubin, Joel A.

    2017-01-01

    Abstract Background Ultrasound is a non‐invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four‐site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole‐body reference methods. Our primary objectives were to (i) compare the four‐site protocol's ability to predict appendicular lean tissue mass from dual‐energy X‐ray absorptiometry; (ii) optimize the predictability of the four‐site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. Methods This observational cross‐sectional study recruited 96 university and community dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole‐body dual‐energy X‐ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine‐site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four‐site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. Results The four‐site protocol was strongly associated (R2 = 0.72) with appendicular lean tissue mass, but Bland–Altman analysis displayed wide limits of agreement (−5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four‐site protocol, improved the association (R2 = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (−3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89). Conclusions The four‐site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass. This optimized protocol can accurately identify low lean tissue mass, while still being easily applied at the bedside. PMID:28722298

  9. An Efficient Framework Model for Optimizing Routing Performance in VANETs

    PubMed Central

    Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala

    2018-01-01

    Routing in Vehicular Ad hoc Networks (VANETs) is complicated by the highly dynamic mobility of vehicles. The efficiency of a routing protocol is influenced by a number of factors such as network density, bandwidth constraints, traffic load, and mobility patterns resulting in frequent changes in network topology. Therefore, Quality of Service (QoS) is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of the network resources to further reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a simulation network stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted cost of the factors into a single value, and an optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet loss (PL), and End-to-End Delay (E2ED). PMID:29462884

  10. Pharmaceutical care for patients with COPD in Belgium and views on protocol implementation.

    PubMed

    Tommelein, Eline; Tollenaere, Kathleen; Mehuys, Els; Boussery, Koen

    2014-08-01

    A protocol-based pharmaceutical care program (the PHARMACOP-protocol) focusing on patient counselling during prescription filling has been shown to be effective in patients with chronic obstructive pulmonary disease (COPD). However, implementation of this protocol in daily practice has not yet been studied. To describe the current implementation level of the items included in the PHARMACOP-protocol in Belgian community pharmacies and to evaluate pharmacists' perspectives on the implementation of this protocol in daily practice. A cross-sectional study was conducted from April to June 2012 in randomly selected community pharmacies in Flanders. Pharmacists were questioned using structured interviews. 125 pharmacies were contacted and 80 managing pharmacists (64%) participated. In >70% of pharmacies, 4/7 protocol items for first prescriptions and 3/5 protocol items for follow-up prescriptions were already routinely implemented. For first and follow-up prescriptions, respectively, 39 (49%) and 34 pharmacists (43%) stated they would need to spend at least 5 min extra to offer optimal patient counselling. The most frequently mentioned barriers preventing protocol implementation were lack of time (80%), no integration in pharmacy software (61%), and too much administrative burden (58%). Approximately 50% of the PHARMACOP-protocol items are currently routinely provided in Belgian community pharmacies. Nearly all interviewed pharmacists are willing to implement the protocol fully or partially in daily practice.

  11. Template based protein structure modeling by global optimization in CASP11.

    PubMed

    Joo, Keehyoung; Joung, InSuk; Lee, Sun Young; Kim, Jong Yun; Cheng, Qianyi; Manavalan, Balachandran; Joung, Jong Young; Heo, Seungryong; Lee, Juyong; Nam, Mikyung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

    For the template-based modeling (TBM) of CASP11 targets, we have developed three new protein modeling protocols (nns for server prediction and LEE and LEER for human prediction) by improving upon our previous CASP protocols (CASP7 through CASP10). We applied the powerful global optimization method of conformational space annealing to three stages of optimization, including multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain remodeling. For more successful fold recognition, a new alignment method called CRFalign was developed. It can incorporate sensitive positional and environmental dependence in alignment scores as well as strong nonlinear correlations among various features. Modifications and adjustments were made to the form of the energy function and weight parameters pertaining to the chain building procedure. For the side-chain remodeling step, residue-type dependence was introduced to the cutoff value that determines the entry of a rotamer to the side-chain modeling library. The improved performance of the nns server method is attributed to successful fold recognition achieved by combining several methods including CRFalign and to the current modeling formulation that can incorporate native-like structural aspects present in multiple templates. The LEE protocol is identical to the nns one except that CASP11-released server models are used as templates. The success of LEE in utilizing CASP11 server models indicates that proper template screening and template clustering assisted by appropriate cluster ranking promises a new direction to enhance protein 3D modeling. Proteins 2016; 84(Suppl 1):221-232. © 2015 Wiley Periodicals, Inc.

  12. Optimized imaging of the midface and orbits

    PubMed Central

    Langner, Sönke

    2015-01-01

    A variety of imaging techniques are available for imaging the midface and orbits. This review article describes the different imaging techniques based on the recent literature and discusses their impact on clinical routine imaging. Imaging protocols are presented for different diseases and the different imaging modalities. PMID:26770279

  13. Strengths-based behavioral intervention for parents of adolescents with type 1 diabetes using an mHealth app (Type 1 Doing Well): Protocol for a pilot randomized controlled trial

    USDA-ARS?s Scientific Manuscript database

    Supportive parent involvement for adolescents' type 1 diabetes (T1D) self-management promotes optimal diabetes outcomes. However, family conflict is common and can interfere with collaborative family teamwork. Few interventions have used explicitly strengths-based approaches to help reinforce desire...

  14. Identification of novel drug scaffolds for inhibition of SARS-CoV 3-Chymotrypsin-like protease using virtual and high-throughput screenings.

    PubMed

    Lee, Hyun; Mittal, Anuradha; Patel, Kavankumar; Gatuz, Joseph L; Truong, Lena; Torres, Jaime; Mulhearn, Debbie C; Johnson, Michael E

    2014-01-01

    We have used a combination of virtual screening (VS) and high-throughput screening (HTS) techniques to identify novel, non-peptidic small molecule inhibitors against human SARS-CoV 3CLpro. A structure-based VS approach integrating docking and pharmacophore based methods was employed to computationally screen 621,000 compounds from the ZINC library. The screening protocol was validated using known 3CLpro inhibitors and was optimized for speed, improved selectivity, and for accommodating receptor flexibility. Subsequently, a fluorescence-based enzymatic HTS assay was developed and optimized to experimentally screen approximately 41,000 compounds from four structurally diverse libraries chosen mainly based on the VS results. False positives from initial HTS hits were eliminated by a secondary orthogonal binding analysis using surface plasmon resonance (SPR). The campaign identified a reversible small molecule inhibitor exhibiting mixed-type inhibition with a Ki value of 11.1 μM. Together, these results validate our protocols as suitable approaches to screen virtual and chemical libraries, and the newly identified compound reported in our study represents a promising structural scaffold to pursue for further SARS-CoV 3CLpro inhibitor development. Copyright © 2013. Published by Elsevier Ltd.

  15. Model-Based Design of Tree WSNs for Decentralized Detection.

    PubMed

    Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam

    2015-08-20

    The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.

  16. YIP Formal Synthesis of Software-Based Control Protocols for Fractionated, Composable Autonomous Systems

    DTIC Science & Technology

    2016-07-08

    Systems Using Automata Theory and Barrier Certificates. We developed a sound but incomplete method for the computational verification of specifications...method merges ideas from automata-based model checking with those from control theory, including so-called barrier certificates and optimization-based... "Automata theory meets barrier certificates: Temporal logic verification of nonlinear systems," IEEE Transactions on Automatic Control, 2015. [J2] R

  17. The Promoting Effective Advance Care for Elders (PEACE) randomized pilot study: theoretical framework and study design.

    PubMed

    Allen, Kyle R; Hazelett, Susan E; Radwany, Steven; Ertle, Denise; Fosnight, Susan M; Moore, Pamela S

    2012-04-01

    Practice guidelines are available for hospice and palliative medicine specialists and geriatricians. However, these guidelines do not adequately address the needs of patients who straddle the 2 specialties: homebound chronically ill patients. The purpose of this article is to describe the theoretical basis for the Promoting Effective Advance Care for Elders (PEACE) randomized pilot study. PEACE is an ongoing 2-group randomized pilot study (n=80) to test an in-home interdisciplinary care management intervention that combines palliative care approaches to symptom management, psychosocial and emotional support, and advance care planning with geriatric medicine approaches to optimizing function and addressing polypharmacy. The population comprises new enrollees into PASSPORT, Ohio's community-based, long-term care Medicaid waiver program. All PASSPORT enrollees have geriatric/palliative care crossover needs because they are nursing home eligible. The intervention is based on Wagner's Chronic Care Model and includes comprehensive interdisciplinary care management for these low-income frail elders with chronic illnesses, uses evidence-based protocols, emphasizes patient activation, and integrates with community-based long-term care and other community agencies. Our model, with its standardized, evidence-based medical and psychosocial intervention protocols, will transport easily to other sites that are interested in optimizing outcomes for community-based, chronically ill older adults. © Mary Ann Liebert, Inc.

  18. Task based rehabilitation protocol for elite athletes following Anterior Cruciate ligament reconstruction: a clinical commentary.

    PubMed

    Herrington, Lee; Myer, Gregory; Horsley, Ian

    2013-11-01

    Anterior Cruciate ligament (ACL) injuries are one of the most common and devastating knee injuries sustained whilst participating in sport. ACL reconstruction (ACLR) remains the standard approach for athletes who aim to return to high level sporting activities but the outcome from surgery is not assured. Secondary morbidities and an inability to return to the same competitive level are common following ACLR. One factor which might be linked to these sub-optimal outcomes may be a failure to have clearly defined performance criteria for return to activity and sport. This paper presents a commentary describing a structured return to sport rehabilitation protocol for athletes following ACLR. The protocol was developed from synthesis of the available literature and consensus of physiotherapists and strength and conditioning coaches based in the home country Institute of Sports within the United Kingdom. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks

    PubMed Central

    Li, Xiaofang; Xu, Lizhong; Wang, Huibin; Song, Jie; Yang, Simon X.

    2010-01-01

    The traditional Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads results in premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. With a full consideration of information on energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as the meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks. PMID:22219670
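    As a rough illustration of the optimization machinery involved, the sketch below runs a minimal DE/rand/1/bin loop over a toy cluster-head placement objective (total distance to cluster members scaled by average residual energy). The objective, weighting, and node data are hypothetical stand-ins, not the multi-objective fitness used in the paper's improved LEACH.

```python
import random

def de_optimize(fitness, dim, bounds, pop_size=20, F=0.5, CR=0.9, iters=100):
    """Minimal differential evolution (DE/rand/1/bin) minimizer."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            trial = [mutant[k] if random.random() < CR else pop[i][k] for k in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Toy objective: trade off distance to cluster members against average residual energy.
members = [(1.0, 2.0, 0.8), (2.5, 1.0, 0.6), (0.5, 0.5, 0.9)]  # (x, y, energy)

def ch_cost(pos):
    x, y = pos
    dist = sum(((mx - x) ** 2 + (my - y) ** 2) ** 0.5 for mx, my, _ in members)
    energy = sum(e for _, _, e in members) / len(members)
    return dist / max(energy, 1e-6)  # hypothetical weighting

print(de_optimize(ch_cost, dim=2, bounds=(0.0, 3.0)))
```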

  20. Accounting for patient size in the optimization of dose and image quality of pelvis cone beam CT protocols on the Varian OBI system

    PubMed Central

    Moore, Craig S; Horsfield, Carl J; Saunderson, John R; Beavis, Andrew W

    2015-01-01

    Objective: The purpose of this study was to develop size-based radiotherapy kilovoltage cone beam CT (CBCT) protocols for the pelvis. Methods: Image noise was measured in an elliptical phantom of varying size for a range of exposure factors. Based on a previously defined “small pelvis” reference patient and CBCT protocol, appropriate exposure factors for small, medium, large and extra-large patients were derived which approximate the image noise behaviour observed on a Philips CT scanner (Philips Medical Systems, Best, Netherlands) with automatic exposure control (AEC). Selection criteria, based on maximum tube current–time product per rotation selected during the radiotherapy treatment planning scan, were derived based on an audit of patient size. Results: It has been demonstrated that 110 kVp yields acceptable image noise for reduced patient dose in pelvic CBCT scans of small, medium and large patients, when compared with manufacturer's default settings (125 kVp). Conversely, extra-large patients require increased exposure factors to give acceptable images. 57% of patients in the local population now receive much lower radiation doses, whereas 13% require higher doses (but now yield acceptable images). Conclusion: The implementation of size-based exposure protocols has significantly reduced radiation dose to the majority of patients with no negative impact on image quality. Increased doses are required on the largest patients to give adequate image quality. Advances in knowledge: The development of size-based CBCT protocols that use the planning CT scan (with AEC) to determine which protocol is appropriate ensures adequate image quality whilst minimizing patient radiation dose. PMID:26419892

  1. Evaluation and optimization of microbial DNA extraction from fecal samples of wild Antarctic bird species

    PubMed Central

    Eriksson, Per; Mourkas, Evangelos; González-Acuna, Daniel; Olsen, Björn; Ellström, Patrik

    2017-01-01

    Introduction: Advances in the development of nucleic acid-based methods have dramatically facilitated studies of host–microbial interactions. Fecal DNA analysis can provide information about the host’s microbiota and gastrointestinal pathogen burden. Numerous studies have been conducted in mammals, yet birds are less well studied. Avian fecal DNA extraction has proved challenging, partly due to the mixture of fecal and urinary excretions and the lack of optimized protocols. This study evaluates the performance of six commercial kits for DNA extraction from the feces of different bird species, focusing on penguins. Material and methods: Six DNA extraction kits were first tested according to the manufacturers’ instructions using mallard feces. The kit giving the highest DNA yield was selected for further optimization and evaluation using Antarctic bird feces. Results: Penguin feces constitute a challenging sample type: most of the DNA extraction kits failed to yield acceptable amounts of DNA. The QIAamp cador Pathogen kit (Qiagen) performed the best in the initial investigation. Further optimization of the protocol resulted in good yields of high-quality DNA from seven bird species of different avian orders. Conclusion: This study presents an optimized approach to DNA extraction from challenging avian fecal samples. PMID:29152162

  2. New Protocol Based on UHPLC-MS/MS for Quantitation of Metabolites in Xylose-Fermenting Yeasts

    NASA Astrophysics Data System (ADS)

    Campos, Christiane Gonçalves; Veras, Henrique César Teixeira; de Aquino Ribeiro, José Antônio; Costa, Patrícia Pinto Kalil Gonçalves; Araújo, Katiúscia Pereira; Rodrigues, Clenilson Martins; de Almeida, João Ricardo Moreira; Abdelnur, Patrícia Verardi

    2017-12-01

    Xylose fermentation is a bottleneck in second-generation ethanol production. As such, a comprehensive understanding of xylose metabolism in naturally xylose-fermenting yeasts is essential for prospection and construction of recombinant yeast strains. The objective of the current study was to establish a reliable metabolomics protocol for quantification of key metabolites of xylose catabolism pathways in yeast, and to apply this protocol to Spathaspora arborariae. Ultra-high performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS) was used to quantify metabolites, and afterwards, sample preparation was optimized to examine yeast intracellular metabolites. S. arborariae was cultivated using xylose as a carbon source under aerobic and oxygen-limited conditions. Ion pair chromatography (IPC) and hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-MS/MS) were shown to efficiently quantify 14 and 5 metabolites, respectively, in a more rapid chromatographic protocol than previously described. Thirteen and eleven metabolites were quantified in S. arborariae under aerobic and oxygen-limited conditions, respectively. This targeted metabolomics protocol is shown here to quantify a total of 19 metabolites, including sugars, phosphates, coenzymes, monosaccharides, and alcohols, from xylose catabolism pathways (glycolysis, pentose phosphate pathway, and tricarboxylic acid cycle) in yeast. Furthermore, to our knowledge, this is the first time that intracellular metabolites have been quantified in S. arborariae after xylose consumption. The results indicated that fine control of oxygen levels during fermentation is necessary to optimize ethanol production by S. arborariae. The protocol presented here may be applied to other yeast species and could support yeast genetic engineering to improve second-generation ethanol production.

  3. A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks

    PubMed Central

    Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang

    2017-01-01

    Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and jointly decodes packets using encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in an underwater environment by Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
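    As a minimal illustration of the network-coding idea that NCRP builds on, the sketch below XOR-combines two packets at a relay so that a sink that already holds one of them can recover the other. This is only the generic coding primitive; NCRP's actual encoding, scheduling and routing logic are more involved, and the packet contents here are made up.

```python
def xor_encode(p1: bytes, p2: bytes) -> bytes:
    """XOR two equal-length packets into one coded packet."""
    return bytes(a ^ b for a, b in zip(p1, p2))

pkt_a = b"SENSOR-A:0042"
pkt_b = b"SENSOR-B:0077"
coded = xor_encode(pkt_a, pkt_b)

# A sink that already holds pkt_a can recover pkt_b from the coded packet.
recovered_b = xor_encode(coded, pkt_a)
assert recovered_b == pkt_b
```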

  4. VANET Clustering Based Routing Protocol Suitable for Deserts.

    PubMed

    Nasr, Mohammed Mohsen Mohammed; Abdelgader, Abdeldime Mohamed Salih; Wang, Zhi-Gong; Shen, Lian-Feng

    2016-04-06

    In recent years, applications of vehicular ad hoc networks (VANETs) have emerged in security, safety, rescue, exploration, military and communication redundancy systems for non-populated areas, in addition to their ordinary use in urban environments as an essential part of intelligent transportation systems (ITS). This paper proposes a novel algorithm for the process of organizing a cluster structure and cluster head election (CHE) suitable for VANETs. Moreover, it presents a robust clustering-based routing protocol, which is appropriate for deserts and can achieve high communication efficiency, ensuring reliable information delivery and optimal exploitation of the equipment on each vehicle. A comprehensive simulation is conducted to evaluate the performance of the proposed CHE and routing algorithms.

  5. VANET Clustering Based Routing Protocol Suitable for Deserts

    PubMed Central

    Mohammed Nasr, Mohammed Mohsen; Abdelgader, Abdeldime Mohamed Salih; Wang, Zhi-Gong; Shen, Lian-Feng

    2016-01-01

    In recent years, applications of vehicular ad hoc networks (VANETs) have emerged in security, safety, rescue, exploration, military and communication redundancy systems for non-populated areas, in addition to their ordinary use in urban environments as an essential part of intelligent transportation systems (ITS). This paper proposes a novel algorithm for the process of organizing a cluster structure and cluster head election (CHE) suitable for VANETs. Moreover, it presents a robust clustering-based routing protocol, which is appropriate for deserts and can achieve high communication efficiency, ensuring reliable information delivery and optimal exploitation of the equipment on each vehicle. A comprehensive simulation is conducted to evaluate the performance of the proposed CHE and routing algorithms. PMID:27058539

  6. Infinite horizon optimal impulsive control with applications to Internet congestion control

    NASA Astrophysics Data System (ADS)

    Avrachenkov, Konstantin; Habachi, Oussama; Piunovskiy, Alexey; Zhang, Yi

    2015-04-01

    We investigate infinite-horizon deterministic optimal control problems with both gradual and impulsive controls, where any finitely many impulses are allowed simultaneously. Both discounted and long-run time-average criteria are considered. We establish very general and at the same time natural conditions, under which the dynamic programming approach results in an optimal feedback policy. The established theoretical results are applied to the Internet congestion control, and by solving analytically and nontrivially the underlying optimal control problems, we obtain a simple threshold-based active queue management scheme, which takes into account the main parameters of the transmission control protocols, and improves the fairness among the connections in a given network.
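    The closing result, a simple threshold-based active queue management rule, can be illustrated in a few lines of code. The sketch below marks arriving packets once the queue length crosses a threshold and drops them at capacity; the threshold and capacity values are arbitrary placeholders, and the paper's analytically derived policy is not reproduced here.

```python
from collections import deque

class ThresholdAQM:
    """Mark (or drop) arriving packets once the queue length crosses a threshold."""
    def __init__(self, threshold=50, capacity=100):
        self.threshold = threshold
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return "drop"              # hard overflow
        if len(self.queue) >= self.threshold:
            self.queue.append(packet)
            return "mark"              # congestion signal to the TCP sender (e.g. ECN)
        self.queue.append(packet)
        return "accept"

aqm = ThresholdAQM(threshold=3, capacity=5)
print([aqm.enqueue(i) for i in range(6)])
# -> ['accept', 'accept', 'accept', 'mark', 'mark', 'drop']
```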

  7. Optimization of ultrasound-assisted extraction of charantin from Momordica charantia fruits using response surface methodology.

    PubMed

    Ahamad, Javed; Amin, Saima; Mir, Showkat R

    2015-01-01

    Momordica charantia Linn. (Cucurbitaceae) fruits are well known for their beneficial effects in diabetes, which are often attributed to their bioactive component charantin. The aim of the present study is to develop and optimize an efficient protocol for the extraction of charantin from M. charantia fruits. Response surface methodology (RSM) was used for the optimization of ultrasound-assisted extraction (UAE) conditions. RSM was based on a three-level, three-variable Box-Behnken design (BBD), and the studied variables included solid to solvent ratio, extraction temperature, and extraction time. The optimal conditions predicted by the BBD were: UAE with methanol:water (80:20, v/v) at 46°C for 120 min with a solid to solvent ratio of 1:26 w/v, under which the yield of charantin was 3.18 mg/g. Confirmation trials under slightly adjusted conditions yielded 3.12 ± 0.14 mg/g of charantin on a dry weight basis of fruits. The result of UAE was also compared with the Soxhlet extraction method, and UAE was found to be 2.74-fold more efficient than Soxhlet extraction for extracting charantin. A facile UAE protocol for a high extraction yield of charantin was developed and validated.
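    To make the response-surface step concrete, the sketch below fits a full second-order model to a Box-Behnken-style coded design using ordinary least squares. The factor levels and yield values are synthetic placeholders, not the study's data; only the model structure (intercept, linear, interaction, and quadratic terms) reflects standard RSM practice.

```python
import numpy as np

# Coded factor levels (-1, 0, +1) for three variables, in the spirit of a Box-Behnken design.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
y = np.array([2.1, 2.4, 2.3, 2.9, 2.0, 2.5, 2.6, 3.0,
              2.2, 2.6, 2.7, 3.1, 3.2, 3.1, 3.2])  # illustrative yields (mg/g)

def design_matrix(X):
    """Full quadratic model: intercept, linear, interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(coeffs, 3))  # inspect the fitted response-surface coefficients
```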

  8. Performance analysis of optimal power allocation in wireless cooperative communication systems

    NASA Astrophysics Data System (ADS)

    Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li

    2013-03-01

    Cooperative communication has been recently proposed in wireless communication systems for exploring the inherent spatial diversity in relay channels. The Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits some information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power at the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. Firstly, we derive a closed-form SER formulation for MPSK signals using the concept of the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique with mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.

  9. Hybrid optimal scheduling for intermittent androgen suppression of prostate cancer

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; di Bernardo, Mario; Bruchovsky, Nicholas; Aihara, Kazuyuki

    2010-12-01

    We propose a method for achieving an optimal protocol of intermittent androgen suppression for the treatment of prostate cancer. Since the model that reproduces the dynamical behavior of the surrogate tumor marker, prostate specific antigen, is piecewise linear, we can obtain an analytical solution for the model. Based on this, we derive conditions for either stopping or delaying recurrent disease. The solution also provides a design principle for the most favorable schedule of treatment that minimizes the rate of expansion of the malignant cell population.
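    The sketch below simulates a toy version of the kind of piecewise-linear switching dynamics described here: a marker grows at one linear rate off therapy and decays at another rate on therapy, with therapy toggled at fixed upper and lower thresholds. All rates and thresholds are illustrative assumptions, and the sketch does not reproduce the paper's multi-population model or its analytical scheduling conditions.

```python
import math

# Toy piecewise-linear (per-phase linear ODE) model of a tumour marker under
# intermittent suppression: growth rate a off therapy, decay rate b on therapy.
# Parameters and thresholds are illustrative, not fitted clinical values.
a, b = 0.02, 0.05            # per-day rates
upper, lower = 10.0, 4.0     # switch therapy on above `upper`, off below `lower`
x, on_therapy, dt = 5.0, False, 1.0

history = []
for day in range(600):
    rate = -b if on_therapy else a
    x *= math.exp(rate * dt)          # exact per-phase solution of dx/dt = rate * x
    if x > upper:
        on_therapy = True
    elif x < lower:
        on_therapy = False
    history.append((day, round(x, 2), on_therapy))

print(history[:3], history[-1])
```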

  10. Two-qubit correlations revisited: average mutual information, relevant (and useful) observables and an application to remote state preparation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2017-07-01

    Understanding how correlations can be used for quantum communication protocols is a central goal of quantum information science. While many authors have linked the global measures of correlations such as entanglement or discord to the performance of specific protocols, in general the latter may require only correlations between specific observables. In this work, we first introduce a general measure of correlations for two-qubit states, based on the classical mutual information between local observables. Our measure depends on the state’s purity and the symmetry in the correlation distribution, according to which we provide a classification of maximally mixed marginal states (MMMS). We discuss the complementarity relation between correlations and coherence. By focusing on a simple yet paradigmatic example, i.e. the remote state preparation protocol, we introduce a method to systematically define the proper protocol-tailored measures of the correlations. The method is based on the identification of those correlations that are relevant (useful) for the protocol. On the one hand, the approach allows the role of the symmetry of the correlation distribution to be discussed in determining the efficiency of the protocol, both for MMMS and general two-qubit quantum states, and on the other hand, it allows an optimized protocol for non-MMMS to be devised, which is more efficient with respect to the standard one. Overall, our findings clarify how the key resources in simple communication protocols are the purity of the state used and the symmetry of the correlation distribution.
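    The core quantity, the classical mutual information between outcomes of local observables, is easy to compute numerically. The sketch below builds the joint outcome distribution p(a,b) = Tr[ρ (P_a ⊗ P_b)] for a pair of local projective measurements on a two-qubit state and evaluates its mutual information; the Bell-state/Z-basis example is an illustration, not a case analyzed in the paper.

```python
import numpy as np

def mutual_information(rho, A_projs, B_projs):
    """Classical mutual information (in bits) between outcomes of local measurements
    {A_projs} on qubit 1 and {B_projs} on qubit 2 for a two-qubit state rho."""
    p = np.zeros((len(A_projs), len(B_projs)))
    for i, Pa in enumerate(A_projs):
        for j, Pb in enumerate(B_projs):
            p[i, j] = np.real(np.trace(rho @ np.kron(Pa, Pb)))
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    mi = 0.0
    for i in range(p.shape[0]):
        for j in range(p.shape[1]):
            if p[i, j] > 1e-12:
                mi += p[i, j] * np.log2(p[i, j] / (pa[i] * pb[j]))
    return mi

# Example: Bell state |Phi+>, both parties measure in the computational (Z) basis.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
print(mutual_information(rho, [P0, P1], [P0, P1]))  # -> 1.0 bit
```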

  11. Optimization of a Sample Processing Protocol for Recovery of ...

    EPA Pesticide Factsheets

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps.

  12. DNA elution from buccal cells stored on Whatman FTA Classic Cards using a modified methanol fixation method.

    PubMed

    Johanson, Helene C; Hyland, Valentine; Wicking, Carol; Sturm, Richard A

    2009-04-01

    We describe here a method for DNA elution from buccal cells and whole blood both collected onto Whatman FTA technology, using methanol fixation followed by an elution PCR program. Extracted DNA is comparable in quality to published Whatman FTA protocols, as judged by PCR-based genotyping. Elution of DNA from the dried sample is a known rate-limiting step in the published Whatman FTA protocol; this method enables the use of each 3-mm punch of sample for several PCR reactions instead of the standard, one PCR reaction per sample punch. This optimized protocol therefore extends the usefulness and cost effectiveness of each buccal swab sample collected, when used for nucleic acid PCR and genotyping.

  13. Extended analysis of the Trojan-horse attack in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Vinay, Scott E.; Kok, Pieter

    2018-04-01

    The discrete-variable quantum key distribution protocols based on the 1984 protocol of Bennett and Brassard (BB84) are known to be secure against an eavesdropper, Eve, intercepting the flying qubits and performing any quantum operation on them. However, these protocols may still be vulnerable to side-channel attacks. We investigate the Trojan-horse side-channel attack where Eve sends her own state into Alice's apparatus and measures the reflected state to estimate the key. We prove that the separable coherent state is optimal for Eve among the class of multimode Gaussian attack states, even in the presence of thermal noise. We then provide a bound on the secret key rate in the case where Eve may use any separable state.

  14. Induction of complex immune responses and strong protection against retrovirus challenge by adenovirus-based immunization depends on the order of vaccine delivery.

    PubMed

    Kaulfuß, Meike; Wensing, Ina; Windmann, Sonja; Hrycak, Camilla Patrizia; Bayer, Wibke

    2017-02-06

    In the Friend retrovirus mouse model we developed potent adenovirus-based vaccines that were designed to induce either strong Friend virus GagL85-93-specific CD8+ T cell or antibody responses, respectively. To optimize the immunization outcome we evaluated vaccination strategies using combinations of these vaccines. While the vaccines on their own confer strong protection from a subsequent Friend virus challenge, the simple combination of the vaccines for the establishment of an optimized immunization protocol did not result in a further improvement of vaccine effectivity. We demonstrate that the co-immunization with GagL85-93/leader-gag encoding vectors together with envelope-encoding vectors abrogates the induction of GagL85-93-specific CD8+ T cells, and in successive immunization protocols the immunization with the GagL85-93/leader-gag encoding vector had to precede the immunization with an envelope encoding vector for the efficient induction of GagL85-93-specific CD8+ T cells. Importantly, the antibody response to envelope was in fact enhanced when the mice were adenovirus-experienced from a prior immunization, highlighting the expedience of this approach. To circumvent the immunosuppressive effect of envelope on immune responses to simultaneously or subsequently administered immunogens, we developed a two immunizations-based vaccination protocol that induces strong immune responses and confers robust protection of highly Friend virus-susceptible mice from a lethal Friend virus challenge.

  15. Evaluation and Adaptation of a Laboratory-Based cDNA Library Preparation Protocol for Retrospective Sequencing of Archived MicroRNAs from up to 35-Year-Old Clinical FFPE Specimens

    PubMed Central

    Loudig, Olivier; Wang, Tao; Ye, Kenny; Lin, Juan; Wang, Yihong; Ramnauth, Andrew; Liu, Christina; Stark, Azadeh; Chitale, Dhananjay; Greenlee, Robert; Multerer, Deborah; Honda, Stacey; Daida, Yihe; Spencer Feigelson, Heather; Glass, Andrew; Couch, Fergus J.; Rohan, Thomas; Ben-Dov, Iddo Z.

    2017-01-01

    Formalin-fixed paraffin-embedded (FFPE) specimens, when used in conjunction with patient clinical data history, represent an invaluable resource for molecular studies of cancer. Even though nucleic acids extracted from archived FFPE tissues are degraded, their molecular analysis has become possible. In this study, we optimized a laboratory-based next-generation sequencing barcoded cDNA library preparation protocol for analysis of small RNAs recovered from archived FFPE tissues. Using matched fresh and FFPE specimens, we evaluated the robustness and reproducibility of our optimized approach, as well as its applicability to archived clinical specimens stored for up to 35 years. We then evaluated this cDNA library preparation protocol by performing a miRNA expression analysis of archived breast ductal carcinoma in situ (DCIS) specimens, selected for their relation to the risk of subsequent breast cancer development and obtained from six different institutions. Our analyses identified six miRNAs (miR-29a, miR-221, miR-375, miR-184, miR-363, miR-455-5p) differentially expressed between DCIS lesions from women who subsequently developed an invasive breast cancer (cases) and women who did not develop invasive breast cancer within the same time interval (control). Our thorough evaluation and application of this laboratory-based miRNA sequencing analysis indicates that the preparation of small RNA cDNA libraries can reliably be performed on older, archived, clinically-classified specimens. PMID:28335433

  16. Preoperative magnetic resonance imaging protocol for endoscopic cranial base image-guided surgery.

    PubMed

    Grindle, Christopher R; Curry, Joseph M; Kang, Melissa D; Evans, James J; Rosen, Marc R

    2011-01-01

    Despite the increasing utilization of image-guided surgery, no radiology protocols for obtaining magnetic resonance (MR) imaging of adequate quality are available in the current literature. At our institution, more than 300 endonasal cranial base procedures including pituitary, extended pituitary, and other anterior skull base procedures have been performed in the past 3 years. To facilitate and optimize preoperative evaluation and assessment, there was a need to develop a magnetic resonance protocol. Retrospective Technical Assessment was performed. Through a collaborative effort between the otolaryngology, neurosurgery, and neuroradiology departments at our institution, a skull base MR image-guided (IGS) protocol was developed with several ends in mind. First, it was necessary to generate diagnostic images useful for the more frequently seen pathologies to improve workflow and limit the expense and inefficiency of case-specific MR studies. Second, it was necessary to generate sequences useful for IGS, preferably using sequences that best highlight that lesion. Currently, at our institution, all MR images used for IGS are obtained using this protocol as part of preoperative planning. The protocol that has been developed allows for thin-cut precontrast and postcontrast axial cuts that can be used to plan intraoperative image guidance. It also obtains a thin-cut T2 axial series that can be compiled separately for intraoperative imaging, or may be fused with computed tomographic images for combined modality. The outlined protocol obtains image sequences effective for diagnostic and operative purposes for image-guided surgery using both T1 and T2 sequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Magnetic nanobeads present during enzymatic amplification and labeling for a simplified DNA detection protocol based on AC susceptometry

    NASA Astrophysics Data System (ADS)

    Bejhed, Rebecca S.; Strømme, Maria; Svedlindh, Peter; Ahlford, Annika; Strömberg, Mattias

    2015-12-01

    Magnetic biosensors are promising candidates for low-cost point-of-care biodiagnostic devices. For optimal efficiency it is crucial to minimize the time and complexity of the assay protocol including target recognition, amplification, labeling and read-out. In this work, possibilities for protocol simplifications for a DNA biodetection principle relying on hybridization of magnetic nanobeads to rolling circle amplification (RCA) products are investigated. The target DNA is recognized through a padlock ligation assay resulting in DNA circles serving as templates for the RCA process. It is found that beads can be present during amplification without noticeably interfering with the enzyme used for RCA (phi29 polymerase). As a result, the bead-coil hybridization can be performed immediately after amplification in a one-step manner at elevated temperature within a few minutes prior to read-out in an AC susceptometer setup, i.e. a combined protocol approach. Moreover, by recording the phase angle ξ = arctan(χ″/χ′), where χ′ and χ″ are the in-phase and out-of-phase components of the AC susceptibility, respectively, at one single frequency, the total assay time for the optimized combined protocol would be no more than 1.5 hours, often a relevant time frame for diagnosis of cancer and infectious disease. Also, when applying the phase angle method, normalization of the AC susceptibility data is not needed. These findings are useful for the development of point-of-care biodiagnostic devices relying on bead-coil binding and magnetic AC susceptometry.
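    The read-out quantity itself is a one-liner; the sketch below computes the phase angle ξ = arctan(χ″/χ′) from single-frequency in-phase and out-of-phase susceptibility readings. The numerical values are arbitrary placeholders, not data from the assay.

```python
import math

def phase_angle(chi_prime, chi_double_prime):
    """Phase angle xi = arctan(chi'' / chi') of the AC susceptibility, in degrees."""
    return math.degrees(math.atan2(chi_double_prime, chi_prime))

# Illustrative single-frequency readings (arbitrary units), not measured values.
print(phase_angle(chi_prime=2.4e-6, chi_double_prime=3.1e-7))
```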

  18. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
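    The sketch below illustrates the general caching idea behind segment-reuse optimizations such as CCS: memoize the result of an expensive signature verification keyed on the (segment, signature) pair, so that path segments shared by many updates are verified only once. The verification function here is a dummy hash comparison standing in for real cryptographic verification, and the plain LRU cache is an assumption rather than the cache management schemes evaluated in the paper.

```python
import hashlib
from functools import lru_cache

def verify_signature(segment: bytes, signature: bytes) -> bool:
    """Placeholder for an expensive cryptographic verification (e.g. ECDSA).
    Faked here with a hash comparison purely for illustration."""
    return hashlib.sha256(segment).digest()[:8] == signature[:8]

@lru_cache(maxsize=100_000)
def verify_cached(segment: bytes, signature: bytes) -> bool:
    """Memoized verification: repeated (segment, signature) pairs, which are
    common across updates sharing path prefixes, are checked only once."""
    return verify_signature(segment, signature)

seg = b"AS64500->AS64501"
sig = hashlib.sha256(seg).digest()[:8]
print(verify_cached(seg, sig), verify_cached(seg, sig))  # second call hits the cache
print(verify_cached.cache_info())
```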

  19. Practical decoy state for quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Xiongfeng; Qi Bing; Zhao Yi

    2005-07-15

    Decoy states have recently been proposed as a useful method for substantially improving the performance of quantum key distribution (QKD). Here, we present a general theory of the decoy state protocol based on only two decoy states and one signal state. We perform optimization on the choice of intensities of the two decoy states and the signal state. Our result shows that a decoy state protocol with only two types of decoy states - the vacuum and a weak decoy state - asymptotically approaches the theoretical limit of the most general type of decoy state protocol (with an infinite number of decoy states). We also present a one-decoy-state protocol. Moreover, we provide estimations on the effects of statistical fluctuations and suggest that, even for long-distance (larger than 100 km) QKD, our two-decoy-state protocol can be implemented with only a few hours of experimental data. In conclusion, decoy state quantum key distribution is highly practical.
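    For a feel of how such a vacuum-plus-weak-decoy analysis is used in practice, the sketch below evaluates the commonly quoted lower bound on the single-photon yield Y1 from the measured gains of the signal and weak-decoy states and the vacuum yield. The expression is a transcription of the standard two-decoy result and should be checked against the paper before use; the intensities, gains, and dark-count yield plugged in are illustrative values chosen so that the bound lands near the toy channel transmittance of about 1%.

```python
import math

def y1_lower_bound(mu, nu, Q_mu, Q_nu, Y0):
    """Lower bound on the single-photon yield Y1 from a vacuum + weak decoy analysis
    (transcribed from the standard two-decoy result; verify against the paper before use)."""
    return (mu / (mu * nu - nu ** 2)) * (
        Q_nu * math.exp(nu)
        - Q_mu * math.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
    )

# Illustrative numbers only: signal intensity mu, weak decoy nu, measured gains, dark-count yield.
print(y1_lower_bound(mu=0.5, nu=0.05, Q_mu=5.0e-3, Q_nu=5.1e-4, Y0=1.0e-5))
# -> roughly 0.0099, close to the 1% transmittance assumed when inventing the gains above
```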

  20. High-throughput deterministic single-cell encapsulation and droplet pairing, fusion, and shrinkage in a single microfluidic device.

    PubMed

    Schoeman, Rogier M; Kemna, Evelien W M; Wolbers, Floor; van den Berg, Albert

    2014-02-01

    In this article, we present a microfluidic device capable of successive high-yield single-cell encapsulation in droplets, with additional droplet pairing, fusion, and shrinkage. Deterministic single-cell encapsulation is realized using Dean-coupled inertial ordering of cells in a Yin-Yang-shaped curved microchannel using a double T-junction, with a frequency over 2000 Hz, followed by controlled droplet pairing with a 100% success rate. Subsequently, droplet fusion is realized using electrical actuation resulting in electro-coalescence of two droplets, each containing a single HL60 cell, with 95% efficiency. Finally, volume reduction of the fused droplet up to 75% is achieved by a triple pitchfork structure. This droplet volume reduction is necessary to obtain close cell-cell membrane contact necessary for final cell electrofusion, leading to hybridoma formation, which is the ultimate aim of this research. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Model-Based Design of Tree WSNs for Decentralized Detection †

    PubMed Central

    Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam

    2015-01-01

    The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989

  2. Data-centric multiobjective QoS-aware routing protocol for body sensor networks.

    PubMed

    Razzaque, Md Abdur; Hong, Choong Seon; Lee, Sungwon

    2011-01-01

    In this paper, we address Quality-of-Service (QoS)-aware routing issue for Body Sensor Networks (BSNs) in delay and reliability domains. We propose a data-centric multiobjective QoS-Aware routing protocol, called DMQoS, which facilitates the system to achieve customized QoS services for each traffic category differentiated according to the generated data types. It uses modular design architecture wherein different units operate in coordination to provide multiple QoS services. Their operation exploits geographic locations and QoS performance of the neighbor nodes and implements a localized hop-by-hop routing. Moreover, the protocol ensures (almost) a homogeneous energy dissipation rate for all routing nodes in the network through a multiobjective Lexicographic Optimization-based geographic forwarding. We have performed extensive simulations of the proposed protocol, and the results show that DMQoS has significant performance improvements over several state-of-the-art approaches.
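    The flavor of lexicographic optimization used for forwarding decisions can be shown in a few lines: order candidate next hops by one QoS attribute, break ties with the next, and so on. The sketch below is a generic illustration with made-up neighbor fields and a made-up ordering (reliability, then delay, then residual energy); it is not the hop-by-hop geographic forwarding module specified by DMQoS.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    link_reliability: float   # estimated packet delivery probability (higher is better)
    hop_delay_ms: float       # estimated per-hop delay (lower is better)
    residual_energy: float    # fraction of battery remaining (higher is better)

def select_next_hop(neighbors):
    """Lexicographic choice: maximize reliability first, then minimize delay,
    then prefer the neighbor with more residual energy."""
    return max(neighbors, key=lambda n: (n.link_reliability, -n.hop_delay_ms, n.residual_energy))

candidates = [
    Neighbor("n1", 0.95, 12.0, 0.40),
    Neighbor("n2", 0.95, 9.0, 0.35),
    Neighbor("n3", 0.90, 5.0, 0.90),
]
print(select_next_hop(candidates).node_id)  # -> n2
```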

  3. Hi-C 2.0: An optimized Hi-C procedure for high-resolution genome-wide mapping of chromosome conformation.

    PubMed

    Belaghzal, Houda; Dekker, Job; Gibcus, Johan H

    2017-07-01

    Chromosome conformation capture-based methods such as Hi-C have become mainstream techniques for the study of the 3D organization of genomes. These methods convert chromatin interactions reflecting topological chromatin structures into digital information (counts of pair-wise interactions). Here, we describe an updated protocol for Hi-C (Hi-C 2.0) that integrates recent improvements into a single protocol for efficient and high-resolution capture of chromatin interactions. This protocol combines chromatin digestion and frequently cutting enzymes to obtain kilobase (kb) resolution. It also includes steps to reduce random ligation and the generation of uninformative molecules, such as unligated ends, to improve the amount of valid intra-chromosomal read pairs. This protocol allows for obtaining information on conformational structures such as compartment and topologically associating domains, as well as high-resolution conformational features such as DNA loops. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A More Efficient Contextuality Distillation Protocol

    NASA Astrophysics Data System (ADS)

    Meng, Hui-xian; Cao, Huai-xin; Wang, Wen-hua; Fan, Ya-jing; Chen, Liang

    2018-03-01

    Based on the fact that both nonlocality and contextuality are resource theories, it is natural to ask how to amplify them more efficiently. In this paper, we present a contextuality distillation protocol which produces an n-cycle box B ∗ B′ from two given n-cycle boxes B and B′. It works efficiently for a class of contextual n-cycle (n ≥ 4) boxes which we term "the generalized correlated contextual n-cycle boxes". For any two generalized correlated contextual n-cycle boxes B and B′, B ∗ B′ is more contextual than both B and B′. Moreover, they can be distilled toward the maximally contextual box CHn as the number of iterations goes to infinity. Among the known protocols, our protocol has the strongest approximation ability and is optimal in terms of its distillation rate. It is worth noting that our protocol can witness a larger set of nonlocal boxes that make communication complexity trivial than the protocol in Brunner and Skrzypczyk (Phys. Rev. Lett. 102, 160403 2009); this might be helpful for exploring the problem of why quantum nonlocality is limited.

  5. A More Efficient Contextuality Distillation Protocol

    NASA Astrophysics Data System (ADS)

    Meng, Hui-xian; Cao, Huai-xin; Wang, Wen-hua; Fan, Ya-jing; Chen, Liang

    2017-12-01

    Based on the fact that both nonlocality and contextuality are resource theories, it is natural to ask how to amplify them more efficiently. In this paper, we present a contextuality distillation protocol which produces an n-cycle box B ∗ B′ from two given n-cycle boxes B and B′. It works efficiently for a class of contextual n-cycle (n ≥ 4) boxes which we term "the generalized correlated contextual n-cycle boxes". For any two generalized correlated contextual n-cycle boxes B and B′, B ∗ B′ is more contextual than both B and B′. Moreover, they can be distilled toward the maximally contextual box CHn as the number of iterations goes to infinity. Among the known protocols, our protocol has the strongest approximation ability and is optimal in terms of its distillation rate. It is worth noting that our protocol can witness a larger set of nonlocal boxes that make communication complexity trivial than the protocol in Brunner and Skrzypczyk (Phys. Rev. Lett. 102, 160403 2009); this might be helpful for exploring the problem of why quantum nonlocality is limited.

  6. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    PubMed

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols.

  7. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    PubMed Central

    Hu, Xiaopeng; Wang, Fan

    2017-01-01

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR’s routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols. PMID:29236031

  8. Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance.

    PubMed

    Alia, Shaun M; Pivovar, Bryan S

    2018-04-27

    Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts, and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 °C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 °C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than those of Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. These techniques have also been used for a variety of metal combinations, and can be applied to develop catalysts for a number of electrochemical processes.

  9. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl; Tijssen, Rob H.N.; Senneville, Baudouin D. de

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  10. Sigma Routing Metric for RPL Protocol.

    PubMed

    Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian

    2018-04-21

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
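    The sketch below captures the core idea of the SIGMA-ETX metric as described here: score a candidate route by the standard deviation of its per-link ETX values rather than their mean, so that routes containing one disproportionately bad hop are penalized. The link ETX values are invented, and the exact way the paper combines this spread with hop count may differ.

```python
import statistics

def sigma_etx(link_etx):
    """Route metric based on the spread (population standard deviation) of the
    per-link ETX values along a candidate route, rather than their mean."""
    return statistics.pstdev(link_etx)

route_a = [1.1, 1.2, 1.1, 1.3]   # short, uniform hops
route_b = [1.0, 1.0, 3.5]        # contains one long, lossy hop
best = min([route_a, route_b], key=sigma_etx)
print(best, round(sigma_etx(route_a), 3), round(sigma_etx(route_b), 3))
```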

  11. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has been widely employed to distribute live video data on the internet recently. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, pull protocol brings about longer streaming delay, which is caused by the handshaking process of advertising buffer map message, sending request message and scheduling of the data block. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements block scheduling algorithm at sender side, where the block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show mesh-push outperforms the pull scheduling in streaming delay, and achieves comparable delivery ratio at the same time.

  12. Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks

    PubMed Central

    Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi

    2009-01-01

    In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) which use carrier sense multiple access (CSMA) with collision avoidance (CA) as medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol through a finite state machine and takes into account the network adjacency matrix, depending on the transmit power distribution and determining the network connectivity. It will be then shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy, which guarantees minimal delay, requires the maximization of network connectivity, which can be equivalently interpreted as the maximization of the number of non-zero entries of the adjacency matrix. The obtained theoretical results are confirmed by simulations for unslotted Zigbee WSNs. PMID:22346705
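    To illustrate how a transmit power distribution determines the network adjacency matrix, the sketch below builds a binary adjacency matrix from node positions, per-node transmit powers, and a simple log-distance path-loss model, then counts the non-zero entries as a connectivity proxy. The path-loss constants, receiver sensitivity, and node layout are arbitrary assumptions, and the convex optimization itself is not reproduced.

```python
import numpy as np

def adjacency_from_powers(positions, tx_power_dbm, sensitivity_dbm=-90.0, ple=3.0):
    """Binary adjacency matrix: link i->j exists if the received power under a
    simple log-distance path-loss model exceeds the receiver sensitivity."""
    n = len(positions)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(np.array(positions[i]) - np.array(positions[j]))
            path_loss_db = 40.0 + 10.0 * ple * np.log10(max(d, 1.0))  # illustrative model
            if tx_power_dbm[i] - path_loss_db >= sensitivity_dbm:
                A[i, j] = 1
    return A

positions = [(0, 0), (30, 0), (60, 10), (90, 5)]
A = adjacency_from_powers(positions, tx_power_dbm=[0.0, 0.0, 5.0, 0.0])
print(A, "non-zero entries:", A.sum())
```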

  13. Sigma Routing Metric for RPL Protocol

    PubMed Central

    Rojas, Aldo; Fernandez, Luis

    2018-01-01

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption. PMID:29690524

  14. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  15. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  16. European Consensus Guidelines on the Management of Respiratory Distress Syndrome - 2016 Update.

    PubMed

    Sweet, David G; Carnielli, Virgilio; Greisen, Gorm; Hallman, Mikko; Ozek, Eren; Plavka, Richard; Saugstad, Ola Didrik; Simeoni, Umberto; Speer, Christian P; Vento, Máximo; Visser, Gerard H A; Halliday, Henry L

    2017-01-01

    Advances in the management of respiratory distress syndrome (RDS) ensure that clinicians must continue to revise current practice. We report the third update of the European Guidelines for the Management of RDS by a European panel of expert neonatologists including input from an expert perinatal obstetrician based on available literature up to the beginning of 2016. Optimizing the outcome for babies with RDS includes consideration of when to use antenatal steroids, and good obstetric practice includes methods of predicting the risk of preterm delivery and also consideration of whether transfer to a perinatal centre is necessary and safe. Methods for optimal delivery room management have become more evidence based, and protocols for lung protection, including initiation of continuous positive airway pressure and titration of oxygen, should be implemented from soon after birth. Surfactant replacement therapy is a crucial part of the management of RDS, and newer protocols for surfactant administration are aimed at avoiding exposure to mechanical ventilation, and there is more evidence of differences among various surfactants in clinical use. Newer methods of maintaining babies on non-invasive respiratory support have been developed and offer potential for greater comfort and less chronic lung disease. As technology for delivering mechanical ventilation improves, the risk of causing lung injury should decrease although minimizing the time spent on mechanical ventilation using caffeine and if necessary postnatal steroids are also important considerations. Protocols for optimizing the general care of infants with RDS are also essential with good temperature control, careful fluid and nutritional management, maintenance of perfusion and judicious use of antibiotics all being important determinants of best outcome. © 2016 S. Karger AG, Basel.

  17. A Qualitative Analysis of an Advanced Practice Nurse-Directed Transitional Care Model Intervention

    ERIC Educational Resources Information Center

    Bradway, Christine; Trotta, Rebecca; Bixby, M. Brian; McPartland, Ellen; Wollman, M. Catherine; Kapustka, Heidi; McCauley, Kathleen; Naylor, Mary D.

    2012-01-01

    Purpose: The purpose of this study was to describe barriers and facilitators to implementing a transitional care intervention for cognitively impaired older adults and their caregivers, led by advanced practice nurses (APNs). Design and Methods: APNs implemented an evidence-based protocol to optimize transitions from hospital to home. An…

  18. A multipath routing protocol based on clustering and ant colony optimization for wireless sensor networks.

    PubMed

    Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo

    2010-01-01

    For monitoring burst events in reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce energy consumption. An important constraint in WSNs is their limited power supply, and therefore metrics such as the energy consumed by communication among nodes, residual energy, and path length were treated as important criteria when designing routing in MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to parameters such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and the sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on several path metrics, such as energy consumption. The simulation results show that MRP can prolong the network lifetime, balance energy consumption among nodes, and effectively reduce the average energy consumption.
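
    As a concrete illustration of the ACO-style route choice described above, the sketch below picks one of several discovered CH-to-sink paths with probability proportional to a pheromone/heuristic weighting. The field names, weighting rule, and parameter values are illustrative assumptions, not the MRP specification.

    ```python
    import random

    def choose_path(paths, alpha=1.0, beta=2.0):
        """Pick a path with probability proportional to
        pheromone**alpha * heuristic**beta (classic ACO selection rule)."""
        weights = [
            (p["pheromone"] ** alpha) * ((p["residual"] / p["hops"]) ** beta)
            for p in paths
        ]
        r = random.uniform(0.0, sum(weights))
        acc = 0.0
        for path, w in zip(paths, weights):
            acc += w
            if r <= acc:
                return path
        return paths[-1]

    # Three hypothetical multipaths between the cluster head and the sink.
    candidates = [
        {"id": 1, "pheromone": 0.8, "residual": 0.90, "hops": 4},
        {"id": 2, "pheromone": 0.5, "residual": 0.70, "hops": 3},
        {"id": 3, "pheromone": 0.3, "residual": 0.95, "hops": 5},
    ]
    print(choose_path(candidates)["id"])
    ```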

  19. Use of NTRIP for optimizing the decoding algorithm for real-time data streams.

    PubMed

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-10-10

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as Continuously Operating Reference Stations (CORS), the Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there is an increasing need for ground monitoring of the BeiDou system and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm for NTRIP Client data streams and the user authentication strategies of the NTRIP Caster. The proposed method greatly enhances handling efficiency and significantly reduces data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP implementation. Meanwhile, a transcoding method is proposed to facilitate data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers within the BeiDou Navigation Satellite System, which was indigenously developed by China.
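
    For orientation, an NTRIP v1 client request is a small HTTP/1.0-style exchange over TCP. The sketch below shows only this generic request (host, port, mountpoint, and credentials are placeholders); it is not the optimized decoding algorithm or the BINEX-to-RTCM transcoder proposed in the paper.

    ```python
    import base64
    import socket

    def ntrip_request(host, port, mountpoint, user, password):
        """Connect to an NTRIP caster, request one mountpoint, and return the
        start of the reply (typically an 'ICY 200 OK' header, then RTCM bytes)."""
        auth = base64.b64encode(f"{user}:{password}".encode()).decode()
        request = (
            f"GET /{mountpoint} HTTP/1.0\r\n"
            f"User-Agent: NTRIP ExampleClient/0.1\r\n"
            f"Authorization: Basic {auth}\r\n"
            "\r\n"
        )
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(request.encode())
            return sock.recv(4096)

    # Placeholder values only; a real caster address and account are required.
    # print(ntrip_request("caster.example.org", 2101, "MOUNT1", "user", "pass"))
    ```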

  20. Application of EARL (ResEARch 4 Life®) protocols for [18F]FDG-PET/CT clinical and research studies. A roadmap towards exact recovery coefficient

    NASA Astrophysics Data System (ADS)

    Balcerzyk, Marcin; Fernández-López, Rosa; Parrado-Gallego, Ángel; Pachón-Garrudo, Víctor Manuel; Chavero-Royan, José; Hevilla, Juan; Jiménez-Ortega, Elisa; Leal, Antonio

    2017-11-01

    Tumour uptake is a critical quantity in quantitative [18F]FDG-PET/CT ([18F]fluorodeoxyglucose) scans, for example for dose prescription in radiotherapy and oncology. The quantification is highly dependent on the acquisition and reconstruction protocol, especially for low-activity tumours. While adjusting the acquisition and reconstruction protocols available on our Siemens Biograph mCT scanner to the EARL (ResEARch 4 Life®) [18F]FDG-PET/CT accreditation requirements, we developed reconstruction protocols, to be used in PET-based radiotherapy planning, that reduce inter-/intra-institute variability in Standard Uptake Value (SUV) results and bring the Recovery Coefficient as close to 1 as possible for the Image Quality NEMA 2007 phantom. Primary and secondary tumours from two patients were assessed by four independent evaluators. The influence of the reconstruction protocols on the tumours' clinical assessment is presented. We propose an improvement route for EARL-accredited protocols so that they may be developed in classes that take advantage of each scanner's capabilities. The application of the optimized reconstruction protocol eliminates the need for partial-volume corrections.

  1. Energy Consumption Research of Mobile Data Collection Protocol for Underwater Nodes Using an USV.

    PubMed

    Lv, Zhichao; Zhang, Jie; Jin, Jiucai; Li, Qi; Gao, Baoru

    2018-04-16

    The Unmanned Surface Vehicle (USV) integrated with an acoustic modem is a novel mobile vehicle for data collection, with advantages in terms of mobility, efficiency, and collection cost. In the data collection scenario, the USV is controlled autonomously along a planned trajectory and the data of underwater nodes are dynamically collected. In order to improve the efficiency of data collection and extend the life of the underwater nodes, a mobile data collection protocol for underwater nodes using the USV is proposed. In the protocol, a stop-and-wait ARQ transmission mechanism is adopted, where the duty cycle is designed around the ratio between the sleep mode and the detection mode, and the transmission ratio is defined by the duty cycle, the wake-up signal cycles, and the USV's speed. Based on the protocol, an evaluation index for energy consumption is constructed from the duty cycle and the transmission ratio. The energy consumption of the protocol is simulated and analyzed using mobile communication experiment data from the USV, taking into consideration the USV's speed, data sequence length, and duty cycle. Optimized protocol parameters are identified, demonstrating the proposed protocol's feasibility and effectiveness.
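
    The exact expressions for the duty cycle and transmission ratio are not reproduced in the record above, so the sketch below only illustrates one plausible parameterization of the trade-off it describes; every function name and formula here is an assumption for illustration.

    ```python
    def duty_cycle(t_detect, t_sleep):
        """Assumed form: fraction of each cycle a node spends in detection mode."""
        return t_detect / (t_detect + t_sleep)

    def contact_time(comm_range_m, usv_speed_mps):
        """Time the USV spends within acoustic range of a node on a straight
        pass through the node's position (simple geometric assumption)."""
        return 2.0 * comm_range_m / usv_speed_mps

    def expected_wakeups(t_detect, t_sleep, wakeup_period_s, comm_range_m, speed):
        """Rough number of wake-up signal cycles a node can hear per USV pass."""
        return (duty_cycle(t_detect, t_sleep)
                * contact_time(comm_range_m, speed) / wakeup_period_s)

    print(expected_wakeups(t_detect=0.5, t_sleep=4.5, wakeup_period_s=1.0,
                           comm_range_m=500.0, speed=2.0))
    ```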

  2. A General Small-Scale Reactor To Enable Standardization and Acceleration of Photocatalytic Reactions.

    PubMed

    Le, Chi Chip; Wismer, Michael K; Shi, Zhi-Cai; Zhang, Rui; Conway, Donald V; Li, Guoqing; Vachal, Petr; Davies, Ian W; MacMillan, David W C

    2017-06-28

    Photocatalysis for organic synthesis has experienced an exponential growth in the past 10 years. However, the variety of experimental procedures that have been reported to perform photon-based catalyst excitation has hampered the establishment of general protocols to convert visible light into chemical energy. To address this issue, we have designed an integrated photoreactor for enhanced photon capture and catalyst excitation. Moreover, the evaluation of this new reactor in eight photocatalytic transformations that are widely employed in medicinal chemistry settings has confirmed significant performance advantages of this optimized design while enabling a standardized protocol.

  3. The deployment of routing protocols in distributed control plane of SDN.

    PubMed

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software defined network (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we examined the distributed characteristics of the Kandoo architecture and improved and optimized Kandoo's two levels of controllers, drawing inspiration from RCP (the routing control platform). Finally, we analyzed the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to traditional routing strategies.

  4. Deterministic and unambiguous dense coding

    NASA Astrophysics Data System (ADS)

    Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.

    2006-04-01

    Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1 − τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ≤ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ≤ D, assuming τ_x > 0 for a set of D̄·D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states.

  5. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half of, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
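
    A minimal sketch of how such Nyquist-based starting values can be computed, with the empirical scaling reported above applied afterwards; the geometry is simplified and the numbers are placeholders.

    ```python
    import math

    def nyquist_steps(resolution_mm, fov_radius_mm):
        """Starting values: sample at least twice per resolution element,
        both axially and along the arc at the edge of the field of view."""
        axial_step = resolution_mm / 2.0
        angular_step_deg = math.degrees((resolution_mm / 2.0) / fov_radius_mm)
        return axial_step, angular_step_deg

    def reported_optimal_steps(resolution_mm, fov_radius_mm):
        """Scaling reported in the abstract: half the Nyquist axial step,
        roughly twice the Nyquist angular step."""
        ax, ang = nyquist_steps(resolution_mm, fov_radius_mm)
        return 0.5 * ax, 2.0 * ang

    print(reported_optimal_steps(resolution_mm=1.5, fov_radius_mm=30.0))
    ```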

  6. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols that aid plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement proved preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.

  7. Osteogenic differentiation of equine adipose tissue derived mesenchymal stem cells using CaCl2.

    PubMed

    Elashry, Mohamed I; Baulig, Nadine; Heimann, Manuela; Bernhardt, Caroline; Wenisch, Sabine; Arnhold, Stefan

    2018-04-01

    Adipose tissue derived mesenchymal stem cells (ASCs) may be used to repair bone defects after osteogenic differentiation. In this study we aimed to optimize osteogenic differentiation of equine ASCs using various concentrations of CaCl2 in comparison to the standard osteogenic protocol. ASCs were isolated from subcutaneous adipose tissue of mixed-breed horses. The osteogenic induction protocols were (1) the standard osteogenic medium (OM), composed of dexamethasone, ascorbic acid and β-glycerol phosphate, and (2) a CaCl2-based protocol using 3, 5 and 7.5 mM CaCl2. Differentiation and proliferation were evaluated at 7, 10, 14 and 21 days post-induction using alizarin red staining (ARS), which detects matrix calcification. Semi-quantification of cell protein content, ARS and alkaline phosphatase (ALP) activity was performed using an ELISA reader. Quantification of the transcription levels of the common osteogenic markers alkaline phosphatase (ALP) and osteopontin (OP) was performed using RT-qPCR. In the presence of CaCl2, a concentration-dependent effect on the osteogenic differentiation capacity was evident from the ARS evaluation and OP gene expression. We provide evidence that 5 and 7.5 mM CaCl2 enhance osteogenic differentiation compared to the OM protocol. Although there was a clear commitment of ASCs to the osteogenic fate in the presence of 5 and 7.5 mM CaCl2, cell proliferation was increased compared to OM. We report that an optimized CaCl2 protocol reliably drives ASC osteogenesis while conserving proliferation capacity. Thus, these protocols provide a platform for using ASCs as a cell source in bone tissue engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Consensus for multi-agent systems with time-varying input delays

    NASA Astrophysics Data System (ADS)

    Yuan, Chengzhi; Wu, Fen

    2017-10-01

    This paper addresses the consensus control problem for linear multi-agent systems subject to uniform time-varying input delays and external disturbance. A novel state-feedback consensus protocol is proposed under the integral quadratic constraint (IQC) framework, which utilises not only the relative state information from neighbouring agents but also real-time information about the delays, by means of the dynamic IQC system states used for feedback control. Based on this new consensus protocol, the associated IQC-based control synthesis conditions are established and fully characterised as linear matrix inequalities (LMIs), such that a consensus control solution with optimal disturbance attenuation performance can be synthesised efficiently via convex optimisation. A numerical example is used to demonstrate the proposed approach.
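
    For context, a baseline static state-feedback consensus law over a communication graph has the standard form below; the protocol in the paper additionally feeds back the dynamic IQC system states, whose structure is not reproduced here, so the gain K_eta and filter state eta_i are assumptions for illustration.

    ```latex
    % x_i: state of agent i, K: static feedback gain, \mathcal{N}_i: neighbours
    % of agent i, \eta_i: dynamic IQC filter state (structure assumed here).
    u_i(t) \;=\; K \sum_{j \in \mathcal{N}_i} \bigl( x_j(t) - x_i(t) \bigr)
            \;+\; K_{\eta}\,\eta_i(t).
    ```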

  9. Cross-layer cluster-based energy-efficient protocol for wireless sensor networks.

    PubMed

    Mammu, Aboobeker Sidhik Koyamparambil; Hernandez-Jayo, Unai; Sainz, Nekane; de la Iglesia, Idoia

    2015-04-09

    Recent developments in electronics and wireless communications have enabled the improvement of low-power and low-cost wireless sensor networks (WSNs). One of the most important challenges in WSNs is to increase the network lifetime despite the limited energy capacity of the network nodes. Another major challenge in WSNs is the hot spots that emerge as locations under heavy traffic load. Nodes in such areas quickly drain energy resources, leading to disconnection in network services. In such an environment, a cross-layer cluster-based energy-efficient algorithm (CCBE) can prolong the network lifetime and improve energy efficiency. CCBE is based on clustering the nodes into hexagonal structures. A hexagonal cluster consists of cluster members (CMs) and a cluster head (CH). The CHs are selected from the CMs based on their proximity to the optimal CH distance and their residual energy. Additionally, the optimal CH distance that corresponds to optimal energy consumption is derived. To balance the energy consumption and the traffic load in the network, the CH role is rotated among all CMs. In WSNs, energy is mostly consumed during transmission and reception. Transmission collisions can further decrease the energy efficiency. These collisions can be avoided by using a contention-free protocol during the transmission period. Additionally, the CH allocates slots to the CMs based on their residual energy to increase sleep time. Furthermore, the energy consumption of the CH can be further reduced by data aggregation. In this paper, we propose a data aggregation level based on the residual energy of the CH and a cost-aware decision scheme for the fusion of data. Performance results show that the CCBE scheme performs better in terms of network lifetime, energy consumption and throughput compared to low-energy adaptive clustering hierarchy (LEACH) and hybrid energy-efficient distributed clustering (HEED).
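
    A minimal sketch of the kind of cluster-head scoring the abstract describes, combining residual energy with closeness to the derived optimal CH distance; the weights, field names, and scoring rule are illustrative assumptions rather than the CCBE specification.

    ```python
    def ch_score(node, d_opt, w_energy=0.6, w_distance=0.4):
        """Higher score = better cluster-head candidate (assumed scoring rule).
        node['residual']: fraction of initial energy remaining.
        node['d_to_center']: distance to the hexagon centre (same unit as d_opt)."""
        closeness = 1.0 / (1.0 + abs(node["d_to_center"] - d_opt))
        return w_energy * node["residual"] + w_distance * closeness

    members = [
        {"id": "a", "residual": 0.80, "d_to_center": 12.0},
        {"id": "b", "residual": 0.95, "d_to_center": 25.0},
        {"id": "c", "residual": 0.60, "d_to_center": 10.0},
    ]
    print(max(members, key=lambda n: ch_score(n, d_opt=10.0))["id"])
    ```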

  10. Somatic hybrid plants of Nicotiana × sanderae (+) N. debneyi with fungal resistance to Peronospora tabacina

    PubMed Central

    Patel, Deval; Power, J. Brian; Anthony, Paul; Badakshi, Farah; (Pat) Heslop-Harrison, J. S.; Davey, Michael R.

    2011-01-01

    Background and Aims The genus Nicotiana includes diploid and tetraploid species, with complementary ecological, agronomic and commercial characteristics. The species are of economic value for tobacco, as ornamentals, and for secondary plant-product biosynthesis. They show substantial differences in disease resistance because of their range of secondary products. In the last decade, sexual hybridization and transgenic technologies have tended to eclipse protoplast fusion for gene transfer. Somatic hybridization was exploited in the present investigation to generate a new hybrid combination involving two sexually incompatible tetraploid species. The somatic hybrid plants were characterized using molecular, molecular cytogenetic and phenotypic approaches. Methods Mesophyll protoplasts of the wild fungus-resistant species N. debneyi (2n = 4x = 48) were electrofused with those of the ornamental interspecific sexual hybrid N. × sanderae (2n = 2x = 18). From 1570 protoplast-derived cell colonies selected manually in five experiments, 580 tissues were sub-cultured to shoot regeneration medium. Regenerated plants were transferred to the glasshouse and screened for their morphology, chromosomal composition and disease resistance. Key Results Eighty-nine regenerated plants flowered; five were confirmed as somatic hybrids by their intermediate morphology compared with parental plants, cytological constitution and DNA-marker analysis. Somatic hybrid plants had chromosome complements of 60 or 62. Chromosomes were identified to parental genomes by genomic in situ hybridization and included all 18 chromosomes from N. × sanderae, and 42 or 44 chromosomes from N. debneyi. Four or six chromosomes of one ancestral genome of N. debneyi were eliminated during culture of electrofusion-treated protoplasts and plant regeneration. Both chloroplasts and mitochondria of the somatic hybrid plants were probably derived from N. debneyi. All somatic hybrid plants were fertile. In contrast to parental plants of N. × sanderae, the seed progeny of somatic hybrid plants were resistant to infection by Peronospora tabacina, a trait introgressed from the wild parent, N. debneyi. Conclusions Sexual incompatibility between N. × sanderae and N. debneyi was circumvented by somatic hybridization involving protoplast fusion. Asymmetrical nuclear hybridity was seen in the hybrids with loss of chromosomes, although importantly, somatic hybrids were fertile and stable. Expression of fungal resistance makes these somatic hybrids extremely valuable germplasm in future breeding programmes in ornamental tobacco. PMID:21880657

  11. Inhibition of phenylpropanoid biosynthesis increases cell wall digestibility, protoplast isolation, and facilitates sustained cell division in American elm (Ulmus americana).

    PubMed

    Jones, A Maxwell P; Chattopadhyay, Abhishek; Shukla, Mukund; Zoń, Jerzy; Saxena, Praveen K

    2012-05-30

    Protoplast technologies offer unique opportunities for fundamental research and to develop novel germplasm through somatic hybridization, organelle transfer, protoclonal variation, and direct insertion of DNA. Applying protoplast technologies to develop Dutch elm disease resistant American elms (Ulmus americana L.) was proposed over 30 years ago, but has not been achieved. A primary factor restricting protoplast technology to American elm is the resistance of the cell walls to enzymatic degradation and a long lag phase prior to cell wall re-synthesis and cell division. This study suggests that resistance to enzymatic degradation in American elm was due to water soluble phenylpropanoids. Incubating tobacco (Nicotiana tabacum L.) leaf tissue, an easily digestible species, in aqueous elm extract inhibits cell wall digestion in a dose dependent manner. This can be mimicked by p-coumaric or ferulic acid, phenylpropanoids known to re-enforce cell walls. Culturing American elm tissue in the presence of 2-aminoindane-2-phosphonic acid (AIP; 10-150 μM), an inhibitor of phenylalanine ammonia lyase (PAL), reduced flavonoid content, decreased tissue browning, and increased isolation rates significantly from 11.8% (±3.27) in controls to 65.3% (±4.60). Protoplasts isolated from callus grown in 100 μM AIP developed cell walls by day 2, had a division rate of 28.5% (±3.59) by day 6, and proliferated into callus by day 14. Heterokaryons were successfully produced using electrofusion and fused protoplasts remained viable when embedded in agarose. This study describes a novel approach of modifying phenylpropanoid biosynthesis to facilitate efficient protoplast isolation which has historically been problematic for American elm. This isolation system has facilitated recovery of viable protoplasts capable of rapid cell wall re-synthesis and sustained cell division to form callus. Further, isolated protoplasts survived electrofusion and viable heterokaryons were produced. Together, these results provide the first evidence of sustained cell division, callus regeneration, and potential application of somatic cell fusion in American elm, suggesting that this source of protoplasts may be ideal for genetic manipulation of this species. The technological advance made with American elm in this study has potential implications in other woody species for fundamental and applied research which require availability of viable protoplasts.

  12. Inhibition of phenylpropanoid biosynthesis increases cell wall digestibility, protoplast isolation, and facilitates sustained cell division in American elm (Ulmus americana)

    PubMed Central

    2012-01-01

    Background Protoplast technologies offer unique opportunities for fundamental research and to develop novel germplasm through somatic hybridization, organelle transfer, protoclonal variation, and direct insertion of DNA. Applying protoplast technologies to develop Dutch elm disease resistant American elms (Ulmus americana L.) was proposed over 30 years ago, but has not been achieved. A primary factor restricting protoplast technology to American elm is the resistance of the cell walls to enzymatic degradation and a long lag phase prior to cell wall re-synthesis and cell division. Results This study suggests that resistance to enzymatic degradation in American elm was due to water soluble phenylpropanoids. Incubating tobacco (Nicotiana tabacum L.) leaf tissue, an easily digestible species, in aqueous elm extract inhibits cell wall digestion in a dose dependent manner. This can be mimicked by p-coumaric or ferulic acid, phenylpropanoids known to re-enforce cell walls. Culturing American elm tissue in the presence of 2-aminoindane-2-phosphonic acid (AIP; 10-150 μM), an inhibitor of phenylalanine ammonia lyase (PAL), reduced flavonoid content, decreased tissue browning, and increased isolation rates significantly from 11.8% (±3.27) in controls to 65.3% (±4.60). Protoplasts isolated from callus grown in 100 μM AIP developed cell walls by day 2, had a division rate of 28.5% (±3.59) by day 6, and proliferated into callus by day 14. Heterokaryons were successfully produced using electrofusion and fused protoplasts remained viable when embedded in agarose. Conclusions This study describes a novel approach of modifying phenylpropanoid biosynthesis to facilitate efficient protoplast isolation which has historically been problematic for American elm. This isolation system has facilitated recovery of viable protoplasts capable of rapid cell wall re-synthesis and sustained cell division to form callus. Further, isolated protoplasts survived electrofusion and viable heterokaryons were produced. Together, these results provide the first evidence of sustained cell division, callus regeneration, and potential application of somatic cell fusion in American elm, suggesting that this source of protoplasts may be ideal for genetic manipulation of this species. The technological advance made with American elm in this study has potential implications in other woody species for fundamental and applied research which require availability of viable protoplasts. PMID:22646730

  13. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology that enables data gathering and identification of any tagged object. The collisions produced during wireless communication lead to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the frame length is based on the quantity of tags so as to yield optimal efficiency. The optimal splitting method shortens idle slots by using an optimal value for the splitting level M_opt (where M > 2), varying slot sizes to minimize the identification time spent on idle slots. The proposed algorithm offers the advantage of avoiding cumbersome estimation of the number of tags, and the number of tags has no effect on its performance efficiency. Our experimental results show that, using the proposed algorithm, the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical gain in efficiency of 0.032 over the baseline system efficiency of 0.441, thus outperforming both dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
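
    The model behind dynamic frame adjustment in frame-slotted ALOHA is standard: with n tags and frame length L, the expected throughput (singleton slots per slot) peaks near L ≈ n at about 1/e ≈ 0.368. The sketch below computes that textbook quantity for context; it does not reproduce the paper's optimal splitting of idle slots or its reported 0.441 baseline.

    ```python
    def fsa_efficiency(n_tags, frame_len):
        """Expected fraction of slots containing exactly one tag reply when
        each of n_tags tags picks one of frame_len slots uniformly at random."""
        p = 1.0 / frame_len
        return n_tags * p * (1.0 - p) ** (n_tags - 1)

    def best_frame(n_tags, candidates=(16, 32, 64, 128, 256, 512)):
        """Dynamic frame adjustment: choose the candidate frame length that
        maximizes expected efficiency for the current tag-count estimate."""
        return max(candidates, key=lambda L: fsa_efficiency(n_tags, L))

    for n in (50, 150, 450):
        L = best_frame(n)
        print(n, L, round(fsa_efficiency(n, L), 3))
    ```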

  14. Shortcuts to adiabaticity by counterdiabatic driving for trapped-ion displacement in phase space

    PubMed Central

    An, Shuoming; Lv, Dingshun; del Campo, Adolfo; Kim, Kihwan

    2016-01-01

    The application of adiabatic protocols in quantum technologies is severely limited by environmental sources of noise and decoherence. Shortcuts to adiabaticity by counterdiabatic driving constitute a powerful alternative that speed up time-evolution while mimicking adiabatic dynamics. Here we report the experimental implementation of counterdiabatic driving in a continuous variable system, a shortcut to the adiabatic transport of a trapped ion in phase space. The resulting dynamics is equivalent to a ‘fast-motion video' of the adiabatic trajectory. The robustness of this protocol is shown to surpass that of competing schemes based on classical local controls and Fourier optimization methods. Our results demonstrate that shortcuts to adiabaticity provide a robust speedup of quantum protocols of wide applicability in quantum technologies. PMID:27669897
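
    For context, counterdiabatic driving supplements a reference Hamiltonian H_0(t), with instantaneous eigenstates |n(t)⟩, by the standard auxiliary term below so that the system follows the adiabatic manifold exactly at finite driving speed:

    ```latex
    H_{\mathrm{CD}}(t) \;=\; H_0(t) \;+\;
      i\hbar \sum_{n} \Bigl( \lvert \partial_t n \rangle\langle n \rvert
        \;-\; \langle n \vert \partial_t n \rangle\,
              \lvert n \rangle\langle n \rvert \Bigr).
    ```

    For transport of a harmonic trap with moving centre x_0(t), this auxiliary term reduces to a momentum coupling proportional to the transport velocity, p\,\dot{x}_0(t), which is what makes a continuous-variable implementation such as the trapped-ion experiment above feasible.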

  15. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    NASA Astrophysics Data System (ADS)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving libraries' digital content findability on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with a multi-tier architecture can help people easily access the website on the Gorontalo Province library server and supports access to digital collections, subscription databases, and library catalogs in each regency or city library in Gorontalo Province.

  16. Probabilistic Analysis of Hierarchical Cluster Protocols for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kaj, Ingemar

    Wireless sensor networks are designed to extract data from the deployment environment and combine sensing, data processing and wireless communication to provide useful information for the network users. Hundreds or thousands of small embedded units, which operate under low-energy supply and with limited access to central network control, rely on interconnecting protocols to coordinate data aggregation and transmission. Energy efficiency is crucial and it has been proposed that cluster based and distributed architectures such as LEACH are particularly suitable. We analyse the random cluster hierarchy in this protocol and provide a solution for low-energy and limited-loss optimization. Moreover, we extend these results to a multi-level version of LEACH, where clusters of nodes again self-organize to form clusters of clusters, and so on.
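
    The random cluster hierarchy analysed here builds on LEACH's round-based cluster-head election; for context, the textbook election threshold is shown below (this is the standard LEACH rule, not a result of the chapter itself).

    ```python
    import random

    def leach_threshold(p, round_index, served_recently):
        """LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for nodes that have
        not served as cluster head in the current 1/p-round cycle, else 0."""
        if served_recently:
            return 0.0
        return p / (1.0 - p * (round_index % int(round(1.0 / p))))

    def elects_itself(p, round_index, served_recently):
        return random.random() < leach_threshold(p, round_index, served_recently)

    # With p = 0.05, roughly 5-6% of eligible nodes become cluster heads in round 3.
    print(sum(elects_itself(0.05, 3, False) for _ in range(1000)))
    ```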

  17. Variability of United States Online Rehabilitation Protocols for Proximal Hamstring Tendon Repair.

    PubMed

    Lightsey, Harry M; Kantrowitz, David E; Swindell, Hasani W; Trofa, David P; Ahmad, Christopher S; Lynch, T Sean

    2018-02-01

    The optimal postoperative rehabilitation protocol following repair of complete proximal hamstring tendon ruptures is the subject of ongoing investigation, with a need for more standardized regimens and evidence-based modalities. To assess the variability across proximal hamstring tendon repair rehabilitation protocols published online by United States (US) orthopaedic teaching programs. Cross-sectional study. Online proximal hamstring physical therapy protocols from US academic orthopaedic programs were reviewed. A web-based search using the search term complete proximal hamstring repair rehabilitation protocol provided an additional 14 protocols. A comprehensive scoring rubric was developed after review of all protocols and was used to assess each protocol for both the presence of various rehabilitation components and the point at which those components were introduced. Of 50 rehabilitation protocols identified, 35 satisfied inclusion criteria and were analyzed. Twenty-five protocols (71%) recommended immediate postoperative bracing: 12 (34%) prescribed knee bracing, 8 (23%) prescribed hip bracing, and 5 (14%) did not specify the type of brace recommended. Fourteen protocols (40%) advised immediate nonweightbearing with crutches, while 16 protocols (46%) permitted immediate toe-touch weightbearing. Advancement to full weightbearing was allowed at a mean of 7.1 weeks (range, 4-12 weeks). Most protocols (80%) recommended gentle knee and hip passive range of motion and active range of motion, starting at a mean 1.4 weeks (range, 0-3 weeks) and 4.0 weeks (range, 0-6 weeks), respectively. However, only 6 protocols (17%) provided specific time points to initiate full hip and knee range of motion: a mean 8.0 weeks (range, 4-12 weeks) and 7.8 weeks (range, 0-12 weeks), respectively. Considerable variability was noted in the inclusion and timing of strengthening, stretching, proprioception, and cardiovascular exercises. Fifteen protocols (43%) required completion of specific return-to-sport criteria before resuming training. Marked variability is found in both the composition and timing of rehabilitation components across the various complete proximal hamstring repair rehabilitation protocols published online. This finding mirrors the variability of proposed rehabilitation protocols in the professional literature and represents an opportunity to improve patient care.

  18. Optimization of dose and image quality in adult and pediatric computed tomography scans

    NASA Astrophysics Data System (ADS)

    Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin

    2017-11-01

    An exploration to maximize CT image quality and reduce radiation dose was conducted while controlling for multiple factors. The kVp, mAs, and iterative reconstruction (IR) settings affect CT image quality and the absorbed radiation dose. Optimal protocols (kVp, mAs, IR) are derived from a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, CNR of low-contrast materials and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and line pairs in ACR phantom modules 1 and 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs; however, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is not reliable as an independent image-quality parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning as it markedly improves image quality without affecting radiation dose.
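
    The abstract does not quote its FOM formula; a commonly used dose-efficiency definition, assumed here for illustration only, is CNR squared per unit CTDIvol:

    ```python
    def figure_of_merit(cnr, ctdi_vol_mGy):
        """Assumed dose-efficiency FOM = CNR**2 / CTDIvol.
        Higher means more image quality per unit of delivered dose."""
        return cnr ** 2 / ctdi_vol_mGy

    # Hypothetical comparison of two protocols (all numbers are made up).
    protocols = {
        "100 kVp / 200 mAs": {"cnr": 1.10, "ctdi": 12.0},
        "120 kVp / 200 mAs": {"cnr": 1.25, "ctdi": 18.5},
    }
    for name, p in protocols.items():
        print(name, round(figure_of_merit(p["cnr"], p["ctdi"]), 4))
    ```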

  19. Systems oncology: towards patient-specific treatment regimes informed by multiscale mathematical modelling.

    PubMed

    Powathil, Gibin G; Swat, Maciej; Chaplain, Mark A J

    2015-02-01

    The multiscale complexity of cancer as a disease necessitates a corresponding multiscale modelling approach to produce truly predictive mathematical models capable of improving existing treatment protocols. To capture all the dynamics of solid tumour growth and its progression, mathematical modellers need to couple biological processes occurring at various spatial and temporal scales (from genes to tissues). Because the effectiveness of cancer therapy is considerably affected by intracellular and extracellular heterogeneities as well as by dynamical changes in the tissue microenvironment, any model-based attempt to optimise existing protocols must consider these factors, ultimately leading to improved multimodal treatment regimes. By improving existing mathematical models of cancer and building new ones, modellers can play an important role in preventing the use of potentially sub-optimal treatment combinations. In this paper, we analyse a multiscale computational mathematical model for cancer growth and spread, incorporating the multiple effects of radiation therapy and chemotherapy on the patient survival probability, and implement the model using two different cell-based modelling techniques. We show that the insights provided by such multiscale modelling approaches can ultimately help in designing optimal patient-specific multi-modality treatment protocols that may increase patients' quality of life. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A modular method for the extraction of DNA and RNA, and the separation of DNA pools from diverse environmental sample types

    PubMed Central

    Lever, Mark A.; Torti, Andrea; Eickenbusch, Philip; Michaud, Alexander B.; Šantl-Temkiv, Tina; Jørgensen, Bo Barker

    2015-01-01

    A method for the extraction of nucleic acids from a wide range of environmental samples was developed. This method consists of several modules, which can be individually modified to maximize yields in extractions of DNA and RNA or separations of DNA pools. Modules were designed based on elaborate tests, in which permutations of all nucleic acid extraction steps were compared. The final modular protocol is suitable for extractions from igneous rock, air, water, and sediments. Sediments range from high-biomass, organic rich coastal samples to samples from the most oligotrophic region of the world's oceans and the deepest borehole ever studied by scientific ocean drilling. Extraction yields of DNA and RNA are higher than with widely used commercial kits, indicating an advantage to optimizing extraction procedures to match specific sample characteristics. The ability to separate soluble extracellular DNA pools without cell lysis from intracellular and particle-complexed DNA pools may enable new insights into the cycling and preservation of DNA in environmental samples in the future. A general protocol is outlined, along with recommendations for optimizing this general protocol for specific sample types and research goals. PMID:26042110

  1. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction

    NASA Astrophysics Data System (ADS)

    Leverrier, Anthony

    2017-05-01

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction, which exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_n as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  2. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction.

    PubMed

    Leverrier, Anthony

    2017-05-19

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction, that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_{n} as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead as at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  3. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  4. Drug delivery optimization through Bayesian networks.

    PubMed Central

    Bellazzi, R.

    1992-01-01

    This paper describes how Bayesian Networks can be used in combination with compartmental models to plan Recombinant Human Erythropoietin (r-HuEPO) delivery in the treatment of anemia of chronic uremic patients. Past measurements of hematocrit or hemoglobin concentration in a patient during the therapy can be exploited to adjust the parameters of a compartmental model of the erythropoiesis. This adaptive process allows more accurate patient-specific predictions, and hence a more rational dosage planning. We describe a drug delivery optimization protocol, based on our approach. Some results obtained on real data are presented. PMID:1482938

  5. Whole-body computed tomography in trauma patients: optimization of the patient scanning position significantly shortens examination time while maintaining diagnostic image quality.

    PubMed

    Hickethier, Tilman; Mammadov, Kamal; Baeßler, Bettina; Lichtenstein, Thorsten; Hinkelbein, Jochen; Smith, Lucy; Plum, Patrick Sven; Chon, Seung-Hun; Maintz, David; Chang, De-Hua

    2018-01-01

    The study was conducted to compare examination time and artifact vulnerability of whole-body computed tomographies (wbCTs) for trauma patients using conventional or optimized patient positioning. Examination time was measured in 100 patients scanned with conventional protocol (Group A: arms positioned alongside the body for head and neck imaging and over the head for trunk imaging) and 100 patients scanned with optimized protocol (Group B: arms flexed on a chest pillow without repositioning). Additionally, influence of two different scanning protocols on image quality in the most relevant body regions was assessed by two blinded readers. Total wbCT duration was about 35% or 3:46 min shorter in B than in A. Artifacts in aorta (27 vs 6%), liver (40 vs 8%) and spleen (27 vs 5%) occurred significantly more often in B than in A. No incident of non-diagnostic image quality was reported, and no significant differences for lungs and spine were found. An optimized wbCT positioning protocol for trauma patients allows a significant reduction of examination time while still maintaining diagnostic image quality.

  6. Rationally optimized cryopreservation of multiple mouse embryonic stem cell lines: I--Comparative fundamental cryobiology of multiple mouse embryonic stem cell lines and the implications for embryonic stem cell cryopreservation protocols.

    PubMed

    Kashuba, Corinna M; Benson, James D; Critser, John K

    2014-04-01

    The post-thaw recovery of mouse embryonic stem cells (mESCs) is often assumed to be adequate with current methods. However as this publication will show, this recovery of viable cells actually varies significantly by genetic background. Therefore there is a need to improve the efficiency and reduce the variability of current mESC cryopreservation methods. To address this need, we employed the principles of fundamental cryobiology to improve the cryopreservation protocol of four mESC lines from different genetic backgrounds (BALB/c, CBA, FVB, and 129R1 mESCs) through a comparative study characterizing the membrane permeability characteristics and membrane integrity osmotic tolerance limits of each cell line. In the companion paper, these values were used to predict optimal cryoprotectants, cooling rates, warming rates, and plunge temperatures, and then these predicted optimal protocols were validated against standard freezing protocols. Copyright © 2014 Elsevier Inc. All rights reserved.
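
    The membrane-permeability characterization referred to above is commonly carried out with the two-parameter transport formalism; written in a generic form (symbols and sign conventions are ours, not quoted from the paper), cell water volume V_w and intracellular moles of permeating solute N_s evolve as

    ```latex
    % L_p: hydraulic conductivity, P_s: solute permeability, A: membrane area,
    % R: gas constant, T: temperature, M: total osmolality and M_s: permeating-
    % solute osmolality (superscripts e/i = extracellular/intracellular).
    \frac{dV_w}{dt} \;=\; -L_p\,A\,R\,T\,\bigl(M^{e} - M^{i}\bigr),
    \qquad
    \frac{dN_s}{dt} \;=\; P_s\,A\,\bigl(M_s^{e} - M_s^{i}\bigr).
    ```

    Fitting L_p and P_s for each cell line, together with its osmotic tolerance limits, is what allows cryoprotectant choice, cooling rate, warming rate, and plunge temperature to be predicted line by line, as described for the companion paper.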

  7. Optimization of ultrasound-assisted extraction of charantin from Momordica charantia fruits using response surface methodology

    PubMed Central

    Ahamad, Javed; Amin, Saima; Mir, Showkat R.

    2015-01-01

    Background: Momordica charantia Linn. (Cucurbitaceae) fruits are well known for their beneficial effects in diabetes, which are often attributed to their bioactive component charantin. Objective: The aim of the present study was to develop and optimize an efficient protocol for the extraction of charantin from M. charantia fruits. Materials and Methods: Response surface methodology (RSM) was used for the optimization of ultrasound-assisted extraction (UAE) conditions. RSM was based on a three-level, three-variable Box-Behnken design (BBD), and the studied variables included solid-to-solvent ratio, extraction temperature, and extraction time. Results: The optimal conditions predicted by the BBD were UAE with methanol:water (80:20, v/v) at 46°C for 120 min with a solid-to-solvent ratio of 1:26 w/v, under which the yield of charantin was 3.18 mg/g. Confirmation trials under slightly adjusted conditions yielded 3.12 ± 0.14 mg/g of charantin on a dry-weight basis of fruits. The UAE result was also compared with the Soxhlet extraction method, and UAE was found to be 2.74-fold more efficient than Soxhlet extraction for extracting charantin. Conclusions: A facile UAE protocol for a high extraction yield of charantin was developed and validated. PMID:26681889
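
    A Box-Behnken design is typically analysed by fitting a full second-order polynomial in the coded factors by least squares and locating its stationary point; the sketch below does this with NumPy on placeholder data, since the study's measured yields are not reproduced here.

    ```python
    import numpy as np

    def quadratic_design_matrix(X):
        """Full second-order model in three coded factors: intercept, linear,
        two-way interaction, and squared terms (10 coefficients)."""
        x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
        return np.column_stack([
            np.ones(len(X)), x1, x2, x3,
            x1 * x2, x1 * x3, x2 * x3,
            x1 ** 2, x2 ** 2, x3 ** 2,
        ])

    # 15-run Box-Behnken design in coded units (-1, 0, +1) with made-up yields (mg/g).
    X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                  [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
                  [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                  [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    y = np.array([2.1, 2.4, 2.5, 2.9, 2.0, 2.6, 2.3, 2.8,
                  2.2, 2.5, 2.4, 2.7, 3.1, 3.0, 3.1])

    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    print(np.round(beta, 3))  # fitted coefficients; the stationary point gives the optimum
    ```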

  8. Triage and protocol recommendations for the parasitology laboratory based on an epidemiological investigation of parasite diagnostics in Ontario laboratories

    PubMed Central

    Maier, Allison; Krolik, Julia; Majury, Anna

    2014-01-01

    OBJECTIVES: A study was performed using a subset of Ontario laboratory parasitology data, with three objectives: to describe parasitic infections in Ontario; to identify risk factors for acquiring a parasitic infection using routinely collected information; and to use this information to assess current protocols for parasite testing in laboratories and, in turn, to propose alternatives to optimize the allocation of laboratory resources. METHODS: All parasitology records from January 4, 2010 to September 14, 2010 were reviewed descriptively and risk factor analyses were performed using information collected from requisitions. These results were used to develop preliminary alternative protocols, which considered high-throughput screening tests and inclusion/exclusion criteria for ova and parasite testing; these were then retrospectively analyzed with the dataset to determine appropriateness. RESULTS: Of the 29,260 records analyzed, 10% were multiple samples from single patients submitted on the same day, of which 98% had the same result. Three percent of all parasite tests were positive, with the most prevalent parasites being (in ascending order) Dientamoeba fragilis, Giardia lamblia, Cryptosporidium species and Entamoeba histolytica/dispar. Age and sex were found to be weak risk factors, while rural living was found to be a moderate risk factor for D fragilis, G lamblia and Cryptosporidium infections. The strongest risk factor was travel history, especially for nonendemic parasites. The retrospective analysis of six alternative protocols identified four that may be more efficient than current procedures. CONCLUSIONS: The present study demonstrated that current protocols may be redundant and can be optimized to target prevalent parasites and populations with high risk factors. PMID:25587292

  9. TH-E-209-02: Dose Monitoring and Protocol Optimization: The Pediatric Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.

    Radiation dose monitoring solutions have opened up new opportunities for medical physicists to be more involved in modern clinical radiology practices. In particular, with the help of comprehensive radiation dose data, data-driven protocol management and informed case follow-up are now feasible. Significant challenges remain, however, and the problems faced by medical physicists are highly heterogeneous. Imaging systems from multiple vendors and a wide range of vintages co-exist in the same department and employ data communication protocols that are not fully standardized or consistently implemented, making harmonization complex. Many different solutions for radiation dose monitoring have been implemented by imaging facilities over the past few years. Such systems are based on commercial software, home-grown IT solutions, manual PACS data dumping, etc., and diverse pathways can be used to bring the data to impact clinical practice. The speakers will share their experiences with creating or tailoring radiation dose monitoring/management systems and procedures over the past few years, which vary significantly in design and scope. Topics to cover: (1) fluoroscopic dose monitoring and high radiation event handling from a large academic hospital; (2) dose monitoring and protocol optimization in pediatric radiology; and (3) development of a home-grown IT solution and dose data analysis framework. Learning Objectives: Describe the scope and range of radiation dose monitoring and protocol management in a modern radiology practice. Review examples of data available from a variety of systems and how they are managed and conveyed. Reflect on the role of the physicist in radiation dose awareness.

  10. When tractography meets tracer injections: a systematic study of trends and variation sources of diffusion-based connectivity.

    PubMed

    Aydogan, Dogu Baran; Jacobs, Russell; Dulawa, Stephanie; Thompson, Summer L; Francois, Maite Christi; Toga, Arthur W; Dong, Hongwei; Knowles, James A; Shi, Yonggang

    2018-04-16

    Tractography is a powerful technique capable of non-invasively reconstructing the structural connections in the brain using diffusion MRI images, but the validation of tractograms is challenging due to lack of ground truth. Owing to recent developments in mapping the mouse brain connectome, high-resolution tracer injection-based axonal projection maps have been created and quickly adopted for the validation of tractography. Previous studies using tracer injections mainly focused on investigating the match in projections and optimal tractography protocols. Being a complicated technique, however, tractography relies on multiple stages of operations and parameters. These factors introduce large variabilities in tractograms, hindering the optimization of protocols and making the interpretation of results difficult. Based on this observation, in contrast to previous studies, in this work we focused on quantifying and ranking the amount of performance variation introduced by these factors. For this purpose, we performed over a million tractography experiments and studied the variability across different subjects, injections, anatomical constraints and tractography parameters. By using N-way ANOVA analysis, we show that all tractography parameters are significant and importantly performance variations with respect to the differences in subjects are comparable to the variations due to tractography parameters, which strongly underlines the importance of fully documenting the tractography protocols in scientific experiments. We also quantitatively show that inclusion of anatomical constraints is the most significant factor for improving tractography performance. Although this critical factor helps reduce false positives, our analysis indicates that anatomy-informed tractography still fails to capture a large portion of axonal projections.

  11. Space-based Networking Technology Developments in the Interplanetary Network Directorate Information Technology Program

    NASA Technical Reports Server (NTRS)

    Clare, Loren; Clement, B.; Gao, J.; Hutcherson, J.; Jennings, E.

    2006-01-01

    This paper describes the recent development of communications protocols, services, and associated tools targeted at reducing risk, reducing cost and increasing the efficiency of IND infrastructure and supported mission operations. The space-based networking technologies developed: a) provide differentiated quality of service (QoS) that gives precedence to traffic that users have selected as having the greatest importance and/or time-criticality; b) improve the total value of information to users through the use of QoS prioritization techniques; c) increase operational flexibility and improve command-response turnaround; d) enable a new class of networked and collaborative science missions; e) simplify application interfaces to communications services; and f) reduce risk and cost through a common object model and automated scheduling and communications protocols. Technologies are described in three general areas: communications scheduling, middleware, and protocols. A simulation environment was also developed, which provides a comprehensive, quantitative understanding of the technologies' performance within the overall, evolving architecture, as well as the ability to refine and optimize specific components.

  12. Sequence optimization to reduce velocity offsets in cardiovascular magnetic resonance volume flow quantification - A multi-vendor study

    PubMed Central

    2011-01-01

    Purpose Eddy current induced velocity offsets are of concern for accuracy in cardiovascular magnetic resonance (CMR) volume flow quantification. However, currently known theoretical aspects of eddy current behavior have not led to effective guidelines for the optimization of flow quantification sequences. This study is aimed at identifying correlations between protocol parameters and the resulting velocity error in clinical CMR flow measurements in a multi-vendor study. Methods Nine 1.5T scanners of three different types/vendors were studied. Measurements were performed on a large stationary phantom. Starting from a clinical breath-hold flow protocol, several protocol parameters were varied. Acquisitions were made in three clinically relevant orientations. Additionally, a time delay between the bipolar gradient and read-out, asymmetric versus symmetric velocity encoding, and gradient amplitude and slew rate were studied in adapted sequences as exploratory measurements beyond the protocol. Image analysis determined the worst-case offset for a typical great-vessel flow measurement. Results The results showed a great variation in offset behavior among scanners (standard deviation among samples of 0.3, 0.4, and 0.9 cm/s for the three different scanner types), even for small changes in the protocol. Considering the absolute values, none of the tested protocol settings consistently reduced the velocity offsets below the critical level of 0.6 cm/s, either for all three orientations or for all three scanner types. Using multilevel linear model analysis, oblique aortic and pulmonary slices showed systematically higher offsets than the transverse aortic slices (oblique aortic 0.6 cm/s, and pulmonary 1.8 cm/s higher than transverse aortic). The exploratory measurements beyond the protocol yielded some new leads for further sequence development towards reduction of velocity offsets; however, those protocols were not always compatible with the time constraints of breath-hold imaging and flow-related artefacts. Conclusions This study showed that with current systems there was no generic protocol which resulted in acceptable flow offset values. Protocol optimization would have to be performed on a per-scanner and per-protocol basis. Proper optimization might make accurate (transverse) aortic flow quantification possible for most scanners. Pulmonary flow quantification would still need further (offline) correction. PMID:21388521

  13. Shuffle Optimizer: A Program to Optimize DNA Shuffling for Protein Engineering.

    PubMed

    Milligan, John N; Garry, Daniel J

    2017-01-01

    DNA shuffling is a powerful tool to develop libraries of variants for protein engineering. Here, we present a protocol to use our freely available and easy-to-use computer program, Shuffle Optimizer. Shuffle Optimizer is written in the Python computer language and increases the nucleotide homology between two pieces of DNA desired to be shuffled together without changing the amino acid sequence. In addition we also include sections on optimal primer design for DNA shuffling and library construction, a small-volume ultrasonicator method to create sheared DNA, and finally a method to reassemble the sheared fragments and recover and clone the library. The Shuffle Optimizer program and these protocols will be useful to anyone desiring to perform any of the nucleotide homology-dependent shuffling methods.
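
    The core recoding step can be sketched in a few lines: for each codon of one sequence, choose the synonymous codon that best matches the aligned codon of the other sequence. This is a simplified illustration (assuming the two coding sequences are codon-aligned, in frame, and given as uppercase DNA), not the Shuffle Optimizer program itself.

```python
# Minimal sketch of codon-level homology optimization (not the Shuffle Optimizer
# program itself): for each aligned codon, swap in the synonymous codon that best
# matches the reference sequence while preserving the encoded amino acid.
from Bio.Data.CodonTable import standard_dna_table  # Biopython

# Build amino acid -> synonymous codon lookup from the standard codon table.
synonyms = {}
for codon, aa in standard_dna_table.forward_table.items():
    synonyms.setdefault(aa, []).append(codon)

def codon_identity(a, b):
    return sum(x == y for x, y in zip(a, b))

def maximize_homology(ref_dna, target_dna):
    """Re-code target_dna codon by codon to look as much like ref_dna as possible.
    Both inputs are assumed to be uppercase, codon-aligned coding sequences."""
    out = []
    for i in range(0, min(len(ref_dna), len(target_dna)) - 2, 3):
        ref_codon, tgt_codon = ref_dna[i:i + 3], target_dna[i:i + 3]
        aa = standard_dna_table.forward_table.get(tgt_codon)
        if aa is None:          # stop codon or ambiguous base: leave unchanged
            out.append(tgt_codon)
            continue
        out.append(max(synonyms[aa], key=lambda c: codon_identity(c, ref_codon)))
    return "".join(out)
```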

  14. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several applications known for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and the specific parameters. Some notable parameters are the lifetime of the network and the energy consumption for routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a relatively good option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey attempts a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN. Using a genetic algorithm and based on the results of simulations in NS, a specific fitness function was achieved, optimized, and customized for all the operational stages of WSNs. PMID:24693235
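
    As an illustration of how a genetic algorithm can tune one WSN design stage, the sketch below evolves a cluster-head assignment with a toy energy-based fitness. It is not the NS-derived fitness function of the survey, and all constants are arbitrary.

```python
# Illustrative genetic algorithm for a WSN design choice (cluster-head selection);
# the fitness here is a toy energy proxy, not the NS-derived fitness from the paper.
import random

random.seed(1)
NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def fitness(mask):
    """Lower total node-to-nearest-cluster-head distance ~ lower radio energy."""
    heads = [n for n, bit in zip(NODES, mask) if bit]
    if not heads:
        return float("-inf")
    cost = sum(min(dist(n, h) for h in heads) for n in NODES) + 20 * len(heads)
    return -cost

def evolve(pop_size=40, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in NODES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(NODES))                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("cluster heads:", sum(best), "fitness:", round(fitness(best), 1))
```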

  15. Improvement of electroporation to deliver plasmid DNA into dental follicle cells

    PubMed Central

    Yao, Shaomian; Rana, Samir; Liu, Dawen; Wise, Gary E.

    2010-01-01

    Electroporation DNA transfer is a simple and versatile approach to deliver genes. To develop an optimal electroporation protocol to deliver DNA into cells, we conducted square wave electroporation experiments with using rat dental follicle cells as follows: 1) the cells were electroporated at different electric field strengths with lac Z plasmid; 2) plasmid concentrations were tested to determine the optimal doses; 3) various concentrations of bovine serum albumin or fetal bovine serum were added to the pulsing buffer; and, 4) the pulsing durations were studied to determine the optimal duration. These experiments indicated that the optimal electroporation electric field strength was 375 V/cm, and that plasmid concentrations greater than 0.18 μg/μl were required to achieve high transfection efficiency. BSA or FBS in the pulsing buffer significantly improved cell survival and increased the number of transfected cells. The optimal pulsing duration was in the range of 45 to 120 milliseconds (ms) at 375 V/cm. Thus, an improved electroporation protocol was established by optimizing the above parameters. In turn, this electroporation protocol can be used to deliver DNA into dental follicle cells to study the roles of candidate genes in regulating tooth eruption. PMID:19830717

  16. Riparian buffer design guidelines for water quality and wildlife habitat functions on agricultural landscapes in the Intermountain West

    Treesearch

    Craig W. Johnson; Susan Buffler

    2008-01-01

    Intermountain West planners, designers, and resource managers are looking for science-based procedures for determining buffer widths and management techniques that will optimize the benefits riparian ecosystems provide. This study reviewed the riparian buffer literature, including protocols used to determine optimum buffer widths for water quality and wildlife habitat...

  17. A Survey on an Energy-Efficient and Energy-Balanced Routing Protocol for Wireless Sensor Networks.

    PubMed

    Ogundile, Olayinka O; Alfa, Attahiru S

    2017-05-10

    Wireless sensor networks (WSNs) form an important part of industrial applications. There has been growing interest in the potential use of WSNs in applications such as environment monitoring, disaster management, health care monitoring, intelligence surveillance and defence reconnaissance. In these applications, the sensor nodes (SNs) are envisaged to be deployed in sizeable numbers in an outlying area, and it is quite difficult to replace these SNs after complete deployment in many scenarios. Therefore, as SNs are predominantly battery powered devices, the energy consumption of the nodes must be properly managed in order to prolong the network lifetime and functionality for a reasonable time. Different energy-efficient and energy-balanced routing protocols have been proposed in the literature over the years. The energy-efficient routing protocols strive to increase the network lifetime by minimizing the energy consumption in each SN. On the other hand, the energy-balanced routing protocols protract the network lifetime by uniformly balancing the energy consumption among the nodes in the network. There have been various survey papers put forward by researchers to review the performance and classify the different energy-efficient routing protocols for WSNs. However, there seems to be no clear survey emphasizing the importance, concepts, and principles of load-balanced energy routing protocols for WSNs. In this paper, we provide a clear picture of both the energy-efficient and energy-balanced routing protocols for WSNs. More importantly, this paper presents an extensive survey of the different state-of-the-art energy-efficient and energy-balanced routing protocols. A taxonomy is introduced in this paper to classify the surveyed energy-efficient and energy-balanced routing protocols based on their proposed mode of communication towards the base station (BS). In addition, we classified these routing protocols based on the solution types or algorithms, and the input decision variables defined in the routing algorithm. The strengths and weaknesses of the choice of the decision variables used in the design of these energy-efficient and energy-balanced routing protocols are emphasised. Finally, we suggest possible research directions in order to optimize the energy consumption in sensor networks.

  18. A Survey on an Energy-Efficient and Energy-Balanced Routing Protocol for Wireless Sensor Networks

    PubMed Central

    Ogundile, Olayinka O.; Alfa, Attahiru S.

    2017-01-01

    Wireless sensor networks (WSNs) form an important part of industrial applications. There has been growing interest in the potential use of WSNs in applications such as environment monitoring, disaster management, health care monitoring, intelligence surveillance and defence reconnaissance. In these applications, the sensor nodes (SNs) are envisaged to be deployed in sizeable numbers in an outlying area, and it is quite difficult to replace these SNs after complete deployment in many scenarios. Therefore, as SNs are predominantly battery powered devices, the energy consumption of the nodes must be properly managed in order to prolong the network lifetime and functionality for a reasonable time. Different energy-efficient and energy-balanced routing protocols have been proposed in the literature over the years. The energy-efficient routing protocols strive to increase the network lifetime by minimizing the energy consumption in each SN. On the other hand, the energy-balanced routing protocols protract the network lifetime by uniformly balancing the energy consumption among the nodes in the network. There have been various survey papers put forward by researchers to review the performance and classify the different energy-efficient routing protocols for WSNs. However, there seems to be no clear survey emphasizing the importance, concepts, and principles of load-balanced energy routing protocols for WSNs. In this paper, we provide a clear picture of both the energy-efficient and energy-balanced routing protocols for WSNs. More importantly, this paper presents an extensive survey of the different state-of-the-art energy-efficient and energy-balanced routing protocols. A taxonomy is introduced in this paper to classify the surveyed energy-efficient and energy-balanced routing protocols based on their proposed mode of communication towards the base station (BS). In addition, we classified these routing protocols based on the solution types or algorithms, and the input decision variables defined in the routing algorithm. The strengths and weaknesses of the choice of the decision variables used in the design of these energy-efficient and energy-balanced routing protocols are emphasised. Finally, we suggest possible research directions in order to optimize the energy consumption in sensor networks. PMID:28489054

  19. Wireless Sensor Network Quality of Service Improvement on Flooding Attack Condition

    NASA Astrophysics Data System (ADS)

    Hartono, R.; Widyawan; Wibowo, S. B.; Purnomo, A.; Hartatik

    2018-03-01

    There are two methods of building communication using wireless media. The first method is to build a base infrastructure as an intermediary between users. The problems that arise with this type of network infrastructure are the limited space for building any physical network infrastructure and also the cost factor. The second method is to build an ad hoc network between the users who will communicate. On an ad hoc network, each user must be willing to forward data from source to destination for a communication to take place. One ad hoc network protocol, Ad hoc On-demand Distance Vector (AODV), has the smallest overhead, adapts more easily to dynamic networks and has small control messages. One drawback of the AODV protocol is the security of the route-finding process used for sending the data. In this research, the AODV protocol is optimized by determining the best Expanding Ring Search (ERS) value. A random topology is used with variation in the number of nodes (25, 50, 75, 100, 125 and 150) with a node speed of 10 m/s in an area of 1000 m x 1000 m under flooding network conditions. The parameters measured are Throughput, Packet Delivery Ratio, Average Delay and Normalized Routing Load. From the test results of the AODV protocol optimization with the best Expanding Ring Search (ERS) value, throughput increased by 5.67%, packet delivery ratio increased by 5.73%, and Normalized Routing Load decreased by 4.66%. The optimal ERS value for each condition depends on the number of nodes in the network.
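
    For readers unfamiliar with ERS, the sketch below shows the TTL-growth logic that such an optimization tunes. The default TTL constants follow the AODV specification (RFC 3561) and are only assumptions here; the abstract does not state the values finally selected for the flooding scenario.

```python
# Minimal sketch of AODV expanding ring search (ERS): route requests are flooded
# with a growing TTL instead of network-wide from the start. The defaults follow
# the AODV specification; the paper tunes these values for its flooding scenario.
TTL_START, TTL_INCREMENT, TTL_THRESHOLD, NET_DIAMETER = 1, 2, 7, 35

def expanding_ring_search(send_rreq, ttl_start=TTL_START,
                          ttl_increment=TTL_INCREMENT,
                          ttl_threshold=TTL_THRESHOLD,
                          net_diameter=NET_DIAMETER):
    """send_rreq(ttl) floods a route request limited to `ttl` hops and returns a
    route or None. Returns the first route found plus the number of attempts."""
    ttl, attempts = ttl_start, 0
    while True:
        attempts += 1
        route = send_rreq(ttl)
        if route is not None:
            return route, attempts
        if ttl >= net_diameter:
            return None, attempts          # destination unreachable
        # grow the ring; past the threshold, go straight to a network-wide flood
        ttl = ttl + ttl_increment if ttl < ttl_threshold else net_diameter

# toy usage: destination is 5 hops away, so rings with TTL 1 and 3 fail before TTL 5
route, tries = expanding_ring_search(lambda ttl: "A-B-C-D-E-F" if ttl >= 5 else None)
print(route, tries)
```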

  20. Individual Optimal Frequency in Whole-Body Vibration: Effect of Protocol, Joint Angle, and Fatiguing Exercise.

    PubMed

    Carlucci, Flaminia; Felici, Francesco; Piccinini, Alberto; Haxhi, Jonida; Sacchetti, Massimo

    2016-12-01

    Carlucci, F, Felici, F, Piccinini, A, Haxhi, J, and Sacchetti, M. Individual optimal frequency in whole-body vibration: effect of protocol, joint angle, and fatiguing exercise. J Strength Cond Res 30(12): 3503-3511, 2016-Recent studies have shown the importance of individualizing the vibration intervention to produce greater effects on the neuromuscular system in less time. The purpose of this study was to assess the individual optimal vibration frequency (OVF) corresponding to the highest muscle activation (RMSmax) during vibration at different frequencies, comparing different protocols. Twenty-nine university students underwent 3 continuous (C) and 2 random (R) vibrating protocols, maintaining a squat position on a vibration platform. The C protocol lasted 50 seconds and involved the succession of ascending frequencies from 20 to 55 Hz, every 5 seconds. The same protocol was performed twice, with the knee angle at 120° (C) and 90° (C90), to assess the effect of joint angle, and after a fatiguing squatting exercise (CF) to evaluate the influence of fatigue on OVF assessment. In the random protocols, vibration time was 20 seconds with a 2-minute (R2) or a 4-minute (R4) pause between tested frequencies. Muscle activation and OVF values did not differ significantly among the C, R2, and R4 protocols. RMSmax was higher in C90 (p < 0.001) and in CF (p = 0.04) compared with the C protocol. Joint angle and fatiguing exercise had no effect on OVF. In conclusion, the shorter C protocol produced myoelectrical activity similar to that in the R2 and R4 protocols, and therefore it could be equally valid in identifying the OVF with considerable time efficiency. Knee joint angle and fatiguing exercise had an effect on the surface electromyography response during vibration but did not affect OVF identification significantly.
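
    A minimal sketch of the OVF identification step for the continuous protocol is given below: the surface EMG is split into the 5-second windows corresponding to each vibration frequency, the RMS is computed per window, and the frequency with the highest RMS is taken as the OVF. The signal and sampling rate are hypothetical.

```python
# Illustrative identification of the individual optimal vibration frequency (OVF):
# compute the RMS of the surface EMG in each 5-s window of the ascending-frequency
# protocol and pick the frequency with the highest RMS. Signal and sampling rate
# are hypothetical.
import numpy as np

def find_ovf(emg, fs, freqs=range(20, 60, 5), window_s=5.0):
    """emg: 1-D surface EMG array recorded during the continuous protocol."""
    n = int(window_s * fs)
    rms_per_freq = {}
    for i, f in enumerate(freqs):
        window = emg[i * n:(i + 1) * n]
        rms_per_freq[f] = np.sqrt(np.mean(window ** 2))
    ovf = max(rms_per_freq, key=rms_per_freq.get)
    return ovf, rms_per_freq

fs = 1000.0
emg = np.random.default_rng(0).normal(scale=1.0, size=int(8 * 5 * fs))  # fake signal
print(find_ovf(emg, fs)[0])
```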

  1. SU-F-J-16: Planar KV Imaging Dose Reduction Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershkevitsh, E; Zolotuhhin, D

    Purpose: IGRT has become an indispensable tool in modern radiotherapy, with kV imaging used in many departments due to superior image quality and lower dose when compared to MV imaging. Many departments use manufacturer-supplied protocols for imaging which are not always optimised between image quality and radiation dose (ALARA). Methods: A whole-body phantom PBU-50 (Kyoto Kagaku Ltd., Japan) for imaging in radiology was imaged on a Varian iX accelerator (Varian Medical Systems, USA) with the OBI 1.5 system. The manufacturer's default protocols were adapted by modifying kV and mAs values when imaging different anatomical regions of the phantom (head, thorax, abdomen, pelvis, extremities). Images with different settings were independently reviewed by two persons and their suitability for IGRT set-up correction protocols was evaluated. The suitable images with the lowest mAs were then selected. The entrance surface dose (ESD) for the manufacturer's default protocols and the modified protocols was measured with an RTI Black Piranha (RTI Group, Sweden) and compared. Image quality was also measured with a kVQC phantom (Standard Imaging, USA) for different protocols. The modified protocols have been applied in clinical work. Results: For most cases the optimized protocols reduced the ESD on average by a factor of 3 (range 0.9-8.5). A further reduction in ESD was observed by applying the bow-tie filter designed for CBCT. The largest reduction in dose (12.2 times) was observed for the Thorax lateral protocol. The dose was slightly increased (by 10%) for the large pelvis AP protocol. Conclusion: The manufacturer's default IGRT protocols could be optimised to reduce the ESD to the patient without losing the necessary image quality for patient set-up correction. For patient set-up with planar kV imaging the bony anatomy is mostly used and optimization should focus on this aspect. Therefore, the current approach with an anthropomorphic phantom is more advantageous for optimization than standard kV quality control phantoms and SNR metrics.

  2. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
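
    For orientation, the linear-response quantities the abstract refers to can be written compactly as follows. The notation is ours (control parameters lambda, conjugate forces X_i, inverse temperature beta), and the exact definitions should be taken from the paper; this is a hedged sketch of the standard friction-tensor picture, not a restatement of the authors' derivation.

```latex
% Hedged sketch (our notation) of the linear-response friction tensor and the
% resulting dissipation bound; see the paper for the precise definitions.
\begin{align}
  \zeta_{ij}(\boldsymbol{\lambda})
    &= \beta \int_0^{\infty} \! dt'\,
       \bigl\langle \delta X_i(0)\, \delta X_j(t') \bigr\rangle_{\boldsymbol{\lambda}}, \\
  \langle W_{\mathrm{ex}} \rangle
    &\approx \int_0^{\tau} \! dt\;
       \dot{\boldsymbol{\lambda}}^{\mathsf T}\, \zeta(\boldsymbol{\lambda}(t))\,
       \dot{\boldsymbol{\lambda}}, \qquad
  \langle W_{\mathrm{ex}} \rangle \;\ge\; \frac{\mathcal{L}^2}{\tau},
  \quad
  \mathcal{L} = \int_0^{\tau} \! dt\,
      \sqrt{\dot{\boldsymbol{\lambda}}^{\mathsf T} \zeta\, \dot{\boldsymbol{\lambda}}} .
\end{align}
```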

  3. A study of the dependence of protocol optimization on the left ventricular ejection fraction (LVEF) in coronary CT angiography (CCTA) examination

    NASA Astrophysics Data System (ADS)

    Yu, Dong-Su; Cho, Jae-Hwan; Park, Cheol-Soo; Yoo, Heung-Joon; Choi, Cheon-Woong; Kim, Dae-Hyun

    2013-10-01

    The purpose of this study is to obtain a good-quality image and to minimize patient doses and re-examination rates through an optimization of the protocol for coronary computed tomography angiography (CCTA) examination, based on a comparison and an analysis of the heart rates (HRs) of patients who had left ventricular ejection fraction (LVEF) values of less than 40% and the HRs of ordinary patients. This study targeted 16 patients who received thallium single-photon emission computed tomography (SPECT) or echocardiography simultaneously among the patients who took the CCTA examinations. Depending on the LVEF value (30-39, 40-49, 50-59, and 60% or above), the patients were divided into four groups based on HR (50-59, 60-69, 70-79, and 80 or above). DynEva software was used to set the region of interest (ROI) on the ascending aorta and to measure the threshold value. Comparisons and analyses were made based on the LVEF values and the HRs, after which the results were compared with the ones from the existing examination protocols and contrast medium protocols. According to the study results, the relation between the HR and the LVEF demonstrated that it took a long time to reach the true 100 Hounsfield units (HU) when the LVEF was 40% or below. Contrast media showed significant differences, except in the case where the HR was 80 or above and/or the LVEF was less than 40%. Moreover, for an LVEF of less than 40%, time differences were significant when the contrast medium reached the true 100 HU to begin the scanning process. Therefore, it was possible to predict that the contrast medium was already being washed out from the left ventricle.

  4. Performance Analysis of TCP Enhancements in Satellite Data Networks

    NASA Technical Reports Server (NTRS)

    Broyles, Ren H.

    1999-01-01

    This research examines two proposed enhancements to the well-known Transport Control Protocol (TCP) in the presence of noisy communication links. The Multiple Pipes protocol is an application-level adaptation of the standard TCP protocol, where several TCP links cooperate to transfer data. The Space Communication Protocol Standard - Transport Protocol (SCPS-TP) modifies TCP to optimize performance in a satellite environment. While SCPS-TP has inherent advantages that allow it to deliver data more rapidly than Multiple Pipes, the protocol, when optimized for operation in a high-error environment, is not compatible with legacy TCP systems, and requires changes to the TCP specification. This investigation determines the level of improvement offered by SCPS-TP's Corruption Mode, which will help determine if migration to the protocol is appropriate in different environments. As the percentage of corrupted packets approaches 5 %, Multiple Pipes can take over five times longer than SCPS-TP to deliver data. At high error rates, SCPS-TP's advantage is primarily caused by Multiple Pipes' use of congestion control algorithms. The lack of congestion control, however, limits the systems in which SCPS-TP can be effectively used.

  5. Relating quantum privacy and quantum coherence: an operational approach.

    PubMed

    Devetak, I; Winter, A

    2004-08-20

    Given many realizations of a state or a channel as a resource, two parties can generate a secret key as well as entanglement. We describe protocols to perform the secret key distillation (as it turns out, with optimal rate). Then we show how to achieve optimal entanglement generation rates by "coherent" implementation of a class of secret key agreement protocols, proving the long-conjectured "hashing inequality."

  6. Rethinking Traffic Management: Design of Optimizable Networks

    DTIC Science & Technology

    2008-06-01

    Though this paper used optimization theory to design and analyze DaVinci, optimization theory is one of many possible tools to enable a grounded... dynamically allocate bandwidth shares. The distributed protocols can be implemented using DaVinci: Dynamically Adaptive VIrtual Networks for a Customized... Internet. In DaVinci, each virtual network runs traffic-management protocols optimized for a traffic class, and link bandwidth is dynamically allocated

  7. Quality, efficiency, and cost of a physician-assistant-protocol system for management of diabetes and hypertension.

    PubMed

    Komaroff, A L; Flatley, M; Browne, C; Sherman, H; Fineberg, S E; Knopp, R H

    1976-04-01

    Briefly trained physician assistants using protocols (clinical algorithms) for diabetes, hypertension, and related chronic arteriosclerotic and hypertensive heart disease abstracted information from the medical record and obtained history and physical examination data on every patient visit to a city hospital chronic disease clinic over an 18-month period. The care rendered by the protocol system was compared with care rendered by a "traditional" system in the same clinic in which physicians delegated few clinical tasks. Increased thoroughness in collecting clinical data in the protocol system led to an increase in the recognition of new pathology. Outcome criteria reflected equivalent quality of care in both groups. Efficiency time-motion studies demonstrated a 20 per cent saving in physician time with the protocol system. Cost estimates, based on the time spent with patients by various providers and on the laboratory-test-ordering patterns, demonstrated equivalent costs of the two systems, given optimal staffing patterns. Laboratory tests were a major element of the cost of patient care, and the clinical yield per unit cost of different tests varied widely.

  8. Complete Prevention of Dendrite Formation in Zn Metal Anodes by Means of Pulsed Charging Protocols.

    PubMed

    Garcia, Grecia; Ventosa, Edgar; Schuhmann, Wolfgang

    2017-06-07

    Zn metal as anode in rechargeable batteries, such as Zn/air or Zn/Ni, suffers from poor cyclability. The formation of Zn dendrites upon cycling is the key limiting step. We report a systematic study of the influence of pulsed electroplating protocols on the formation of Zn dendrites and in turn on strategies to completely prevent Zn dendrite formation. Because of the large number of variables in electroplating protocols, a scanning droplet cell technique was adapted as a high-throughput methodology in which a descriptor of the surface roughness can be in situ derived by means of electrochemical impedance spectroscopy. Upon optimizing the electroplating protocol by controlling nucleation, zincate ion depletion, and zincate ion diffusion, scanning electron microscopy and atomic force microscopy confirmed the growth of uniform and homogenous Zn deposits with a complete prevention of dendrite growth. The implementation of pulsed electroplating as the charging protocol for commercially available Ni-Zn batteries leads to substantially prolonged cyclability demonstrating the benefits of pulsed charging in Zn metal-based batteries.

  9. Rosetta:MSF: a modular framework for multi-state computational protein design.

    PubMed

    Löffler, Patrick; Schmitz, Samuel; Hupfeld, Enrico; Sterner, Reinhard; Merkl, Rainer

    2017-06-01

    Computational protein design (CPD) is a powerful technique to engineer existing proteins or to design novel ones that display desired properties. Rosetta is a software suite including algorithms for computational modeling and analysis of protein structures and offers many elaborate protocols created to solve highly specific tasks of protein engineering. Most of Rosetta's protocols optimize sequences based on a single conformation (i. e. design state). However, challenging CPD objectives like multi-specificity design or the concurrent consideration of positive and negative design goals demand the simultaneous assessment of multiple states. This is why we have developed the multi-state framework MSF that facilitates the implementation of Rosetta's single-state protocols in a multi-state environment and made available two frequently used protocols. Utilizing MSF, we demonstrated for one of these protocols that multi-state design yields a 15% higher performance than single-state design on a ligand-binding benchmark consisting of structural conformations. With this protocol, we designed de novo nine retro-aldolases on a conformational ensemble deduced from a (βα)8-barrel protein. All variants displayed measurable catalytic activity, testifying to a high success rate for this concept of multi-state enzyme design.
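
    The multi-state idea can be summarized in a few lines of Python; this is a conceptual sketch of aggregating single-state scores over several design states with positive and negative weights, not Rosetta's or MSF's actual interface.

```python
# Conceptual sketch of multi-state design scoring (not Rosetta's actual API):
# a candidate sequence is evaluated on every design state and the per-state
# single-state scores are aggregated, so positive and negative design states
# can be weighted against each other.
def multi_state_fitness(sequence, states, single_state_score, weights=None):
    """states: design states (conformations); single_state_score(seq, state) -> float
    (lower is better, as for an energy). Negative design states get negative weight."""
    weights = weights or [1.0] * len(states)
    return sum(w * single_state_score(sequence, s) for w, s in zip(weights, states))
```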

  10. Rosetta:MSF: a modular framework for multi-state computational protein design

    PubMed Central

    Hupfeld, Enrico; Sterner, Reinhard

    2017-01-01

    Computational protein design (CPD) is a powerful technique to engineer existing proteins or to design novel ones that display desired properties. Rosetta is a software suite including algorithms for computational modeling and analysis of protein structures and offers many elaborate protocols created to solve highly specific tasks of protein engineering. Most of Rosetta’s protocols optimize sequences based on a single conformation (i. e. design state). However, challenging CPD objectives like multi-specificity design or the concurrent consideration of positive and negative design goals demand the simultaneous assessment of multiple states. This is why we have developed the multi-state framework MSF that facilitates the implementation of Rosetta’s single-state protocols in a multi-state environment and made available two frequently used protocols. Utilizing MSF, we demonstrated for one of these protocols that multi-state design yields a 15% higher performance than single-state design on a ligand-binding benchmark consisting of structural conformations. With this protocol, we designed de novo nine retro-aldolases on a conformational ensemble deduced from a (βα)8-barrel protein. All variants displayed measurable catalytic activity, testifying to a high success rate for this concept of multi-state enzyme design. PMID:28604768

  11. A Family of Quantum Protocols

    NASA Astrophysics Data System (ADS)

    Devetak, Igor; Harrow, Aram W.; Winter, Andreas

    2004-12-01

    We introduce three new quantum protocols involving noisy quantum channels and entangled states, and relate them operationally and conceptually with four well-known old protocols. Two of the new protocols (the mother and father) can generate the other five “child” protocols by direct application of teleportation and superdense coding, and can be derived in turn by making the old protocols “coherent.” This gives very simple proofs for two famous old protocols (the hashing inequality and quantum channel capacity) and provides the basis for optimal trade-off curves in several quantum information processing tasks.

  12. Unification of quantum information theory

    NASA Astrophysics Data System (ADS)

    Abeyesinghe, Anura

    We present the unification of many previously disparate results in noisy quantum Shannon theory and the unification of all of noiseless quantum Shannon theory. More specifically we deal here with bipartite, unidirectional, and memoryless quantum Shannon theory. We find all the optimal protocols and quantify the relationship between the resources used, both for the one-shot and for the ensemble case, for what is arguably the most fundamental task in quantum information theory: sharing entangled states between a sender and a receiver. We find that all of these protocols are derived from our one-shot superdense coding protocol and relate nicely to each other. We then move on to noisy quantum information theory and give a simple, direct proof of the "mother" protocol, or rather her generalization to the Fully Quantum Slepian-Wolf protocol (FQSW). FQSW simultaneously accomplishes two goals: quantum communication-assisted entanglement distillation, and state transfer from the sender to the receiver. As a result, in addition to her other "children," the mother protocol generates the state merging primitive of Horodecki, Oppenheim, and Winter as well as a new class of distributed compression protocols for correlated quantum sources, which are optimal for sources described by separable density operators. Moreover, the mother protocol described here is easily transformed into the so-called "father" protocol, demonstrating that the division of single-sender/single-receiver protocols into two families was unnecessary: all protocols in the family are children of the mother.

  13. CT dose minimization using personalized protocol optimization and aggressive bowtie

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Yin, Zhye; Jin, Yannan; Wu, Mingye; Yao, Yangyang; Tao, Kun; Kalra, Mannudeep K.; De Man, Bruno

    2016-03-01

    In this study, we propose to use patient-specific x-ray fluence control to reduce the radiation dose to sensitive organs while still achieving the desired image quality (IQ) in the region of interest (ROI). The mA modulation profile is optimized view by view, based on the sensitive organs and the ROI, which are obtained from an ultra-low-dose volumetric CT scout scan [1]. We use a clinical chest CT scan to demonstrate the feasibility of the proposed concept: the breast region is selected as the sensitive organ region while the cardiac region is selected as the IQ ROI. Two groups of simulations are performed based on the clinical CT dataset: (1) a constant mA scan adjusted based on the patient attenuation (120 kVp, 300 mA), which serves as baseline; (2) an optimized scan with aggressive bowtie and ROI centering combined with patient-specific mA modulation. The results show that the combination of the aggressive bowtie and the optimized mA modulation can result in a 40% dose reduction in the breast region, while the IQ in the cardiac region is maintained. More generally, this paper demonstrates the general concept of using a 3D scout scan for optimal scan planning.
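
    The sketch below illustrates the general idea of view-by-view mA modulation with an organ-sparing cap. It is a crude stand-in for the authors' optimization (which uses the ultra-low-dose scout and the aggressive bowtie), and every constant and the proportional-attenuation heuristic are hypothetical.

```python
# Illustrative view-by-view mA modulation (not the authors' actual algorithm):
# scale mA with the relative attenuation seen from each view angle (a crude
# heuristic: more attenuation -> more mA), then cap mA for views that expose
# the sensitive-organ region.
import numpy as np

def modulate_ma(attenuation_per_view, sensitive_views, ma_ref=300.0,
                atten_ref=None, sensitive_cap=0.5, ma_limits=(10.0, 600.0)):
    """attenuation_per_view: relative attenuation per view angle.
    sensitive_views: boolean mask marking views that irradiate the sensitive organ."""
    a = np.asarray(attenuation_per_view, dtype=float)
    if atten_ref is None:
        atten_ref = a.mean()
    ma = ma_ref * a / atten_ref                              # hold ROI noise roughly level
    ma = np.where(sensitive_views, ma * sensitive_cap, ma)   # spare the breast region
    return np.clip(ma, *ma_limits)

views = np.linspace(0, 2 * np.pi, 360, endpoint=False)
atten = 1.0 + 0.3 * np.cos(2 * views)                        # fake elliptical patient profile
front = (views < np.pi / 3) | (views > 5 * np.pi / 3)        # AP views over the breasts
print(modulate_ma(atten, front)[:5].round(1))
```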

  14. The Deployment of Routing Protocols in Distributed Control Plane of SDN

    PubMed Central

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software defined network (SDN) provides a programmable network through decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve the performance and scalability. In this paper, we studied the distributed characteristics of the Kandoo architecture and improved and optimized Kandoo's two levels of controllers, drawing inspiration from RCP (routing control platform). Finally, we analyzed the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to the traditional routing strategies. PMID:25250395

  15. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-06

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(Jlog2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  16. Physical aquatic habitat assessment data, Ozark plateaus, Missouri and Arkansas

    USGS Publications Warehouse

    Jacobson, Robert B.; Johnson, Harold E.; Reuter, Joanna M.; Wright, Maria Panfil

    2004-01-01

    This report presents data from two related studies on physical habitat in small streams in the Ozark Plateaus Physiographic Province of Missouri and Arkansas. Seventy stream reaches and their contributing drainage basins were assessed using a physical habitat protocol designed to optimize understanding of how stream reach characteristics relate to drainage-basin characteristics. Drainage-basin characteristics were evaluated using geographic information system (GIS) techniques and datasets designed to evaluate the geologic, physiographic, and land-use characteristics of encompassing drainage basins. Reach characteristics were evaluated using a field-based geomorphology and habitat protocol. The data are intended to complement ecological studies on Ozark Plateaus streams.

  17. Pilonidal Sinus Disease: 10 Steps to Optimize Care.

    PubMed

    Harris, Connie; Sibbald, R Gary; Mufti, Asfandyar; Somayaji, Ranjani

    2016-10-01

    To present a 10-step approach to the assessment and treatment of pilonidal sinus disease (PSD) and related wounds based on the Harris protocol, expert opinion, and a current literature review. This continuing education activity is intended for physicians and nurses with an interest in skin and wound care. After participating in this educational activity, the participant should be better able to: Pilonidal sinus disease (PSD) is a common problem in young adults and particularly in males with a deep natal or intergluteal cleft and coarse body hair. An approach to an individual with PSD includes the assessment of pain, activities of daily living, the pilonidal sinus, and natal cleft. Local wound care includes the management of infection (if present), along with appropriate debridement and moisture management. Treatment is optimized with patient empowerment to manage the wound and periwound environment (cleansing, dressing changes, decontamination, hair removal, minimizing friction). Self-care education includes the recognition of recurrences or infection. Early surgical intervention of these wounds is often necessary for successful outcomes. Pilonidal sinus healing by secondary intention often takes weeks to months; however, the use of the Harris protocol may decrease healing times. A number of new surgical approaches may accelerate healing. Surgical closure by primary intention is often associated with higher recurrence rates. Expert opinion in this article is combined with an evidence-based literature review. The authors have tabulated 10 key steps from the Harris protocol, including a review of the surgical techniques to improve PSD patient outcomes.

  18. A novel immuno-gold labeling protocol for nanobody-based detection of HER2 in breast cancer cells using immuno-electron microscopy.

    PubMed

    Kijanka, M; van Donselaar, E G; Müller, W H; Dorresteijn, B; Popov-Čeleketić, D; El Khattabi, M; Verrips, C T; van Bergen En Henegouwen, P M P; Post, J A

    2017-07-01

    Immuno-electron microscopy is commonly performed with the use of antibodies. In the last decade the antibody fragment known as the nanobody (VHH or single domain antibody) has found its way into different applications previously performed with conventional antibodies. Nanobodies can be selected to bind with high affinity and specificity to different antigens. They are small (molecular weight ca. 15 kDa) and are usually easy to produce in microorganisms. Here we have evaluated the feasibility of a nanobody binding to HER2 for application in immuno-electron microscopy. To obtain the highest labeling efficiency combined with optimal specificity, different labeling conditions were analysed, which included nanobody concentration, fixation and blocking conditions. The obtained optimal protocol was applied for post-embedment labeling of Tokuyasu cryosections and for pre-embedment labeling of HER2 for fluorescence microscopy and both transmission and scanning electron microscopy. We show that formaldehyde fixation after incubation with the anti-HER2 nanobody improves labeling intensity. Among all tested blocking agents the best results were obtained with a mixture of cold water fish gelatine and acetylated bovine serum albumin, which prevented non-specific interactions causing background labeling while preserving specific interactions at the same time. In conclusion, we have developed a nanobody-based protocol for immuno-gold labeling of HER2 for Tokuyasu cryosections in TEM as well as for pre-embedment gold labeling of cells for both TEM and SEM. Copyright © 2017. Published by Elsevier Inc.

  19. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 or/and final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
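
    The quantity driving this kind of objective function, the generalized equivalent uniform dose, is straightforward to compute from a differential DVH; a short sketch follows, with the usual caveat that values of the parameter a are organ- and model-specific and the toy DVH is invented.

```python
# Generalized equivalent uniform dose (gEUD) from a differential DVH, as used by
# gEUD-based objective functions: gEUD = (sum_i v_i * D_i**a) ** (1/a), where v_i
# are fractional volumes, D_i bin doses, and `a` a tissue-specific parameter
# (a < 0 favours targets, a >> 1 penalises hot spots in serial organs at risk).
import numpy as np

def geud(bin_doses_gy, bin_volumes, a):
    v = np.asarray(bin_volumes, dtype=float)
    v = v / v.sum()                      # normalise to fractional volumes
    d = np.asarray(bin_doses_gy, dtype=float)
    return (np.sum(v * d ** a)) ** (1.0 / a)

# toy DVH: most of the organ at 10 Gy, a small hot region at 30 Gy
print(round(geud([10.0, 30.0], [0.9, 0.1], a=8), 2))   # dominated by the hot spot
```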

  20. Evaluation of Patient Handoff Methods on an Inpatient Teaching Service

    PubMed Central

    Craig, Steven R.; Smith, Hayden L.; Downen, A. Matthew; Yost, W. John

    2012-01-01

    Background The patient handoff process can be a highly variable and unstructured period at risk for communication errors. The morning sign-in process used by resident physicians at teaching hospitals typically involves less rigorous handoff protocols than the resident evening sign-out process. Little research has been conducted on best practices for handoffs during morning sign-in exchanges between resident physicians. Research must evaluate optimal protocols for the resident morning sign-in process. Methods Three morning handoff protocols consisting of written, electronic, and face-to-face methods were implemented over 3 study phases during an academic year. Study participants included all interns covering the internal medicine inpatient teaching service at a tertiary hospital. Study measures entailed intern survey-based interviews analyzed for failures in handoff protocols with or without missed pertinent information. Descriptive and comparative analyses examined study phase differences. Results A scheduled face-to-face handoff process had the fewest protocol deviations and demonstrated best communication of essential patient care information between cross-covering teams compared to written and electronic sign-in protocols. Conclusion Intern patient handoffs were more reliable when the sign-in protocol included scheduled face-to-face meetings. This method provided the best communication of patient care information and allowed for open exchanges of information. PMID:23267259

  1. A Natural Language Processing-based Model to Automate MRI Brain Protocol Selection and Prioritization.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2017-02-01

    Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm-random forest, support vector machine, or k-nearest neighbor-to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
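
    A minimal version of one of the three classifiers (the random forest) applied to the narrative indication text might look like the sketch below. The file and column names are hypothetical, and the demographic features used in the study are omitted for brevity.

```python
# Illustrative version of one of the three classifiers (random forest) trained on
# the narrative clinical indication text; file and column names are hypothetical,
# and the demographic features used in the paper are omitted for brevity.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("mri_brain_orders.csv")          # hypothetical export
X_train, X_test, y_train, y_test = train_test_split(
    df["clinical_indication"], df["protocol"], test_size=0.2, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                      RandomForestClassifier(n_estimators=300, random_state=0))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```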

  2. Flow cytometry for enrichment and titration in massively parallel DNA sequencing

    PubMed Central

    Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim

    2009-01-01

    Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for the costly pilot sequencing of samples during titration is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748

  3. Outcome and toxicity associated with a dose-intensified, maintenance-free CHOP-based chemotherapy protocol in canine lymphoma: 130 cases.

    PubMed

    Sorenmo, Karin; Overley, B; Krick, E; Ferrara, T; LaBlanc, A; Shofer, F

    2010-09-01

    A dose-intensified/dose-dense chemotherapy protocol for canine lymphoma was designed and implemented at the Veterinary Hospital of the University of Pennsylvania. In this study, we describe the clinical characteristics, prognostic factors, efficacy and toxicity in 130 dogs treated with this protocol. The majority of the dogs had advanced stage disease (63.1% stage V) and sub-stage b (58.5%). The median time to progression (TTP) and lymphoma-specific survival were 219 and 323 days, respectively. These results are similar to previous less dose-intense protocols. Sub-stage was a significant negative prognostic factor for survival. The incidence of toxicity was high; 53.9 and 45% of the dogs needed dose reductions and treatment delays, respectively. Dogs that required dose reductions and treatment delays had significantly longer TTP and lymphoma-specific survival times. These results suggest that dose density is important, but likely relative, and needs to be adjusted according to the individual patient's toxicity for optimal outcome.

  4. Implementing a Pro-forma for Multidisciplinary Management of an Enterocutaneous Fistula: A Case Study.

    PubMed

    Samad, Sohel; Anele, Chukwuemeka; Akhtar, Mansoor; Doughan, Samer

    2015-06-01

    Optimal management of patients with an enterocutaneous fistula (ECF) requires utilization of the sepsis, nutrition, anatomy, and surgical procedure (SNAP) protocol. The protocol includes early detection and treatment of sepsis, optimizing patient nutrition through oral and parenteral routes, identifying the fistula anatomy, optimal fistula management, and proceeding to corrective surgery when appropriate. The protocol requires multidisciplinary team (MDT) coordination among surgeons, nurses, dietitians, stoma nurses, and physiotherapists. This case study describes a 70-year-old man who developed an ECF subsequent to a laparotomy for a small bowel obstruction. Following a period of ileus, 16 days post laparotomy the patient developed a high-output (2,000 mL per day) fistula. The patient also became pyrexial with raised inflammatory markers, requiring antibiotic treatment. Following development of his ECF, he was managed using the SNAP protocol for the duration of his admission; however, in implementing this protocol with this patient, clinicians noted fluid charts were inadequate to allow effective management of the variables. Thus, a new pro-forma was created that encompassed fluid balance, nutritional status, and pertinent blood test results, as well as perifistular skin condition, medication, and documentation of management plans from the MDT team. The pro-forma was recorded daily in the patient notes. Following implementation of the pro-forma and the SNAP protocol, the patient recovered well clinically over a period of 4 weeks with a decrease in his fistula output to 300-500 mL per day, and he was discharged with plans for further corrective surgery to resect the fistula and for bowel re-anastomoses. Although fluid charts are readily available, they do not include all pertinent variables for optimal management of patients with an ECF. Further research is needed to validate the pro-forma and evaluate its effect on patient outcomes.

  5. Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance

    DOE PAGES

    Alia, Shaun M.; Pivovar, Bryan S.

    2018-01-01

    Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 degrees C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 degrees C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than those of Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. Furthermore, these techniques have also been used for a variety of metal combinations, and can be applied to develop catalysts for a number of electrochemical processes.

  6. Improved identification of cranial nerves using paired-agent imaging: topical staining protocol optimization through experimentation and simulation

    NASA Astrophysics Data System (ADS)

    Torres, Veronica C.; Wilson, Todd; Staneviciute, Austeja; Byrne, Richard W.; Tichauer, Kenneth M.

    2018-03-01

    Skull base tumors are particularly difficult to visualize and access for surgeons because of the crowded environment and close proximity of vital structures, such as cranial nerves. As a result, accidental nerve damage is a significant concern and the likelihood of tumor recurrence is increased because of more conservative resections that attempt to avoid injuring these structures. In this study, a paired-agent imaging method with direct administration of fluorophores is applied to enhance cranial nerve identification. Here, a control imaging agent (ICG) accounts for non-specific uptake of the nerve-targeting agent (Oxazine 4), and ratiometric data analysis is employed to approximate binding potential (BP, a surrogate of targeted biomolecule concentration). For clinical relevance, animal experiments and simulations were conducted to identify parameters for an optimized stain and rinse protocol using the developed paired-agent method. Numerical methods were used to model the diffusive and kinetic behavior of the imaging agents in tissue, and simulation results revealed that there are various combinations of stain time and rinse number that provide improved contrast of cranial nerves, as suggested by optimal measures of BP and contrast-to-noise ratio.
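
    One common ratiometric approximation of binding potential from paired-agent data is sketched below: each channel is normalized to an early time point (to account for delivery and staining differences) and BP is taken as the late-time targeted-to-control ratio minus one. The exact normalization used in this study may differ, so the code is illustrative only.

```python
# Hedged sketch of a ratiometric paired-agent estimate of binding potential:
# normalize the targeted (Oxazine 4) and untargeted (ICG) channels to an early
# time point, then approximate BP pixel-wise as the late-time ratio minus one.
# Normalization choices vary between protocols; this is illustrative only.
import numpy as np

def binding_potential(targeted, control, early_idx=0, late_idx=-1, eps=1e-9):
    """targeted, control: arrays of shape (time, H, W) for the two channels."""
    t_norm = targeted / (targeted[early_idx] + eps)
    c_norm = control / (control[early_idx] + eps)
    return t_norm[late_idx] / (c_norm[late_idx] + eps) - 1.0
```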

  7. Optimizing the design of a reproduction toxicity test with the pond snail Lymnaea stagnalis.

    PubMed

    Charles, Sandrine; Ducrot, Virginie; Azam, Didier; Benstead, Rachel; Brettschneider, Denise; De Schamphelaere, Karel; Filipe Goncalves, Sandra; Green, John W; Holbech, Henrik; Hutchinson, Thomas H; Faber, Daniel; Laranjeiro, Filipe; Matthiessen, Peter; Norrgren, Leif; Oehlmann, Jörg; Reategui-Zirena, Evelyn; Seeland-Fremer, Anne; Teigeler, Matthias; Thome, Jean-Pierre; Tobor Kaplon, Marysia; Weltje, Lennart; Lagadic, Laurent

    2016-11-01

    This paper presents the results from two ring-tests addressing the feasibility, robustness and reproducibility of a reproduction toxicity test with the freshwater gastropod Lymnaea stagnalis (RENILYS strain). Sixteen laboratories (from inexperienced to expert laboratories in mollusc testing) from nine countries participated in these ring-tests. Survival and reproduction were evaluated in L. stagnalis exposed to cadmium, tributyltin, prochloraz and trenbolone according to an OECD draft Test Guideline. In total, 49 datasets were analysed to assess the practicability of the proposed experimental protocol, and to estimate the between-laboratory reproducibility of toxicity endpoint values. The statistical analysis of count data (number of clutches or eggs per individual-day) leading to ECx estimation was specifically developed and automated through a free web-interface. Based on a complementary statistical analysis, the optimal test duration was established and the most sensitive and cost-effective reproduction toxicity endpoint was identified, to be used as the core endpoint. This validation process and the resulting optimized protocol were used to consolidate the OECD Test Guideline for the evaluation of reproductive effects of chemicals in L. stagnalis. Copyright © 2016 Elsevier Inc. All rights reserved.
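
    As a generic illustration of ECx estimation from reproduction data, the sketch below fits a three-parameter log-logistic curve by least squares and inverts it for ECx. It is a stand-in, not the count-data analysis implemented in the authors' web interface, and the data points are invented.

```python
# Generic ECx estimation by fitting a three-parameter log-logistic curve to mean
# reproduction per individual-day versus exposure concentration. A simple
# least-squares stand-in for illustration; not the count-data analysis behind
# the authors' web interface.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, top, ec50, slope):
    return top / (1.0 + (conc / ec50) ** slope)

conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])      # hypothetical exposure series
repro = np.array([10.2, 9.8, 8.9, 6.1, 2.9, 0.8])     # clutches per individual-day

# fit only on non-zero concentrations (the curve is evaluated for conc > 0)
popt, _ = curve_fit(log_logistic, conc[1:], repro[1:], p0=[10.0, 4.0, 2.0])
top, ec50, slope = popt

def ecx(x):
    """Concentration causing an x% reduction relative to the fitted control level."""
    return ec50 * (x / (100.0 - x)) ** (1.0 / slope)

print("EC50 =", round(ec50, 2), " EC10 =", round(ecx(10), 2))
```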

  8. Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alia, Shaun M.; Pivovar, Bryan S.

    Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 degrees C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 degrees C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than those of Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. Furthermore, these techniques have also been used for a variety of metal combinations, and can be applied to develop catalysts for a number of electrochemical processes.

  9. Dispersion of Nanomaterials in Aqueous Media: Towards Protocol Optimization.

    PubMed

    Kaur, Inder; Ellis, Laura-Jayne; Romer, Isabella; Tantra, Ratna; Carriere, Marie; Allard, Soline; Mayne-L'Hermite, Martine; Minelli, Caterina; Unger, Wolfgang; Potthoff, Annegret; Rades, Steffi; Valsami-Jones, Eugenia

    2017-12-25

    The sonication process is commonly used for de-agglomerating and dispersing nanomaterials in aqueous based media, necessary to improve homogeneity and stability of the suspension. In this study, a systematic step-wise approach is carried out to identify optimal sonication conditions in order to achieve a stable dispersion. This approach has been adopted and shown to be suitable for several nanomaterials (cerium oxide, zinc oxide, and carbon nanotubes) dispersed in deionized (DI) water. However, with any change in either the nanomaterial type or dispersing medium, there needs to be optimization of the basic protocol by adjusting various factors such as sonication time, power, and sonicator type as well as temperature rise during the process. The approach records the dispersion process in detail. This is necessary to identify the time points as well as other above-mentioned conditions during the sonication process in which there may be undesirable changes, such as damage to the particle surface thus affecting surface properties. Our goal is to offer a harmonized approach that can control the quality of the final, produced dispersion. Such a guideline is instrumental in ensuring dispersion quality repeatability in the nanoscience community, particularly in the field of nanotoxicology.

  10. Dispersion of Nanomaterials in Aqueous Media: Towards Protocol Optimization

    PubMed Central

    Kaur, Inder; Ellis, Laura-Jayne; Romer, Isabella; Tantra, Ratna; Carriere, Marie; Allard, Soline; Mayne-L'Hermite, Martine; Minelli, Caterina; Unger, Wolfgang; Potthoff, Annegret; Rades, Steffi; Valsami-Jones, Eugenia

    2017-01-01

    The sonication process is commonly used for de-agglomerating and dispersing nanomaterials in aqueous based media, necessary to improve homogeneity and stability of the suspension. In this study, a systematic step-wise approach is carried out to identify optimal sonication conditions in order to achieve a stable dispersion. This approach has been adopted and shown to be suitable for several nanomaterials (cerium oxide, zinc oxide, and carbon nanotubes) dispersed in deionized (DI) water. However, with any change in either the nanomaterial type or dispersing medium, there needs to be optimization of the basic protocol by adjusting various factors such as sonication time, power, and sonicator type as well as temperature rise during the process. The approach records the dispersion process in detail. This is necessary to identify the time points as well as other above-mentioned conditions during the sonication process in which there may be undesirable changes, such as damage to the particle surface thus affecting surface properties. Our goal is to offer a harmonized approach that can control the quality of the final, produced dispersion. Such a guideline is instrumental in ensuring dispersion quality repeatability in the nanoscience community, particularly in the field of nanotoxicology. PMID:29364209
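
    The dispersion records described above track sonication power, time, and suspension volume; one quantity often derived from these is the acoustic energy delivered per unit volume. The following minimal sketch (an illustration, not part of the published protocol; the function name and units are assumptions) shows how such bookkeeping might be automated:

```python
def delivered_sonication_energy(power_w: float, time_s: float, volume_l: float) -> float:
    """Acoustic energy delivered per unit volume (J/L).

    Hypothetical helper for logging dispersion runs; the published protocol
    does not prescribe this exact bookkeeping.
    """
    if volume_l <= 0:
        raise ValueError("volume must be positive")
    return power_w * time_s / volume_l

# Example: 40 W applied for 10 min to 50 mL of suspension
energy = delivered_sonication_energy(power_w=40.0, time_s=600.0, volume_l=0.05)
print(f"Delivered energy: {energy:.0f} J/L")
```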

  11. Simple proof that Gaussian attacks are optimal among collective attacks against continuous-variable quantum key distribution with a Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex

    2010-06-15

    In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.

  12. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

    PubMed Central

    Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.

    2017-01-01

    Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
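
    The amplification step of VEPART lends itself to a short illustration. The sketch below shows only the nonparametric resampling idea used to build virtual populations from a small sample population; the data values, population sizes, and function names are hypothetical, and the model fitting and optimal-therapy search steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample population: each row is one subject's tumor-volume time course.
sample_population = np.array([
    [100, 140, 190, 260],
    [100, 120, 150, 200],
    [100, 160, 230, 340],
    [100, 130, 170, 220],
], dtype=float)

def virtual_populations(data: np.ndarray, n_populations: int, size: int) -> list:
    """Nonparametric bootstrap: resample subjects with replacement to build
    virtual populations, in the spirit of the VEPART amplification step."""
    populations = []
    for _ in range(n_populations):
        idx = rng.integers(0, data.shape[0], size=size)
        populations.append(data[idx])
    return populations

pops = virtual_populations(sample_population, n_populations=1000, size=10)
# A model would then be fit to each virtual population and the optimal
# protocol compared across fits to judge robustness.
print(len(pops), pops[0].shape)
```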

  13. Cross-layer protocol design for QoS optimization in real-time wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2010-04-01

    The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open System Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally tractable and lead to simplifications for any simulated or analytical performance evaluations of the cross-layer designs.

  14. Whole brain inhomogeneous magnetization transfer (ihMT) imaging: Sensitivity enhancement within a steady-state gradient echo sequence.

    PubMed

    Mchinda, Samira; Varma, Gopal; Prevost, Valentin H; Le Troter, Arnaud; Rapacchi, Stanislas; Guye, Maxime; Pelletier, Jean; Ranjeva, Jean-Philippe; Alsop, David C; Duhamel, Guillaume; Girard, Olivier M

    2018-05-01

    To implement, characterize, and optimize an interleaved inhomogeneous magnetization transfer (ihMT) gradient echo sequence allowing for whole-brain imaging within a clinically compatible scan time. A general framework for ihMT modelling was developed based on the Provotorov theory of radiofrequency saturation, which accounts for the dipolar order underpinning the ihMT effect. Experimental studies and numerical simulations were performed to characterize and optimize the ihMT-gradient echo dependency with sequence timings, saturation power, and offset frequency. The protocol was optimized in terms of maximum signal intensity and the reproducibility assessed for a nominal resolution of 1.5 mm isotropic. All experiments were performed on healthy volunteers at 1.5T. An important mechanism driving signal optimization and leading to strong ihMT signal enhancement that relies on the dynamics of radiofrequency energy deposition has been identified. By taking advantage of the delay allowed for readout between ihMT pulse bursts, it was possible to boost the ihMT signal by almost 2-fold compared to the previous implementation. Reproducibility of the optimal protocol was very good, with an intra-individual error < 2%. The proposed sensitivity-boosted and time-efficient steady-state ihMT-gradient echo sequence, implemented and optimized at 1.5T, allowed robust high-resolution 3D ihMT imaging of the whole brain within a clinically compatible scan time. Magn Reson Med 79:2607-2619, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. Dosage optimization in positron emission tomography: state-of-the-art methods and future prospects

    PubMed Central

    Karakatsanis, Nicolas A; Fokou, Eleni; Tsoumpas, Charalampos

    2015-01-01

    Positron emission tomography (PET) is widely used nowadays for tumor staging and therapy response in the clinic. However, average PET radiation exposure has increased due to higher PET utilization. This study aims to review state-of-the-art PET tracer dosage optimization methods after accounting for the effects of human body attenuation and scan protocol parameters on the counting rate. In particular, the relationship between the noise equivalent count rate (NECR) and the dosage (NECR-dosage curve) for a range of clinical PET systems and body attenuation sizes will be systematically studied to prospectively estimate the minimum dosage required for sufficiently high NECR. The optimization criterion can be determined either as a function of the peak of the NECR-dosage curve or as a fixed NECR score when NECR uniformity across a patient population is important. In addition, the systematic NECR assessments within a controllable environment of realistic simulations and phantom experiments can lead to a NECR-dosage response model, capable of predicting the optimal dosage for every individual PET scan. Unlike conventional guidelines suggesting considerably large dosage levels for obese patients, NECR-based optimization recommends: i) moderate dosage to achieve 90% of peak NECR for obese patients, ii) considerable dosage reduction for slimmer patients such that uniform NECR is attained across the patient population, and iii) prolongation of scans for PET/MR protocols, where longer PET acquisitions are affordable due to lengthy MR sequences, with motion compensation becoming important then. Finally, the need for continuous adaptation of dosage optimization to emerging technologies will be discussed. PMID:26550543
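
    To make the NECR-dosage reasoning concrete, the toy sketch below builds a hypothetical NECR-dosage curve (all rate coefficients are invented for illustration and do not describe any real scanner) and locates the lowest activity reaching 90% of the peak NECR, the criterion discussed above:

```python
import numpy as np

# Toy counting-rate model (all coefficients hypothetical, for illustration only):
# trues and scatter saturate with activity (dead time), randoms grow ~quadratically.
def trues(a):   return 9.0e3 * a / (1.0 + 0.02 * a)
def scatter(a): return 3.5e3 * a / (1.0 + 0.02 * a)
def randoms(a): return 55.0 * a**2

def necr(a):
    t, s, r = trues(a), scatter(a), randoms(a)
    return t**2 / (t + s + r)

activity = np.linspace(1, 600, 2000)          # injected activity (MBq, arbitrary scale)
curve = necr(activity)
peak = curve.max()
# Lowest activity reaching 90% of peak NECR (the criterion discussed above)
a_90 = activity[np.argmax(curve >= 0.9 * peak)]
print(f"peak NECR at {activity[curve.argmax()]:.0f} MBq; 90%-of-peak reached at {a_90:.0f} MBq")
```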

  16. Capillary electrophoretic separation-based approach to determine the labeling kinetics of oligodeoxynucleotides

    PubMed Central

    Kanavarioti, Anastassia; Greenman, Kevin L.; Hamalainen, Mark; Jain, Aakriti; Johns, Adam M.; Melville, Chris R.; Kemmish, Kent; Andregg, William

    2014-01-01

    With the recent advances in electron microscopy (EM), computation, and nanofabrication, the original idea of reading DNA sequence directly from an image can now be tested. One approach is to develop heavy atom labels that can provide the contrast required for EM imaging. While evaluating tentative labels for the respective nucleobases in synthetic oligodeoxynucleotides (oligos), we developed a streamlined capillary electrophoresis (CE) protocol to assess the label stability, reactivity, and selectivity. We report our protocol using osmium tetroxide 2,2′-bipyridine (Osbipy) as a thymidine (T) specific label. The observed rates show that the labeling process is kinetically independent of both the oligo length and the base composition. The conditions, i.e. temperature, optimal Osbipy concentration, and molar ratio of reagents, to promote 100% conversion of the starting oligo to labeled product were established. Hence, the optimized conditions developed with the oligos could be leveraged to allow osmylation of effectively all Ts in single-stranded (ss) DNA, while achieving minimal mislabeling. In addition, the approach and methods employed here may be adapted to the evaluation of other prospective contrasting agents/labels to facilitate next-generation DNA sequencing by EM. PMID:23147698

  17. Effects of maternally-derived antibodies on serologic responses to vaccination in kittens.

    PubMed

    Digangi, Brian A; Levy, Julie K; Griffin, Brenda; Reese, Michael J; Dingman, Patricia A; Tucker, Sylvia J; Dubovi, Edward J

    2012-02-01

    The optimal vaccination protocol to induce immunity in kittens with maternal antibodies is unknown. The objective of this study was to determine the effects of maternally-derived antibody (MDA) on serologic responses to vaccination in kittens. Vaccination with a modified live virus (MLV) product was more effective than an inactivated (IA) product at inducing protective antibody titers (PAT) against feline panleukopenia virus (FPV). IA vaccination against feline herpesvirus-1 (FHV) and feline calicivirus (FCV) was more effective in the presence of low MDA than high MDA. Among kittens with low MDA, MLV vaccination against FCV was more effective than IA vaccination. A total of 15%, 44% and 4% of kittens had insufficient titers against FPV, FHV and FCV, respectively, at 17 weeks of age. Serologic response to vaccination of kittens varies based on vaccination type and MDA level. In most situations, MLV vaccination should be utilized and protocols continued beyond 14 weeks of age to optimize response by all kittens.

  18. Nanoparticle-based photodynamic therapy on non-melanoma skin cancer

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, F.; Arce-Diego, J. L.

    2018-02-01

    There are several advantages of Photodynamic Therapy (PDT) for nonmelanoma skin cancer treatment compared to conventional treatment techniques such as surgery, radiotherapy or chemotherapy. Among these advantages are its noninvasive nature, the use of non-ionizing radiation and its high selectivity. Despite all these advantages, the therapeutic efficiency of the current clinical protocol is not complete in all the patients and depends on the type of pathology. An adequate dosimetry is needed in order to personalize the protocol. There are strategies that try to overcome the current PDT shortcomings, such as the improvement of the photosensitizer accumulation in the target tissue, optical radiation distribution optimization or photochemical reactions maximization. These strategies can be further complemented by the use of nanostructures with conventional PDT. Customized dosimetry for nanoparticle-based PDT requires models in order to adjust parameters of different natures and obtain an optimal tumor removal. In this work, a predictive model of nanoparticle-based PDT is proposed and analyzed. Dosimetry in nanoparticle-based PDT is influenced by the photosensitizer-nanoparticle distribution in the malignant tissue, its influence on the optical radiation distribution and the subsequent photochemical reactions. Nanoparticles are considered as photosensitizer carriers on several types of non-melanoma skin cancer. Shielding effects are taken into account. The results allow comparison of the estimated treatment outcome with and without nanoparticles.
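
    As a purely illustrative companion to the dosimetry discussion, the toy sketch below combines exponential light attenuation, an assumed photosensitizer depth profile, and crude factors for nanoparticle shielding and enhanced delivery; none of the coefficients or functional forms are taken from the authors' model:

```python
import numpy as np

# Toy depth-resolved PDT dose estimate (illustrative only; not the authors' model).
mu_eff = 0.25            # effective optical attenuation in tissue (1/mm), assumed
fluence_surface = 100.0  # delivered fluence at the surface (J/cm^2), assumed

def fluence(depth_mm, shielding=1.0):
    """Exponential light attenuation; `shielding` < 1 mimics extra attenuation
    by nanoparticle carriers near the surface."""
    return shielding * fluence_surface * np.exp(-mu_eff * depth_mm)

def photosensitizer(depth_mm):
    """Assumed photosensitizer distribution decaying with depth."""
    return np.exp(-0.1 * depth_mm)

depths = np.linspace(0.0, 10.0, 50)
pdt_dose_plain = fluence(depths) * photosensitizer(depths)
# The 1.5 factor mimics enhanced photosensitizer delivery by nanoparticles,
# while shielding=0.8 mimics the optical shielding discussed above.
pdt_dose_np = fluence(depths, shielding=0.8) * photosensitizer(depths) * 1.5
print(pdt_dose_plain[:3], pdt_dose_np[:3])
```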

  19. Computational Methods Used in Hit-to-Lead and Lead Optimization Stages of Structure-Based Drug Discovery.

    PubMed

    Heifetz, Alexander; Southey, Michelle; Morao, Inaki; Townsend-Nicholson, Andrea; Bodkin, Mike J

    2018-01-01

    GPCR modeling approaches are widely used in the hit-to-lead (H2L) and lead optimization (LO) stages of drug discovery. The aims of these modeling approaches are to predict the 3D structures of the receptor-ligand complexes, to explore the key interactions between the receptor and the ligand and to utilize these insights in the design of new molecules with improved binding, selectivity or other pharmacological properties. In this book chapter, we present a brief survey of key computational approaches integrated with hierarchical GPCR modeling protocol (HGMP) used in hit-to-lead (H2L) and in lead optimization (LO) stages of structure-based drug discovery (SBDD). We outline the differences in modeling strategies used in H2L and LO of SBDD and illustrate how these tools have been applied in three drug discovery projects.

  20. Enriching peptide libraries for binding affinity and specificity through computationally directed library design

    PubMed Central

    Foight, Glenna Wink; Chen, T. Scott; Richman, Daniel; Keating, Amy E.

    2017-01-01

    Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model. PMID:28236241

  1. Enriching Peptide Libraries for Binding Affinity and Specificity Through Computationally Directed Library Design.

    PubMed

    Foight, Glenna Wink; Chen, T Scott; Richman, Daniel; Keating, Amy E

    2017-01-01

    Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model.
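
    The library design idea can be illustrated with a toy greedy sketch: given hypothetical per-position, per-amino-acid scores, residues are added position by position while the combinatorial library still fits the screening capacity. This is only a simplified stand-in for the authors' procedure, which optimizes a user-specified library score and accounts for chemical diversity:

```python
import numpy as np

rng = np.random.default_rng(1)
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
N_POSITIONS = 4
MAX_LIBRARY_SIZE = 500  # maximum number of distinct peptides that can be screened

# Hypothetical per-position, per-amino-acid scores from experiments or predictions
# (higher = more likely to contribute to tight, specific binding).
scores = rng.normal(size=(N_POSITIONS, len(AMINO_ACIDS)))

def design_library(scores, max_size):
    """Greedy sketch: start from the best residue at each position and keep
    adding the next-best residue (anywhere) while the combinatorial library
    still fits within the screening capacity."""
    order = [list(np.argsort(-scores[p])) for p in range(scores.shape[0])]
    chosen = [[order[p].pop(0)] for p in range(scores.shape[0])]
    improved = True
    while improved:
        improved = False
        # candidate expansions ranked by the score of the residue being added
        candidates = sorted(
            ((scores[p][order[p][0]], p) for p in range(scores.shape[0]) if order[p]),
            reverse=True,
        )
        for _, p in candidates:
            size_if_added = np.prod([len(c) for c in chosen]) // len(chosen[p]) * (len(chosen[p]) + 1)
            if size_if_added <= max_size:
                chosen[p].append(order[p].pop(0))
                improved = True
                break
    return [[AMINO_ACIDS[i] for i in c] for c in chosen]

library = design_library(scores, MAX_LIBRARY_SIZE)
print(library, "library size =", np.prod([len(c) for c in library]))
```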

  2. Optimizing a dynamical decoupling protocol for solid-state electronic spin ensembles in diamond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farfurnik, D.; Jarmola, A.; Pham, L. M.

    2015-08-24

    In this study, we demonstrate significant improvements of the spin coherence time of a dense ensemble of nitrogen-vacancy (NV) centers in diamond through optimized dynamical decoupling (DD). Cooling the sample down to 77 K suppresses longitudinal spin relaxation (T1) effects and DD microwave pulses are used to increase the transverse coherence time T2 from ~0.7 ms up to ~30 ms. Furthermore, we extend previous work of single-axis (Carr-Purcell-Meiboom-Gill) DD towards the preservation of arbitrary spin states. Following a theoretical and experimental characterization of pulse and detuning errors, we compare the performance of various DD protocols. We also identify that the optimal control scheme for preserving an arbitrary spin state is a recursive protocol, the concatenated version of the XY8 pulse sequence. The improved spin coherence might have an immediate impact on improvements of the sensitivities of ac magnetometry. Moreover, the protocol can be used on denser diamond samples to increase coherence times up to NV-NV interaction time scales, a major step towards the creation of quantum collective NV spin states.
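
    For readers unfamiliar with the pulse sequences compared above, the sketch below generates the timing and phase pattern of a plain (non-concatenated) XY8-N train; the concatenated variant identified as optimal in the study nests this pattern recursively, which is not shown here:

```python
def xy8_sequence(n_repeats: int, tau: float):
    """Return (time, phase) pairs for an XY8-N dynamical-decoupling train.

    tau is the inter-pulse spacing; pi pulses sit at (k + 0.5) * tau so that
    free evolution of tau/2 precedes the first and follows the last pulse.
    Phases follow the XY8 pattern X Y X Y Y X Y X, which symmetrizes pulse
    errors for arbitrary initial spin states.
    """
    phases = ["X", "Y", "X", "Y", "Y", "X", "Y", "X"] * n_repeats
    times = [(k + 0.5) * tau for k in range(len(phases))]
    return list(zip(times, phases))

# Example: XY8-2 (16 pi pulses) with 1 microsecond spacing
for t, ph in xy8_sequence(n_repeats=2, tau=1.0e-6)[:8]:
    print(f"{t*1e6:5.2f} us  pi-pulse about {ph}")
```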

  3. Magnetic resonance imaging protocols for examination of the neurocranium at 3 T.

    PubMed

    Schwindt, W; Kugel, H; Bachmann, R; Kloska, S; Allkemper, T; Maintz, D; Pfleiderer, B; Tombach, B; Heindel, W

    2003-09-01

    The increasing availability of high-field (3 T) MR scanners requires adapting and optimizing clinical imaging protocols to exploit the theoretically higher signal-to-noise ratio (SNR) of the higher field strength. Our aim was to establish reliable and stable protocols meeting the clinical demands for imaging the neurocranium at 3 T. Two hundred patients with a broad range of indications received an examination of the neurocranium with an appropriate assortment of imaging techniques at 3 T. Several imaging parameters were optimized. Keeping scan times comparable to those at 1.5 T we increased spatial resolution. Contrast-enhanced and non-enhanced T1-weighted imaging was best applying gradient-echo and inversion recovery (rather than spin-echo) techniques, respectively. For fluid-attenuated inversion recovery (FLAIR) imaging a TE of 120 ms yielded optimum contrast-to-noise ratio (CNR). High-resolution isotropic 3D data sets were acquired within reasonable scan times. Some artifacts were pronounced, but generally imaging profited from the higher SNR. We present a set of optimized examination protocols for neuroimaging at 3 T, which proved to be reliable in a clinical routine setting.

  4. Design and Practical Evaluation of a Family of Lightweight Protocols for Heterogeneous Sensing through BLE Beacons in IoT Telemetry Applications.

    PubMed

    Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J

    2017-12-27

    The Internet of Things (IoT) involves a wide variety of heterogeneous technologies and resource-constrained devices that interact with each other. Due to such constraints, IoT devices usually require lightweight protocols that optimize the use of resources and energy consumption. Among the different commercial IoT devices, Bluetooth and Bluetooth Low Energy (BLE)-based beacons, which periodically broadcast certain data packets to notify their presence, have experienced remarkable growth, especially due to their application in indoor positioning systems. This article proposes a family of protocols named Lightweight Protocol for Sensors (LP4S) that provides fast responses and enables plug-and-play mechanisms that allow IoT telemetry systems to discover new nodes and to describe and auto-register the sensors and actuators connected to a beacon. Thus, three protocols are defined depending on the beacon hardware characteristics: LP4S-6 (for resource-constrained beacons), LP4S-X (for more powerful beacons) and LP4S-J (for beacons able to run complex firmware). In order to demonstrate the capabilities of the designed protocols, the most restrictive (LP4S-6) is tested after implementing it for a telemetry application in a beacon based on Eddystone (Google's open beacon format). Thus, the beacon specification is extended in order to increase its ability to manage unlimited sensors in a telemetry system without interfering in its normal operation with Eddystone frames. The performed experiments show the feasibility of the proposed solution and its superiority, in terms of latency and energy consumption, with respect to approaches based on the Generic Attribute Profile (GATT) when multiple users connect to a mote or in scenarios where latency is not a restriction, but where low-energy consumption is essential.

  5. Technical Note: Independent component analysis for quality assurance in functional MRI.

    PubMed

    Astrakas, Loukas G; Kallistis, Nikolaos S; Kalef-Ezra, John A

    2016-02-01

    Independent component analysis (ICA) is an established method of analyzing human functional MRI (fMRI) data. Here, an ICA-based fMRI quality control (QC) tool, designed for use with a commercial phantom, was developed and used. In an attempt to assess the performance of the tool relative to preexisting alternative tools, it was used seven weeks before and eight weeks after repair of a faulty gradient amplifier of a non-state-of-the-art MRI unit. More specifically, its performance was compared with the AAPM Report 100 acceptance testing and quality assurance protocol and two fMRI QC protocols, proposed by Friedman et al. ["Report on a multicenter fMRI quality assurance protocol," J. Magn. Reson. Imaging 23, 827-839 (2006)] and Stocker et al. ["Automated quality assurance routines for fMRI data applied to a multicenter study," Hum. Brain Mapp. 25, 237-246 (2005)], respectively. The easily developed and applied ICA-based QC protocol provided fMRI QC indices and maps equally sensitive to fMRI instabilities as the indices and maps of other established protocols. The ICA fMRI QC indices were highly correlated with indices of other fMRI QC protocols and in some cases theoretically related to them. Three or four independent components with slowly varying time series are detected under normal conditions. ICA applied on phantom measurements is an easy and efficient tool for fMRI QC. Additionally, it can protect against misinterpretations of artifact components as human brain activations. Evaluating fMRI QC indices in the central region of a phantom is not always the optimal choice.
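
    A minimal sketch of the underlying idea, applying ICA to synthetic phantom-like time series, is given below; the mixing model and component count are invented for illustration and the sketch is not the authors' QC tool:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(42)
n_timepoints, n_voxels = 200, 500
t = np.arange(n_timepoints)

# Synthetic phantom run: a slow drift, a periodic (e.g. vibration) artifact,
# and Gaussian noise mixed into voxel time series.
drift = 0.01 * t
periodic = np.sin(2 * np.pi * t / 20.0)
sources = np.vstack([drift, periodic])                   # (2, time)
mixing = rng.normal(size=(n_voxels, 2))
data = mixing @ sources + 0.1 * rng.normal(size=(n_voxels, n_timepoints))

# ICA on the voxel x time matrix; component time courses with slowly varying
# trends are expected under normal (artifact-free) conditions, as noted above.
ica = FastICA(n_components=3, random_state=0)
component_timecourses = ica.fit_transform(data.T)        # (time, components)
spatial_maps = ica.mixing_                               # (voxels, components)
print(component_timecourses.shape, spatial_maps.shape)
```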

  6. Design and Practical Evaluation of a Family of Lightweight Protocols for Heterogeneous Sensing through BLE Beacons in IoT Telemetry Applications

    PubMed Central

    2017-01-01

    The Internet of Things (IoT) involves a wide variety of heterogeneous technologies and resource-constrained devices that interact with each other. Due to such constraints, IoT devices usually require lightweight protocols that optimize the use of resources and energy consumption. Among the different commercial IoT devices, Bluetooth and Bluetooth Low Energy (BLE)-based beacons, which periodically broadcast certain data packets to notify their presence, have experienced remarkable growth, especially due to their application in indoor positioning systems. This article proposes a family of protocols named Lightweight Protocol for Sensors (LP4S) that provides fast responses and enables plug-and-play mechanisms that allow IoT telemetry systems to discover new nodes and to describe and auto-register the sensors and actuators connected to a beacon. Thus, three protocols are defined depending on the beacon hardware characteristics: LP4S-6 (for resource-constrained beacons), LP4S-X (for more powerful beacons) and LP4S-J (for beacons able to run complex firmware). In order to demonstrate the capabilities of the designed protocols, the most restrictive (LP4S-6) is tested after implementing it for a telemetry application in a beacon based on Eddystone (Google’s open beacon format). Thus, the beacon specification is extended in order to increase its ability to manage unlimited sensors in a telemetry system without interfering in its normal operation with Eddystone frames. The performed experiments show the feasibility of the proposed solution and its superiority, in terms of latency and energy consumption, with respect to approaches based on the Generic Attribute Profile (GATT) when multiple users connect to a mote or in scenarios where latency is not a restriction, but where low-energy consumption is essential. PMID:29280975
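
    The LP4S frame layouts themselves are defined in the paper rather than in this abstract, so the sketch below is a purely hypothetical illustration of the general idea of packing typed sensor readings into a small advertisement payload; the field sizes, sensor-type codes, and byte budget are all assumptions:

```python
import struct

# Hypothetical packing of sensor readings into a small beacon advertisement
# payload (the actual LP4S-6/X/J frame formats are defined in the paper, not here).
def pack_telemetry(beacon_id: int, readings: dict) -> bytes:
    """beacon_id: 16-bit ID; readings: {sensor_type_id: value} with values
    scaled to signed 16-bit integers (two decimal places)."""
    payload = struct.pack("<H", beacon_id)
    for sensor_type, value in readings.items():
        payload += struct.pack("<Bh", sensor_type, int(round(value * 100)))
    if len(payload) > 20:  # assumed budget left in a BLE advertising frame
        raise ValueError("payload exceeds advertisement budget")
    return payload

frame = pack_telemetry(0x0102, {0x01: 23.57, 0x02: 48.2})  # e.g. temperature, humidity
print(frame.hex())
```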

  7. SU-F-R-11: Designing Quality and Safety Informatics Through Implementation of a CT Radiation Dose Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, JM; Samei, E; Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, NC

    2016-06-15

    Purpose: Recent legislative and accreditation requirements have driven rapid development and implementation of CT radiation dose monitoring solutions. Institutions must determine how to improve quality, safety, and consistency of their clinical performance. The purpose of this work was to design a strategy and meaningful characterization of results from an in-house, clinically-deployed dose monitoring solution. Methods: A dose monitoring platform was designed by our imaging physics group that focused on extracting protocol parameters, dose metrics, and patient demographics and size. Compared to most commercial solutions, which focus on individual exam alerts and global thresholds, the program sought to characterize overall consistency and targeted thresholds based on eight analytic interrogations. Those were based on explicit questions related to protocol application, national benchmarks, protocol and size-specific dose targets, operational consistency, outliers, temporal trends, intra-system variability, and consistent use of electronic protocols. Using historical data since the start of 2013, 95% and 99% intervals were used to establish yellow and amber parameterized dose alert thresholds, respectively, as a function of protocol, scanner, and size. Results: Quarterly reports have been generated for three hospitals for 3 quarters of 2015 totaling 27880, 28502, 30631 exams, respectively. Four adult and two pediatric protocols were higher than external institutional benchmarks. Four protocol dose levels were being inconsistently applied as a function of patient size. For the three hospitals, the minimum and maximum amber outlier percentages were [1.53%,2.28%], [0.76%,1.8%], [0.94%,1.17%], respectively. Compared with the electronic protocols, 10 protocols were found to be used with some inconsistency. Conclusion: Dose monitoring can satisfy requirements with global alert thresholds and patient dose records, but the real value is in optimizing patient-specific protocols, balancing image quality trade-offs that dose-reduction strategies promise, and improving the performance and consistency of a clinical operation. Data plots that capture patient demographics and scanner performance demonstrate that value.
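
    The 95%/99% interval idea described above can be sketched in a few lines; the distribution, dose values, and bin structure below are hypothetical and the snippet is not the deployed monitoring platform:

```python
import numpy as np

def dose_alert_thresholds(historical_ctdi: np.ndarray):
    """Yellow/amber alert thresholds from historical dose data for one
    (protocol, scanner, patient-size) bin, following the 95%/99% interval
    idea described above (a sketch, not the deployed system)."""
    yellow = np.percentile(historical_ctdi, 95)
    amber = np.percentile(historical_ctdi, 99)
    return yellow, amber

rng = np.random.default_rng(7)
history = rng.lognormal(mean=2.0, sigma=0.25, size=5000)   # hypothetical CTDIvol values
yellow, amber = dose_alert_thresholds(history)

new_exam = 14.2
flag = "amber" if new_exam > amber else "yellow" if new_exam > yellow else "ok"
print(f"thresholds: yellow={yellow:.1f}, amber={amber:.1f}; new exam -> {flag}")
```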

  8. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    DOEpatents

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
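
    As a simple illustration of the kind of kinetic prediction such a system performs, the toy model below propagates template copy number through PCR cycles with a fixed efficiency and reagent-limited saturation; sequence-dependent hybridization kinetics and thermodynamics, which the patented method explicitly models, are omitted:

```python
import numpy as np

def pcr_curve(n_cycles: int, n0: float, efficiency: float, capacity: float):
    """Toy per-cycle PCR model: template grows with the given efficiency
    until reagent depletion (logistic-style saturation at `capacity`).
    Illustrative only; sequence-dependent hybridization is not modeled here."""
    n = [n0]
    for _ in range(n_cycles):
        growth = efficiency * n[-1] * (1.0 - n[-1] / capacity)
        n.append(n[-1] + growth)
    return np.array(n)

copies = pcr_curve(n_cycles=35, n0=1e3, efficiency=0.95, capacity=1e12)
print(f"copies after 35 cycles: {copies[-1]:.2e}")
```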

  9. Optimization and validation of a fast amplification protocol for AmpFlSTR® Profiler Plus® for rapid forensic human identification.

    PubMed

    Laurin, Nancy; Frégeau, Chantal

    2012-01-01

    The goal of this work was to optimize and validate a fast amplification protocol for the multiplex amplification of the STR loci included in AmpFlSTR(®) Profiler Plus(®) to expedite human DNA identification. By modifying the cycling conditions and by combining the use of a DNA polymerase optimized for high speed PCR (SpeedSTAR™ HS) and a more efficient thermal cycler instrument (Bio-RAD C1000™), we were able to reduce the amplification process from 4h to 26 min. No modification to the commercial AmpFlSTR(®) Profiler Plus(®) primer mix was required. When compared to the current Royal Canadian Mounted Police (RCMP) amplification protocol, no differences with regards to specificity, sensitivity, heterozygote peak height ratios and overall profile balance were noted. Moreover, complete concordance was obtained with profiles previously generated with the standard amplification protocol and minor alleles in mixture samples were reliably typed. An increase in n-4 stutter ratios (2.2% on average for all loci) was observed for profiles amplified with the fast protocol compared to the current procedure. Our results document the robustness of this rapid amplification protocol for STR profiling using the AmpFlSTR(®) Profiler Plus(®) primer set and demonstrate that comparable data can be obtained in substantially less time. This new approach could provide an alternative option to current multiplex STR typing amplification protocols in order to increase throughput or expedite time-sensitive cases. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.

    PubMed

    Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A

    2017-05-01

    Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.

  11. Refined protocols of tamoxifen injection for inducible DNA recombination in mouse astroglia.

    PubMed

    Jahn, Hannah M; Kasakow, Carmen V; Helfer, Andreas; Michely, Julian; Verkhratsky, Alexei; Maurer, Hans H; Scheller, Anja; Kirchhoff, Frank

    2018-04-12

    Inducible DNA recombination of floxed alleles in vivo by liver metabolites of tamoxifen (TAM) is an important tool to study gene functions. Here, we describe protocols for optimal DNA recombination in astrocytes, based on the GLAST-CreERT2/loxP system. In addition, we demonstrate that quantification of genomic recombination allows the proportion of cell types in various brain regions to be determined. We analyzed the presence and clearance of TAM and its metabolites (N-desmethyl-tamoxifen, 4-hydroxytamoxifen and endoxifen) in brain and serum of mice by liquid chromatographic-high resolution-tandem mass spectrometry (LC-HR-MS/MS) and assessed optimal injection protocols by quantitative RT-PCR of several floxed target genes (p2ry1, gria1, gabbr1 and Rosa26-tdTomato locus). Maximal recombination could be achieved in cortex and cerebellum by single daily injections for five and three consecutive days, respectively. Furthermore, quantifying the loss of floxed alleles predicted the percentage of GLAST-positive cells (astroglia) per brain region. We found that astrocytes contributed 20 to 30% of the total cell number in cortex, hippocampus, brainstem and optic nerve, while in the cerebellum Bergmann glia, velate astrocytes and white matter astrocytes accounted for only 8% of all cells.

  12. An Efficient Electroporation Protocol for the Genetic Modification of Mammalian Cells

    PubMed Central

    Chicaybam, Leonardo; Barcelos, Camila; Peixoto, Barbara; Carneiro, Mayra; Limia, Cintia Gomez; Redondo, Patrícia; Lira, Carla; Paraguassú-Braga, Flávio; Vasconcelos, Zilton Farias Meira De; Barros, Luciana; Bonamino, Martin Hernán

    2017-01-01

    Genetic modification of cell lines and primary cells is an expensive and cumbersome approach, often involving the use of viral vectors. Electroporation using square-wave generating devices, like Lonza’s Nucleofector, is a widely used option, but the costs associated with the acquisition of electroporation kits and the transient transgene expression might hamper the utility of this methodology. In the present work, we show that our in-house developed buffers, termed Chicabuffers, can be efficiently used to electroporate cell lines and primary cells from murine and human origin. Using the Nucleofector II device, we electroporated 14 different cell lines and also primary cells, like mesenchymal stem cells and cord blood CD34+, providing optimized protocols for each of them. Moreover, when combined with the Sleeping Beauty-based transposon system, long-term transgene expression could be achieved in all types of cells tested. Transgene expression was stable and did not interfere with CD34+ differentiation to committed progenitors. We also show that these buffers can be used in CRISPR-mediated editing of the PDCD1 gene locus in 293T cells and human peripheral blood mononuclear cells. The optimized protocols reported in this study provide a suitable and cost-effective platform for the genetic modification of cells, facilitating the widespread adoption of this technology. PMID:28168187

  13. Optimal Sensor Placement for Measuring Physical Activity with a 3D Accelerometer

    PubMed Central

    Boerema, Simone T.; van Velsen, Lex; Schaake, Leendert; Tönis, Thijs M.; Hermens, Hermie J.

    2014-01-01

    Accelerometer-based activity monitors are popular for monitoring physical activity. In this study, we investigated optimal sensor placement for increasing the quality of studies that utilize accelerometer data to assess physical activity. We performed a two-stage study, focused on sensor location and type of mounting. Ten subjects walked at various walking speeds on a treadmill, performed a deskwork protocol, and walked on level ground, while simultaneously wearing five ProMove2 sensors with a snug fit on an elastic waist belt. We found that sensor location, type of activity, and their interaction effect affected sensor output. The most lateral positions on the waist belt were the least sensitive to interference. The effect of mounting was explored by making two subjects repeat the experimental protocol with sensors more loosely fitted to the elastic belt. The loose fit resulted in lower sensor output, except for the deskwork protocol, where output was higher. In order to increase the reliability and to reduce the variability of sensor output, researchers should place activity sensors on the most lateral position of a participant's waist belt. If the sensor hampers free movement, it may be positioned slightly more forward on the belt. Finally, sensors should be fitted tightly to the body. PMID:24553085

  14. Standardisation of neonatal clinical practice.

    PubMed

    Bhutta, Z A; Giuliani, F; Haroon, A; Knight, H E; Albernaz, E; Batra, M; Bhat, B; Bertino, E; McCormick, K; Ochieng, R; Rajan, V; Ruyan, P; Cheikh Ismail, L; Paul, V

    2013-09-01

    The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) is a large-scale, population-based, multicentre project involving health institutions from eight geographically diverse countries, which aims to assess fetal, newborn and preterm growth under optimal conditions. Given the multicentre nature of the project and the expected number of preterm births, it is vital that all centres follow the same standardised clinical care protocols to assess and manage preterm infants, so as to ensure maximum validity of the resulting standards as indicators of growth and nutrition with minimal confounding. Moreover, it is well known that evidence-based clinical practice guidelines can reduce the delivery of inappropriate care and support the introduction of new knowledge into clinical practice. The INTERGROWTH-21st Neonatal Group produced an operations manual, which reflects the consensus reached by members of the group regarding standardised definitions of neonatal morbidities and the minimum standards of care to be provided by all centres taking part in the project. The operational definitions and summary management protocols were developed by consensus through a Delphi process based on systematic reviews of relevant guidelines and management protocols by authoritative bodies. This paper describes the process of developing the Basic Neonatal Care Manual, as well as the morbidity definitions and standardised neonatal care protocols applied across all the INTERGROWTH-21st participating centres. Finally, thoughts about implementation strategies are presented. © 2013 Royal College of Obstetricians and Gynaecologists.

  15. NREL, Mercedes-Benz Optimizing Refueling Experience for Fuel Cell Electric

    Science.gov Websites

    NREL and Mercedes-Benz are working together to optimize the customer refueling experience for fuel cell electric vehicles, evaluating fueling protocols with an eye toward improving the refueling station's customer interface.

  16. Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.

    PubMed

    Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G

    2016-05-01

    In vehicle crashworthiness design optimization, detailed system evaluations capable of producing reliable results are typically achieved through high-order numerical computational (HNC) models such as the dynamic finite element model, the mesh-free model, etc. However, the application of these models, especially during optimization studies, is challenged by their inherent high demand on computational resources, the conditional stability of the solution process, and the lack of knowledge of a viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in the evaluation of vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion, which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to full-scale design-optimization problems involving full vehicle models with greater structural detail requires more theoretical development.
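
    The 2DOF lumped-mass-spring impact model at the core of the AEMS can be sketched directly; in the toy simulation below the masses, stiffnesses, and impact speed are invented for illustration, contact release is ignored, and the absorbable energy ratio is simply the share of peak spring energy stored in the crush spring:

```python
# Toy 2DOF lumped-mass-spring frontal impact model (illustrative parameters only):
# m1 = front crush zone, m2 = passenger compartment; k1 = barrier/crush spring,
# k2 = spring coupling the two masses. Semi-implicit Euler integration.
m1, m2 = 300.0, 1200.0          # kg
k1, k2 = 8.0e5, 2.0e6           # N/m
v0 = 15.6                        # initial speed (m/s), roughly 56 km/h
dt, t_end = 1.0e-5, 0.15

u1 = u2 = 0.0                   # crush-direction displacements (m)
v1 = v2 = v0
e1_max = e2_max = 0.0           # peak energy stored in each spring

for _ in range(int(t_end / dt)):
    f1 = -k1 * u1 + k2 * (u2 - u1)        # force on m1 (barrier + coupling springs)
    f2 = -k2 * (u2 - u1)                  # force on m2 (coupling spring only)
    v1 += f1 / m1 * dt
    v2 += f2 / m2 * dt
    u1 += v1 * dt
    u2 += v2 * dt
    e1_max = max(e1_max, 0.5 * k1 * u1**2)
    e2_max = max(e2_max, 0.5 * k2 * (u2 - u1)**2)

total = 0.5 * (m1 + m2) * v0**2
print(f"kinetic energy {total/1e3:.1f} kJ; peak absorbed: "
      f"spring1 {e1_max/1e3:.1f} kJ, spring2 {e2_max/1e3:.1f} kJ "
      f"(absorbable energy ratio ~ {e1_max/(e1_max+e2_max):.2f})")
```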

  17. Optimal continuous variable quantum teleportation protocol for realistic settings

    NASA Astrophysics Data System (ADS)

    Luiz, F. S.; Rigolin, Gustavo

    2015-03-01

    We show the optimal setup that allows Alice to teleport coherent states |α⟩ to Bob with the greatest fidelity (efficiency) when one takes into account two realistic assumptions. The first one is the fact that in any actual implementation of the continuous variable teleportation protocol (CVTP) Alice and Bob necessarily share non-maximally entangled states (two-mode finitely squeezed states). The second one assumes that Alice's pool of possible coherent states to be teleported to Bob does not cover the whole complex plane (|α| < ∞). The optimal strategy is achieved by tuning three parameters in the original CVTP, namely, Alice's beam splitter transmittance and Bob's displacements in position and momentum implemented on the teleported state. These slight changes in the protocol are currently easy to implement and, as we show, give a considerable gain in performance for a variety of possible pools of input states with Alice.

  18. Using Green Star Metrics to Optimize the Greenness of Literature Protocols for Syntheses

    ERIC Educational Resources Information Center

    Duarte, Rita C. C.; Ribeiro, M. Gabriela T. C.; Machado, Adélio A. S. C.

    2015-01-01

    A procedure to improve the greenness of a synthesis, without performing laboratory work, using alternative protocols available in the literature is presented. The greenness evaluation involves the separate assessment of the different steps described in the available protocols--reaction, isolation, and purification--as well as the global process,…

  19. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.

    PubMed

    Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio

    2015-08-28

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. Thus, there is no common hardware platform that can be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.
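
    As a rough illustration of ECC-based key negotiation between two devices (using the generic `cryptography` package as a stand-in; the paper's optimized curve arithmetic for the JN5148 and MSP430 platforms is not reproduced), an ECDH exchange followed by key derivation might look like this:

```python
# Minimal ECDH key-agreement sketch; an assumption-laden stand-in, not the
# paper's protocol or its constrained-device implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates an ephemeral key pair on an agreed curve.
device_a_priv = ec.generate_private_key(ec.SECP256R1())
device_b_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged over the IoT link and each side derives the same
# shared secret, then stretches it into a session key.
shared_a = device_a_priv.exchange(ec.ECDH(), device_b_priv.public_key())
shared_b = device_b_priv.exchange(ec.ECDH(), device_a_priv.public_key())

def session_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"iot-session").derive(shared)

assert session_key(shared_a) == session_key(shared_b)
print("negotiated 256-bit session key:", session_key(shared_a).hex())
```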

  20. Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horoshko, D B

    2007-12-31

    The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)

  1. Characterization of technological features of dry yeast (strain I-7-43) preparation, product of electrofusion between Saccharomyces cerevisiae and Saccharomyces diastaticus, in industrial application.

    PubMed

    Kotarska, Katarzyna; Kłosowski, Grzegorz; Czupryński, Bogusław

    2011-06-10

    The aim of the study was to verify the technological usability and stability of biotechnological features of an active dry distillery yeast preparation (strain I-7-43, with amylolytic abilities) applied to full-scale production in an agricultural distillery. Various reduced doses of glucoamylase preparation (San-Extra L) were used for starch saccharification, from 90% to 70% in relation to the full standard dose of preparation. The dry distillery yeast I-7-43 was assessed positively with respect to fermentation activity and yield of ethanol production. Application of the dry yeast I-7-43 preparation in distillery practice lowers the costs of spirit production by saving the glucoamylase preparation (up to 30%) used in the process of mash saccharification. Concentrations of the volatile fermentation by-products in raw spirits obtained from fermentations with application of the I-7-43 strain were at levels guaranteeing good organoleptic properties of distillates. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Chromoendoscopy in magnetically guided capsule endoscopy

    PubMed Central

    2013-01-01

    Background Diagnosis of intestinal metaplasia and dysplasia via conventional endoscopy is characterized by low interobserver agreement and poor correlation with histopathologic findings. Chromoendoscopy significantly enhances the visibility of mucosa irregularities, like metaplasia and dysplasia mucosa. Magnetically guided capsule endoscopy (MGCE) offers an alternative technology for upper GI examination. We expect the difficulties of diagnosing neoplasms in conventional endoscopy to transfer to MGCE. Thus, we aim to chart a path for the application of chromoendoscopy in MGCE via an ex-vivo animal study. Methods We propose a modified preparation protocol which adds a staining step to the existing MGCE preparation protocol. An optimal staining concentration is quantitatively determined for different stain types and pathologies. To that end, 190 pig stomach tissue samples with and without lesion imitations were stained with different dye concentrations. Quantitative visual criteria are introduced to measure the quality of the staining with respect to mucosa and lesion visibility. The optimal concentrations thus determined are tested in an ex-vivo pig stomach experiment under magnetic guidance of an endoscopic capsule with the modified protocol. Results We found that the proposed protocol modification does not impact the visibility in the stomach or the steerability of the endoscopy capsule. An average optimal staining concentration for the proposed protocol was found at 0.4% for Methylene blue and Indigo carmine. The lesion visibility is improved using the previously obtained optimal dye concentration. Conclusions We conclude that chromoendoscopy may be applied in MGCE and improves mucosa and lesion visibility. Systematic evaluation provides important information on appropriate staining concentration. However, further animal and human in-vivo studies are necessary. PMID:23758801

  3. Analyzing pERK Activation During Planarian Regeneration.

    PubMed

    Fraguas, Susanna; Umesono, Yoshihiko; Agata, Kiyokazu; Cebrià, Francesc

    2017-01-01

    Planarians are an ideal model in which to study stem cell-based regeneration. After amputation, planarian pluripotent stem cells surrounding the wound proliferate to produce the regenerative blastema, in which they differentiate into the missing tissues and structures. Recent independent studies in planarians have shown that Smed-egfr-3, a gene encoding a homologue of epidermal growth factor (EGF) receptors, and DjerkA, which encodes an extracellular signal-regulated kinase (ERK), may control cell differentiation and blastema growth. However, because these studies were carried out in two different planarian species, the relationship between these two genes remains unclear. We have optimized anti-pERK immunostaining in Schmidtea mediterranea using the original protocol developed in Dugesia japonica. Both protocols are reported here, as most laboratories worldwide work with one of these two species. Using this protocol we have determined that Smed-egfr-3 appears to be necessary for pERK activation during planarian regeneration.

  4. Capacity estimation and verification of quantum channels with arbitrarily correlated errors.

    PubMed

    Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie

    2018-01-02

    The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of these protocols is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.

  5. A general protocol for creating high-throughput screening assays for reaction yield and enantiomeric excess applied to hydrobenzoin

    PubMed Central

    Shabbir, Shagufta H.; Regan, Clinton J.; Anslyn, Eric V.

    2009-01-01

    A general approach to high-throughput screening of enantiomeric excess (ee) and concentration was developed by using indicator displacement assays (IDAs), and the protocol was then applied to the vicinal diol hydrobenzoin. The method involves the sequential utilization of what we define herein as screening, training, and analysis plates. Several enantioselective boronic acid-based receptors were screened by using 96-well plates, both for their ability to discriminate the enantiomers of hydrobenzoin and to find their optimal pairing with indicators resulting in the largest optical responses. The best receptor/indicator combination was then used to train an artificial neural network to determine concentration and ee. To prove the practicality of the developed protocol, analysis plates were created containing true unknown samples of hydrobenzoin generated by established Sharpless asymmetric dihydroxylation reactions, and the best ligand was correctly identified. PMID:19332790

  6. Molecular recognition and self-assembly special feature: A general protocol for creating high-throughput screening assays for reaction yield and enantiomeric excess applied to hydrobenzoin.

    PubMed

    Shabbir, Shagufta H; Regan, Clinton J; Anslyn, Eric V

    2009-06-30

    A general approach to high-throughput screening of enantiomeric excess (ee) and concentration was developed by using indicator displacement assays (IDAs), and the protocol was then applied to the vicinal diol hydrobenzoin. The method involves the sequential utilization of what we define herein as screening, training, and analysis plates. Several enantioselective boronic acid-based receptors were screened by using 96-well plates, both for their ability to discriminate the enantiomers of hydrobenzoin and to find their optimal pairing with indicators resulting in the largest optical responses. The best receptor/indicator combination was then used to train an artificial neural network to determine concentration and ee. To prove the practicality of the developed protocol, analysis plates were created containing true unknown samples of hydrobenzoin generated by established Sharpless asymmetric dihydroxylation reactions, and the best ligand was correctly identified.
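
    The training-plate step can be illustrated with a small synthetic example: optical responses from two hypothetical receptor/indicator wells are generated from an invented response model and a neural network is trained to recover concentration and ee. Nothing below reproduces the actual assay chemistry or data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic "training plate": each sample is a vector of optical responses from
# two receptor/indicator wells; targets are (total concentration, enantiomeric excess).
# The response model below is made up purely to illustrate the workflow.
n = 300
conc = rng.uniform(1.0, 10.0, n)            # mM
ee = rng.uniform(-1.0, 1.0, n)              # -1 = pure (S), +1 = pure (R)
well_1 = conc * (1.0 + 0.40 * ee) + 0.05 * rng.normal(size=n)
well_2 = conc * (1.0 - 0.35 * ee) + 0.05 * rng.normal(size=n)
X = np.column_stack([well_1, well_2])
y = np.column_stack([conc, ee])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)

# "Analysis plate": predict concentration and ee for an unknown sample's readings
unknown = np.array([[6.8, 4.1]])
pred_conc, pred_ee = model.predict(unknown)[0]
print(f"predicted concentration ~ {pred_conc:.1f} mM, ee ~ {pred_ee:+.2f}")
```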

  7. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  8. Social-aware data dissemination in opportunistic mobile social networks

    NASA Astrophysics Data System (ADS)

    Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Han, Xiaowei

    Opportunistic Mobile Social Networks (OMSNs), formed by mobile users with social relationships and characteristics, enhance spontaneous communication among users that opportunistically encounter each other. Such networks can be exploited to improve the performance of data forwarding. Discovering optimal relay nodes is one of the important issues for efficient data propagation in OMSNs. Although traditional centrality definitions identify node features in a network, they cannot effectively identify the influential nodes for data dissemination in OMSNs. Existing protocols take advantage of spatial contact frequency and social characteristics to enhance transmission performance. However, they have not fully exploited the benefits of the relations and interactions between geographical information, social features and user interests. In this paper, we first evaluate these three characteristics of users and design a routing protocol called Geo-Social-Interest (GSI) to select optimal relay nodes. We compare the performance of GSI using real INFOCOM06 data sets. The experimental results demonstrate that GSI outperforms the other protocols, with the highest data delivery ratio and low communication overhead.
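
    To illustrate how geographic, social, and interest information might be combined when choosing a relay, the sketch below scores candidate nodes with a weighted sum; the weights, metrics, and data structures are assumptions for illustration and are not the GSI definitions:

```python
# Hypothetical relay-scoring sketch in the spirit of GSI: combine geographic
# progress toward the destination, social tie strength, and interest similarity.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    dist_to_dest_km: float            # geographic distance to the destination
    tie_strength: float               # social tie with the destination, 0..1
    interests: set = field(default_factory=set)

def interest_similarity(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def relay_score(candidate: Node, current: Node, content_tags: set,
                w_geo=0.4, w_social=0.35, w_interest=0.25) -> float:
    # Normalized geographic progress toward the destination (0 if moving away)
    geo_gain = max(0.0, current.dist_to_dest_km - candidate.dist_to_dest_km)
    geo = geo_gain / max(current.dist_to_dest_km, 1e-9)
    return (w_geo * geo + w_social * candidate.tie_strength
            + w_interest * interest_similarity(candidate.interests, content_tags))

current = Node("me", 5.0, 0.1, {"sports"})
candidates = [Node("a", 2.0, 0.6, {"music"}), Node("b", 4.0, 0.2, {"sports", "news"})]
best = max(candidates, key=lambda c: relay_score(c, current, {"sports", "news"}))
print("forward the message to:", best.node_id)
```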

  9. New Parameters for Higher Accuracy in the Computation of Binding Free Energy Differences upon Alanine Scanning Mutagenesis on Protein-Protein Interfaces.

    PubMed

    Simões, Inês C M; Costa, Inês P D; Coimbra, João T S; Ramos, Maria J; Fernandes, Pedro A

    2017-01-23

    Knowing how proteins make stable complexes enables the development of inhibitors to preclude protein-protein (P:P) binding. The identification of the specific interfacial residues that mostly contribute to protein binding, denominated as hot spots, is thus critical. Here, we refine an in silico alanine scanning mutagenesis protocol, based on a residue-dependent dielectric constant version of the Molecular Mechanics/Poisson-Boltzmann Surface Area method. We have used a large data set of structurally diverse P:P complexes to redefine the residue-dependent dielectric constants used in the determination of binding free energies. The accuracy of the method was validated through comparison with experimental data, considering the per-residue P:P binding free energy (ΔΔG_binding) differences upon alanine mutation. Different protocols were tested, i.e., a geometry optimization protocol and three molecular dynamics (MD) protocols: (1) one using explicit water molecules, (2) another with an implicit solvation model, and (3) a third where we have carried out an accelerated MD with explicit water molecules. Using a set of protein dielectric constants (within the range from 1 to 20) we showed that the dielectric constants of 7 for nonpolar and polar residues and 11 for charged residues (and histidine) provide optimal ΔΔG_binding predictions. An overall mean unsigned error (MUE) of 1.4 kcal mol^-1 relative to the experiment was achieved in 210 mutations only with geometry optimization, which was further reduced with MD simulations (MUE of 1.1 kcal mol^-1 for the MD employing explicit solvent). This recalibrated method allows for a better computational identification of hot spots, avoiding expensive and time-consuming experiments or thermodynamic integration/free energy perturbation/uBAR calculations, and will hopefully help new drug discovery campaigns in their quest of searching spots of interest for binding small drug-like molecules at P:P interfaces.
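
    A minimal sketch of the bookkeeping implied by the recalibration is shown below: each interfacial residue receives an internal dielectric constant by class (7 for nonpolar and polar residues, 11 for charged residues and histidine) before the ΔΔG_binding upon alanine mutation is evaluated. The energy function is a placeholder callable, not the authors' MM/PBSA pipeline, and the 2.0 kcal/mol hot-spot cutoff in the comment is a common convention rather than a value from this paper.

      # Sketch of the residue-dependent dielectric assignment described in the
      # abstract (epsilon = 7 for nonpolar/polar residues, 11 for charged residues
      # and histidine), feeding a placeholder MM/PBSA-style DDG(binding) calculation.
      CHARGED = {"ASP", "GLU", "LYS", "ARG", "HIS"}

      def internal_dielectric(residue_name: str) -> float:
          return 11.0 if residue_name.upper() in CHARGED else 7.0

      def ddg_binding_alanine_scan(residues, dg_complex):
          """residues: list of (name, position); dg_complex(mutated_pos, epsilon) is a
          placeholder callable returning a binding free energy estimate (kcal/mol)."""
          dg_wt = dg_complex(None, None)          # wild-type binding free energy
          results = {}
          for name, pos in residues:
              eps = internal_dielectric(name)
              results[(name, pos)] = dg_complex(pos, eps) - dg_wt   # DDG upon X -> Ala
          return results

      # Hypothetical usage: flag hot spots at DDG >= 2.0 kcal/mol (a common convention)
      # hot_spots = {k: v for k, v in ddg_binding_alanine_scan(iface, my_mmpbsa).items() if v >= 2.0}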

  10. Optimization and comparison of simultaneous and separate acquisition protocols for dual isotope myocardial perfusion SPECT.

    PubMed

    Ghaly, Michael; Links, Jonathan M; Frey, Eric C

    2015-07-07

    Dual-isotope simultaneous-acquisition (DISA) rest-stress myocardial perfusion SPECT (MPS) protocols offer a number of advantages over separate acquisition. However, crosstalk contamination due to scatter in the patient and interactions in the collimator degrade image quality. Compensation can reduce the effects of crosstalk, but does not entirely eliminate image degradations. Optimizing acquisition parameters could further reduce the impact of crosstalk. In this paper we investigate the optimization of the rest Tl-201 energy window width and relative injected activities using the ideal observer (IO), a realistic digital phantom population and Monte Carlo (MC) simulated Tc-99m and Tl-201 projections as a means to improve image quality. We compared performance on a perfusion defect detection task for Tl-201 acquisition energy window widths varying from 4 to 40 keV centered at 72 keV for a camera with a 9% energy resolution. We also investigated 7 different relative injected activities, defined as the ratio of Tc-99m and Tl-201 activities, while keeping the total effective dose constant at 13.5 mSv. For each energy window and relative injected activity, we computed the IO test statistics using a Markov chain Monte Carlo (MCMC) method for an ensemble of 1,620 triplets of fixed and reversible defect-present, and defect-absent noisy images modeling realistic background variations. The volume under the 3-class receiver operating characteristic (ROC) surface (VUS) was estimated and served as the figure of merit. For simultaneous acquisition, the IO suggested that relative Tc-to-Tl injected activity ratios of 2.6-5 and acquisition energy window widths of 16-22% were optimal. For separate acquisition, we observed a broad range of optimal relative injected activities from 2.6 to 12.1 and acquisition energy window widths of 16-22%. A negative correlation between Tl-201 injected activity and the width of the Tl-201 energy window was observed in these ranges. The results also suggested that DISA methods could potentially provide image quality as good as that obtained with separate acquisition protocols. We compared observer performance for the optimized protocols and the current clinical protocol using separate acquisition. The current clinical protocols provided better performance at a cost of injecting the patient with approximately double the injected activity of Tc-99m and Tl-201, resulting in substantially increased radiation dose.

  11. A streamlined method for analysing genome-wide DNA methylation patterns from low amounts of FFPE DNA.

    PubMed

    Ludgate, Jackie L; Wright, James; Stockwell, Peter A; Morison, Ian M; Eccles, Michael R; Chatterjee, Aniruddha

    2017-08-31

    Formalin fixed paraffin embedded (FFPE) tumor samples are a major source of DNA from patients in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries relying on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis. Here we describe a complete and streamlined workflow for preparing next generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks, extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR)-based test, preparing reduced representation bisulfite sequencing libraries, and massively parallel sequencing. The main features and advantages of this protocol are: (i) an optimized method for extracting good quality DNA from FFPE tissues; (ii) an efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng DNA from FFPE tissue; and (iii) incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing. We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome-scale and we believe this will facilitate clinical epigenetic research that involves the use of FFPE tissue.

  12. Non-Orthogonal Random Access in MIMO Cognitive Radio Networks: Beamforming, Power Allocation, and Opportunistic Transmission

    PubMed Central

    Lin, Huifa; Shin, Won-Yong

    2017-01-01

    We study secondary random access in multi-input multi-output cognitive radio networks, where a slotted ALOHA-type protocol and successive interference cancellation are used. We first introduce three types of transmit beamforming performed by secondary users, where multiple antennas are used to suppress the interference at the primary base station and/or to increase the received signal power at the secondary base station. Then, we show a simple decentralized power allocation along with the equivalent single-antenna conversion. To exploit the multiuser diversity gain, an opportunistic transmission protocol is proposed, where the secondary users generating less interference are opportunistically selected, resulting in a further reduction of the interference temperature. The proposed methods are validated via computer simulations. Numerical results show that increasing the number of transmit antennas can greatly reduce the interference temperature, while increasing the number of receive antennas leads to a reduction of the total transmit power. Optimal parameter values of the opportunistic transmission protocol are examined according to three types of beamforming and different antenna configurations, in terms of maximizing the cognitive transmission capacity. All the beamforming, decentralized power allocation, and opportunistic transmission protocol are performed by the secondary users in a decentralized manner, thus resulting in an easy implementation in practice. PMID:28076402
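
    The first beamforming type mentioned above, suppressing interference at the primary base station, can be illustrated generically by projecting the secondary link's channel onto the null space of the channel toward the primary receiver. The numpy sketch below uses assumed channel dimensions and random channel realizations; it is not the paper's exact beamformer designs.

      # Generic illustration of interference-nulling transmit beamforming: project the
      # secondary-link channel onto the null space of the channel toward the primary
      # base station, so the secondary transmission causes (ideally) zero interference.
      # Channel dimensions and values are assumptions for the example.
      import numpy as np

      rng = np.random.default_rng(2)
      n_tx = 4          # transmit antennas at the secondary user
      n_rx_primary = 2  # receive antennas at the primary base station

      # Flat-fading channels: h_s toward the secondary BS, H_p toward the primary BS
      h_s = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
      H_p = (rng.normal(size=(n_rx_primary, n_tx)) + 1j * rng.normal(size=(n_rx_primary, n_tx))) / np.sqrt(2)

      # Orthogonal projector onto the null space of H_p
      P_null = np.eye(n_tx) - H_p.conj().T @ np.linalg.pinv(H_p @ H_p.conj().T) @ H_p

      w = P_null @ h_s                 # steer toward the secondary BS within the null space
      w /= np.linalg.norm(w)           # unit-norm beamformer

      print("interference power at primary BS:", np.linalg.norm(H_p @ w) ** 2)  # ~0
      print("signal gain at secondary BS:", np.abs(h_s.conj() @ w) ** 2)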

  13. A secure cluster-based multipath routing protocol for WMSNs.

    PubMed

    Almalkawi, Islam T; Zapata, Manel Guerrero; Al-Karaki, Jamal N

    2011-01-01

    The new characteristics of Wireless Multimedia Sensor Networks (WMSNs) and the design issues brought by handling different traffic classes of multimedia content (video streams, audio, and still images) as well as scalar data over the network make the routing protocols proposed for typical WSNs not directly applicable to WMSNs. Handling real-time multimedia data requires both energy efficiency and QoS assurance in order to ensure efficient utilization of sensor resources and correct delivery of the collected information. In this paper, we propose a Secure Cluster-based Multipath Routing protocol for WMSNs, SCMR, to satisfy the requirements of delivering different data types and supporting high data rate multimedia traffic. SCMR exploits the hierarchical structure of powerful cluster heads and the optimized multiple paths to support timely and reliable high data rate multimedia communication with minimum energy dissipation. Also, we present a light-weight distributed security mechanism for key management in order to secure the communication between sensor nodes and protect the network against different types of attacks. Performance evaluation from simulation results demonstrates a significant performance improvement compared with existing protocols (which do not provide any kind of security feature) in terms of average end-to-end delay, network throughput, packet delivery ratio, and energy consumption.

  14. Non-Orthogonal Random Access in MIMO Cognitive Radio Networks: Beamforming, Power Allocation, and Opportunistic Transmission.

    PubMed

    Lin, Huifa; Shin, Won-Yong

    2017-01-01

    We study secondary random access in multi-input multi-output cognitive radio networks, where a slotted ALOHA-type protocol and successive interference cancellation are used. We first introduce three types of transmit beamforming performed by secondary users, where multiple antennas are used to suppress the interference at the primary base station and/or to increase the received signal power at the secondary base station. Then, we show a simple decentralized power allocation along with the equivalent single-antenna conversion. To exploit the multiuser diversity gain, an opportunistic transmission protocol is proposed, where the secondary users generating less interference are opportunistically selected, resulting in a further reduction of the interference temperature. The proposed methods are validated via computer simulations. Numerical results show that increasing the number of transmit antennas can greatly reduce the interference temperature, while increasing the number of receive antennas leads to a reduction of the total transmit power. Optimal parameter values of the opportunistic transmission protocol are examined according to three types of beamforming and different antenna configurations, in terms of maximizing the cognitive transmission capacity. All the beamforming, decentralized power allocation, and opportunistic transmission protocol are performed by the secondary users in a decentralized manner, thus resulting in an easy implementation in practice.

  15. A Secure Cluster-Based Multipath Routing Protocol for WMSNs

    PubMed Central

    Almalkawi, Islam T.; Zapata, Manel Guerrero; Al-Karaki, Jamal N.

    2011-01-01

    The new characteristics of Wireless Multimedia Sensor Networks (WMSNs) and the design issues brought by handling different traffic classes of multimedia content (video streams, audio, and still images) as well as scalar data over the network make the routing protocols proposed for typical WSNs not directly applicable to WMSNs. Handling real-time multimedia data requires both energy efficiency and QoS assurance in order to ensure efficient utilization of sensor resources and correct delivery of the collected information. In this paper, we propose a Secure Cluster-based Multipath Routing protocol for WMSNs, SCMR, to satisfy the requirements of delivering different data types and supporting high data rate multimedia traffic. SCMR exploits the hierarchical structure of powerful cluster heads and the optimized multiple paths to support timely and reliable high data rate multimedia communication with minimum energy dissipation. Also, we present a light-weight distributed security mechanism for key management in order to secure the communication between sensor nodes and protect the network against different types of attacks. Performance evaluation from simulation results demonstrates a significant performance improvement compared with existing protocols (which do not provide any kind of security feature) in terms of average end-to-end delay, network throughput, packet delivery ratio, and energy consumption. PMID:22163854

  16. Maternal-Fetal Monitoring of Opioid-Exposed Pregnancies: Analysis of a Pilot Community-Based Protocol and Review of the Literature.

    PubMed

    Ryan, Gareth; Dooley, Joe; Windrim, Rory; Bollinger, Megan; Gerber Finn, Lianne; Kelly, Len

    2017-06-01

    To describe/analyse a novel, community-based prenatal monitoring protocol for opioid-exposed pregnancies developed by our centre in 2014 to optimize prenatal care for this population. A literature review of published monitoring protocols for this population is also presented. Retrospective comparison of pre-protocol (n = 215) and post-protocol (n = 251) cohorts. Medline and Embase were searched between 2000-2016 using MeSH terms: [fetal monitoring OR prenatal care] AND [opioid-related disorders OR substance-related disorders] in Medline and [fetal monitoring OR prenatal care] AND [opiate addiction OR substance abuse] in Embase, producing 518 results. Thirteen studies included protocols for monitoring opioid-exposed pregnancies. No comprehensive monitoring protocols with high-quality supporting evidence were found. We evaluated 466 opioid-exposed pregnancies, 215 before and 251 after introduction of the protocol. Since implementation, there was a significant increase in the number of opioid-exposed patients who underwent urine drug screening (72.6% to 89.2%, P < 0.0001); a significant reduction in the number of urine drug screenings positive for illicit opioids (50.2% to 29.1%, P < 0.0001); and a significant increase in the number of patients who discontinued illicit opioid use by the time of delivery (24.7% to 39.4%, P < 0.01). There was no difference in the CS rate (27.4% vs. 26.3%, P > 0.05). There were no observed differences in the rate of preterm birth, birth weight <2500 g, or Apgar score <7 (P > 0.05). Care of women with increased opioid use during pregnancy is an important but under-studied health issue. A novel protocol for focused antenatal care provision for women with opioid-exposed pregnancies improves standard of care and maternal/fetal outcomes. Copyright © 2017. Published by Elsevier Inc.

  17. An optimized 13C-urea breath test for the diagnosis of H pylori infection

    PubMed Central

    Campuzano-Maya, Germán

    2007-01-01

    AIM: To validate an optimized 13C-urea breath test (13C-UBT) protocol for the diagnosis of H pylori infection that is cost-efficient and maintains excellent diagnostic accuracy. METHODS: 70 healthy volunteers were tested with two simplified 13C-UBT protocols, with test meal (Protocol 2) and without test meal (Protocol 1). Breath samples were collected at 10, 20 and 30 min after ingestion of 50 mg 13C-urea dissolved in 10 mL of water, taken as a single swallow, followed by 200 mL of water (pH 6.0) and a circular motion around the waistline to homogenize the urea solution. Performance of both protocols was analyzed at various cut-off values. Results were validated against the European protocol. RESULTS: According to the reference protocol, 65.7% individuals were positive for H pylori infection and 34.3% were negative. There were no significant differences in the ability of both protocols to correctly identify positive and negative H pylori individuals. However, only Protocol 1 with no test meal achieved accuracy, sensitivity, specificity, positive and negative predictive values of 100%. The highest values achieved by Protocol 2 were 98.57%, 97.83%, 100%, 100% and 100%, respectively. CONCLUSION: A 10 min, 50 mg 13C-UBT with no test meal using a cut-off value of 2-2.5 is a highly accurate test for the diagnosis of H pylori infection at a reduced cost. PMID:17907288

  18. Enabling Next-Generation Multicore Platforms in Embedded Applications

    DTIC Science & Technology

    2014-04-01

    mapping to sets 129 − 256 ) to the second page in memory, color 2 (sets 257 − 384) to the third page, and so on. Then, after the 32nd page, all 212 sets...the Real-Time Nested Locking Protocol (RNLP) [56], a recently developed multiprocessor real-time locking protocol that optimally supports the...In general, the problems of optimally assigning tasks to processors and colors to tasks are both NP-hard in the

  19. Suicide Risk Protocols: Addressing the Needs of High Risk Youths Identified through Suicide Prevention Efforts and in Clinical Settings

    ERIC Educational Resources Information Center

    Heilbron, Nicole; Goldston, David; Walrath, Christine; Rodi, Michael; McKeon, Richard

    2013-01-01

    Several agencies have emphasized the importance of establishing clear protocols or procedures to address the needs of youths who are identified as suicidal through suicide prevention programs or in emergency department settings. What constitutes optimal guidelines for developing and implementing such protocols, however, is unclear. At the request…

  20. In vitro selection and amplification protocols for isolation of aptameric sensors for small molecules

    PubMed Central

    Yang, Kyung-Ae; Pei, Renjun; Stojanovic, Milan N.

    2016-01-01

    We recently optimized a procedure that directly yields aptameric sensors for small molecules in so-called structure-switching format. The protocol has a high success rate, requires little time, and is sufficiently simple to be readily implemented in a non-specialist laboratory. We provide a stepwise guide to this selection protocol. PMID:27155227

  1. Granulocyte-colony stimulating factor in the prevention of postoperative infectious complications and sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). Protocol of a controlled clinical trial developed by consensus of an international study group. Part three: individual patient, complication algorithm and quality management.

    PubMed

    Stinner, B; Bauhofer, A; Lorenz, W; Rothmund, M; Plaul, U; Torossian, A; Celik, I; Sitter, H; Koller, M; Black, A; Duda, D; Encke, A; Greger, B; van Goor, H; Hanisch, E; Hesterberg, R; Klose, K J; Lacaine, F; Lorijn, R H; Margolis, C; Neugebauer, E; Nyström, P O; Reemst, P H; Schein, M; Solovera, J

    2001-05-01

    Presentation of a new type of study protocol for evaluation of the effectiveness of an immune modifier (rhG-CSF, filgrastim): prevention of postoperative infectious complications and of sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). A randomised, placebo controlled, double-blinded, single-centre study is performed at a University Hospital (n = 40 patients for each group). This part presents the course of the individual patient and a complication algorithm for the management of anastomotic leakage and quality management. In part three of the protocol, the three major sections include: (1) the course of the individual patient using a comprehensive graphic display, covering the perioperative period, hospital stay and post-discharge outcome; (2) a centre-based clinical practice guideline for the management of the most important postoperative complication--anastomotic leakage--including evidence-based support for each step of the algorithm; and (3) data management, ethics and organisational structure. Future studies with immune modifiers will also fail if not better structured (reduction of variance) to achieve uniform patient management in a complex clinical scenario. This new type of single-centre trial aims to reduce the gap between animal experiments and clinical trials or--if it fails--at least to demonstrate new ways of explaining the failures.

  2. FDA approved drugs complexed to their targets: evaluating pose prediction accuracy of docking protocols.

    PubMed

    Bohari, Mohammed H; Sastry, G Narahari

    2012-09-01

    Efficient drug discovery programs can be designed by utilizing existing pools of knowledge from already approved drugs. One way to achieve this is by repositioning drugs approved for some indications to newer indications. The complex of a drug with its target gives fundamental insight into molecular recognition and a clear understanding of the putative binding site. Five popular docking protocols, Glide, Gold, FlexX, Cdocker and LigandFit, have been evaluated on a dataset of 199 FDA approved drug-target complexes for their accuracy in predicting the experimental pose. Performance for all the protocols is assessed at default settings, with a root mean square deviation (RMSD) between the experimental ligand pose and the docked pose of less than 2.0 Å as the success criterion in predicting the pose. Glide (38.7 %) is found to be the most accurate in top ranked pose and Cdocker (58.8 %) in top RMSD pose. Ligand flexibility is a major bottleneck in the failure of docking protocols to correctly predict the pose. Resolution of the crystal structure shows an inverse relationship with docking protocol performance. All the protocols perform optimally when a balanced type of hydrophilic and hydrophobic interaction or a dominant hydrophilic interaction exists. Overall, in 16 different target classes, hydrophobic interactions dominate in the binding site and maximum success is achieved for all the docking protocols in the nuclear hormone receptor class, while performance for the rest of the classes varied based on the individual protocol.
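
    The 2.0 Å success criterion used in this evaluation reduces to a heavy-atom RMSD between the docked and crystallographic ligand poses; a minimal, package-agnostic sketch (assuming matched atom ordering and no symmetry correction) is given below.

      # Minimal sketch of the pose-prediction success criterion: RMSD between the
      # experimental and docked ligand coordinates (same atoms, same order, no
      # symmetry correction), counted as a success when RMSD <= 2.0 Angstrom.
      import numpy as np

      def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
          """coords_*: (n_atoms, 3) arrays of matched heavy-atom coordinates."""
          diff = coords_a - coords_b
          return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

      def success_rate(pose_pairs, cutoff=2.0):
          """pose_pairs: iterable of (crystal_coords, docked_coords) array pairs."""
          hits = [rmsd(a, b) <= cutoff for a, b in pose_pairs]
          return sum(hits) / len(hits)

      # Hypothetical usage with two complexes:
      # rate = success_rate([(xtal1, dock1), (xtal2, dock2)])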

  3. Low-dose computed tomography scans with automatic exposure control for patients of different ages undergoing cardiac PET/CT and SPECT/CT.

    PubMed

    Yang, Ching-Ching; Yang, Bang-Hung; Tu, Chun-Yuan; Wu, Tung-Hsin; Liu, Shu-Hsin

    2017-06-01

    This study aimed to evaluate the efficacy of automatic exposure control (AEC) in order to optimize low-dose computed tomography (CT) protocols for patients of different ages undergoing cardiac PET/CT and single-photon emission computed tomography/computed tomography (SPECT/CT). One PET/CT and one SPECT/CT were used to acquire CT images for four anthropomorphic phantoms representative of 1-year-old, 5-year-old and 10-year-old children and an adult. For the hybrid systems investigated in this study, the radiation dose and image quality of cardiac CT scans performed with AEC activated depend mainly on the selection of a predefined image quality index. Multiple linear regression methods were used to analyse image data from anthropomorphic phantom studies to investigate the effects of body size and predefined image quality index on CT radiation dose in cardiac PET/CT and SPECT/CT scans. The regression relationships have a coefficient of determination larger than 0.9, indicating a good fit to the data. According to the regression models, low-dose protocols using the AEC technique were optimized for patients of different ages. In comparison with the standard protocol with AEC activated for adult cardiac examinations used in our clinical routine practice, the optimized paediatric protocols in PET/CT allow 32.2, 63.7 and 79.2% CT dose reductions for anthropomorphic phantoms simulating 10-year-old, 5-year-old and 1-year-old children, respectively. The corresponding results for cardiac SPECT/CT are 8.4, 51.5 and 72.7%. AEC is a practical way to reduce CT radiation dose in cardiac PET/CT and SPECT/CT, but the AEC settings should be determined properly for optimal effect. Our results show that AEC does not eliminate the need for paediatric protocols and CT examinations using the AEC technique should be optimized for paediatric patients to reduce the radiation dose as low as reasonably achievable.
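
    The regression step described above, relating CT dose to body size and the predefined image-quality index, can be sketched with an ordinary least-squares fit; the phantom data points below are placeholders for illustration, not the study's measurements.

      # Sketch of the multiple linear regression used to relate CT dose to body size
      # and the predefined image-quality index; the data points are placeholders.
      import numpy as np

      # Columns: effective diameter (cm), image-quality index (arbitrary units)
      X = np.array([[12.0, 20], [12.0, 30], [17.0, 20], [17.0, 30],
                    [22.0, 20], [22.0, 30], [30.0, 20], [30.0, 30]], dtype=float)
      # CTDIvol (mGy) for each phantom/setting -- synthetic values for illustration
      y = np.array([0.4, 0.7, 0.7, 1.1, 1.2, 1.9, 2.6, 4.1])

      # Fit y ~ b0 + b1*diameter + b2*quality_index
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      pred = A @ coef
      r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
      print("coefficients:", coef, "R^2:", round(r2, 3))

      # The fitted model can then be inverted to pick the quality-index setting that
      # hits a target dose for a given patient size (the optimization step described).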

  4. Dendritic Immunotherapy Improvement for an Optimal Control Murine Model

    PubMed Central

    Chimal-Eguía, J. C.; Castillo-Montiel, E.

    2017-01-01

    Therapeutic protocols in immunotherapy are usually proposed following the intuition and experience of the therapist. In order to deduce such protocols, mathematical modeling, optimal control and simulations are used in place of the therapist's experience alone. Clinical efficacy of dendritic cell (DC) vaccines for cancer treatment is still unclear, since dendritic cells face several obstacles in the host environment, such as immunosuppression and poor transference to the lymph nodes, reducing the vaccine effect. In view of that, we have created a mathematical murine model to measure the effects of dendritic cell injections that accounts for such obstacles. In addition, the model considers a therapy given by bolus injections of small duration as opposed to a continual dose. The timing of the doses defines the therapeutic protocols, which in turn are improved by an optimal control algorithm to minimize the tumor mass. We intend to supplement the therapist's experience and intuition in the protocol's implementation. Experimental results from mice with melanoma, with and without therapy, agree with the model. It is shown that the percentage of dendritic cells that manages to reach the lymph nodes has a crucial impact on the therapy outcome. This suggests that efforts in finding better methods to deliver DC vaccines should be pursued. PMID:28912828

  5. Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.

  6. Cryopreservation of sperm in Grey mullet Mugil cephalus (Linnaeus, 1758).

    PubMed

    Balamurugan, Ramachandran; Munuswamy, Natesan

    2017-10-01

    The aim of this study was to document the effects of cryopreservation on sperm motility and viability in Grey mullet Mugil cephalus. Cryopreservation of sperm was attempted using two extenders, Ringer solution for marine fish (RSMF) and V2 extender (V2E), and the cryoprotectants dimethylacetamide (DMA), dimethylsulfoxide (DMSO), ethylene glycol (EG), glycerol (GLY), propylene glycol (PG) and methanol (MeOH). Cryoprotectants were assessed at different concentrations individually as well as in combination, with varying equilibration times (10 and 30 min). For optimization of the freezing rate, four freezing protocols (-5, -10, -20 and -30°C/min) were evaluated. After achieving the final temperature, samples were plunged in liquid nitrogen (-196°C) and stored for a week. Samples were subsequently thawed in a water bath at 30°C for assessment of sperm motility and viability. Results indicated that a cryomedium consisting of V2E extender + 10% glycerol, with a dilution ratio of 1:1 (sperm:cryomedium), an equilibration time of 5-10 min and a freezing rate of -20°C/min, was more desirable compared with the other conditions assessed. Use of this protocol resulted in retaining the greatest sperm motility grade, 3.0±0.0 (50%-80% sperm movement, fast swimming), and 48.19±3.12% sperm viability. The results of the present study, therefore, provide base-line data for establishing a protocol for sperm cryopreservation in M. cephalus. Further studies are, however, required for optimization of the most suitable sperm cryopreservation protocol. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Comparison of different cooling rates for fibroblast and keratinocyte cryopreservation.

    PubMed

    Naaldijk, Yahaira; Friedrich-Stöckigt, Annett; Sethe, Sebastian; Stolzing, Alexandra

    2016-10-01

    Easy, cost-effective and reliable cryopreservation protocols are crucial for the successful and effective application of tissue engineering. Several different protocols are in use, but no comprehensive comparisons across different machine-based and manual methods have been made. Here, we compare the effects of different cooling rates on the post-thaw survival and proliferative capacity of two basic cell lines for skin tissue engineering, fibroblasts and keratinocytes, cultured and frozen in suspension or as a monolayer. We demonstrate that the effectiveness of cryopreservation cannot be reliably determined immediately after thawing: the results at this stage were not indicative of cell growth in culture 3 days post-thaw. Cryopreservation of fibroblasts in an adherent state greatly diminishes their subsequent growth potential. This was not observed when freezing in suspension. In keratinocytes, however, adherent freezing is as effective as freezing in suspension, which could lead to significant cost and labour savings in a tissue-engineering environment. The 'optimal' cryopreservation protocol depends on cell type and intended use. Where time, ease and cost are dominant factors, the direct freezing into a nitrogen tank (straight freeze) approach remains a viable method. The most effective solution across the board, as measured by viability 3 days post-thaw, was the commonly used freezing container method. Where machine-controlled cryopreservation is deemed important for tissue-engineering Good Manufacturing Practice, we present results using a portfolio of different cooling rates, identifying the 'optimal' protocol depending on cell type and culture method. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Optimization of a secondary VOI protocol for lung imaging in a clinical CT scanner.

    PubMed

    Larsen, Thomas C; Gopalakrishnan, Vissagan; Yao, Jianhua; Nguyen, Catherine P; Chen, Marcus Y; Moss, Joel; Wen, Han

    2018-05-21

    We present a solution to meet an unmet clinical need for an in-situ "close look" at a pulmonary nodule or at the margins of a pulmonary cyst revealed by a primary (screening) chest CT while the patient is still in the scanner. We first evaluated options available on current whole-body CT scanners for high resolution screening scans, including ROI reconstruction of the primary scan data and HRCT, but found them to have insufficient SNR in lung tissue or discontinuous slice coverage. Within the capabilities of current clinical CT systems, we opted for the solution of a secondary, volume-of-interest (VOI) protocol where the radiation dose is focused into a short-beam axial scan at the z position of interest, combined with a small-FOV reconstruction at the xy position of interest. The objective of this work was to design a VOI protocol that is optimized for targeted lung imaging in a clinical whole-body CT system. Using a chest phantom containing a lung-mimicking foam insert with a simulated cyst, we identified the appropriate scan mode and optimized both the scan and recon parameters. The VOI protocol yielded 3.2 times the texture amplitude-to-noise ratio in the lung-mimicking foam when compared to the standard chest CT, and 8.4 times the texture difference between the lung-mimicking and reference foams. It improved the depiction of the wall of the simulated cyst and gave better resolution in a line-pair insert. The effective dose of the secondary VOI protocol was, on average, 42% of that of the standard chest CT, and up to 100% in the worst-case scenario of VOI positioning. The optimized protocol will be used to obtain detailed CT textures of pulmonary lesions, which are biomarkers for the type and stage of lung diseases. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  9. Optimizing the Information Presentation on Mining Potential by using Web Services Technology with Restful Protocol

    NASA Astrophysics Data System (ADS)

    Abdillah, T.; Dai, R.; Setiawan, E.

    2018-02-01

    This study aims to develop an application of Web Services technology with the RestFul protocol to optimize the presentation of information on mining potential. This study used a User Interface Design approach for information accuracy and relevance, as well as Web Services for reliability in presenting the information. The results show that the accuracy and relevance of the information on mining potential can be seen from the User Interface implementation in the application, which is based on the following rules: consideration of appropriate colours and objects, ease of navigation, and user interaction that employs symbols and language understood by the users. The information is presented using charts and Tool Tip Text to help the users understand the provided chart/figure, and the reliability of the information presentation is evident from the results of the Web Services testing (Figure 4.5.6). This study finds that the User Interface Design and Web Services approaches (for access from apps on different platforms) are able to optimize the presentation. The results of this study can be used as a reference for software developers and the Provincial Government of Gorontalo.

  10. Generation of Isogenic Human iPS Cell Line Precisely Corrected by Genome Editing Using the CRISPR/Cas9 System.

    PubMed

    Grobarczyk, Benjamin; Franco, Bénédicte; Hanon, Kevin; Malgrange, Brigitte

    2015-10-01

    Genome engineering and human iPS cells are two powerful technologies, which can be combined to highlight phenotypic differences and identify pathological mechanisms of complex diseases by providing isogenic cellular material. However, very few data are available regarding precise gene correction in human iPS cells. Here, we describe an optimized stepwise protocol to deliver CRISPR/Cas9 plasmids into human iPS cells. We highlight technical issues, especially those associated with human stem cell culture and with the correction of a point mutation to obtain an isogenic iPS cell line, without inserting any resistance cassette. Based on a two-step clonal isolation protocol (mechanical picking followed by enzymatic dissociation), we succeeded in selecting and expanding a corrected human iPS cell line with high efficiency (more than 2% of the sequenced colonies). This protocol can also be used to obtain a knock-out cell line from a healthy iPS cell line via the NHEJ pathway (with about 15% efficiency) and reproduce a disease phenotype. In addition, we also provide protocols for functional validation tests after every critical step.

  11. Quantum teleportation scheme by selecting one of multiple output ports

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi; Hiroshima, Tohya

    2009-04-01

    The scheme of quantum teleportation, where Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic version and probabilistic version of the teleportation scheme aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state and (ii) the state is also optimized as well as Alice’s measurement. We analytically determine the optimal protocols for all the four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve the perfect teleportation in the asymptotic limit of N→∞ . The entanglement properties of the teleportation scheme are also discussed.

  12. Image quality comparison between single energy and dual energy CT protocols for hepatic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Yuan, E-mail: yuanyao@stanford.edu; Pelc, Nor

    Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we are interested in comparing these two imaging strategies at equal-dose and more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 kVp through 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrasted liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study. Results: In the simulation study, we found that the CNR of pre-contrast SE images mostly increased with increasing kVp while for post-contrast imaging 90 kVp or lower yielded higher CNR images, depending on the differential iodine concentration of each tissue. Similar trends were seen in DE blended CNR and those from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves maximize at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol has better performance than blended DE images, and the optimal tube potential for an SE scan is around 90 kVp for medium size patients and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but it improves the CNR of the DE blended images as well. The dose adjusted CNR of the clinical images also showed the same trend and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivak, David; Crooks, Gavin

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
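
    For context, the friction-tensor formalism referred to here can be written, within linear response, roughly as follows; the notation is approximate and should be checked against the original paper.

      % Sketch of the linear-response friction tensor and the dissipation it controls
      % (lambda are control parameters, X their conjugate forces; notation approximate).
      \[
        \zeta_{ij}(\boldsymbol{\lambda}) \;=\; \beta \int_{0}^{\infty}
        \bigl\langle \delta X_i(t)\,\delta X_j(0) \bigr\rangle_{\boldsymbol{\lambda}} \, dt ,
        \qquad
        \bigl\langle P_{\mathrm{ex}} \bigr\rangle \;\approx\;
        \frac{d\lambda^i}{dt}\,\zeta_{ij}(\boldsymbol{\lambda})\,\frac{d\lambda^j}{dt}.
      \]
      % Minimizing the total excess work over protocols of fixed duration yields
      % geodesics of the Riemannian metric zeta_ij, traversed at constant
      % "thermodynamic speed".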

  14. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.

  15. An application of different dioids in public key cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durcheva, Mariana I., E-mail: mdurcheva66@gmail.com

    2014-11-18

    Dioids provide a natural framework for analyzing a broad class of discrete event dynamical systems such as the design and analysis of bus and railway timetables, scheduling of high-throughput industrial processes, solution of combinatorial optimization problems, and the analysis and improvement of flow systems in communication networks. They have appeared in several branches of mathematics such as functional analysis, optimization, stochastic systems and dynamic programming, tropical geometry, and fuzzy logic. In this paper we show how to involve dioids in public key cryptography. The main goal is to create key-exchange protocols based on dioids. Additionally, a digital signature scheme is presented.
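
    The abstract does not spell out a concrete construction, so purely as an illustration of key exchange over a dioid, the sketch below implements a Stickel-type exchange over the max-plus (tropical) semiring, a classic dioid in which matrix polynomials in the same public matrix commute. This is pedagogical only, is not the scheme from the paper, and exchanges of this exact shape have published cryptanalytic attacks.

      # Illustrative Stickel-type key exchange over the max-plus (tropical) dioid:
      # "addition" is max, "multiplication" is +. Matrix polynomials in the same
      # public matrix commute, which is what makes the shared keys agree. This is a
      # pedagogical sketch of the general idea, not the scheme from the paper.
      import numpy as np

      NEG_INF = -np.inf

      def mp_matmul(A, B):
          """Max-plus matrix product: C[i, j] = max_k (A[i, k] + B[k, j])."""
          return np.max(A[:, :, None] + B[None, :, :], axis=1)

      def mp_identity(n):
          I = np.full((n, n), NEG_INF)
          np.fill_diagonal(I, 0.0)
          return I

      def mp_poly(coeffs, A):
          """Evaluate p(A) = max_k (coeffs[k] + A^k) in max-plus arithmetic."""
          power, result = mp_identity(A.shape[0]), np.full_like(A, NEG_INF)
          for c in coeffs:
              result = np.maximum(result, c + power)   # tropical sum of the term c (x) A^k
              power = mp_matmul(power, A)
          return result

      rng = np.random.default_rng(3)
      n = 4
      A = rng.integers(-10, 10, (n, n)).astype(float)   # public matrices
      B = rng.integers(-10, 10, (n, n)).astype(float)

      pa, qa = rng.integers(-5, 5, 3).astype(float), rng.integers(-5, 5, 3).astype(float)  # Alice's secrets
      pb, qb = rng.integers(-5, 5, 3).astype(float), rng.integers(-5, 5, 3).astype(float)  # Bob's secrets

      U = mp_matmul(mp_poly(pa, A), mp_poly(qa, B))     # Alice -> Bob
      V = mp_matmul(mp_poly(pb, A), mp_poly(qb, B))     # Bob -> Alice

      K_alice = mp_matmul(mp_matmul(mp_poly(pa, A), V), mp_poly(qa, B))
      K_bob   = mp_matmul(mp_matmul(mp_poly(pb, A), U), mp_poly(qb, B))
      assert np.allclose(K_alice, K_bob)                # both sides derive the same key matrix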

  16. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
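
    The chance-constraint idea summarized above can be stated compactly: a model-simulated constraint with FOSM-estimated uncertainty is tightened by a risk-dependent multiple of its standard deviation before entering the linear program. The schematic formulation below uses my own notation and is not taken verbatim from the PESTPP-OPT documentation.

      % Schematic chance-constrained linear program solved by SLP (notation approximate):
      \[
        \max_{\mathbf{x}} \; \mathbf{c}^{\mathsf T}\mathbf{x}
        \quad \text{s.t.} \quad
        \Pr\!\bigl[\, g_k(\mathbf{x}) \le b_k \,\bigr] \ge 1-\alpha
        \;\;\Longrightarrow\;\;
        \bar g_k(\mathbf{x}) + z_{1-\alpha}\,\sigma_{g_k} \le b_k ,
      \]
      % where g_k-bar is the simulated constraint value, sigma_{g_k} its FOSM-propagated
      % standard deviation, z_{1-alpha} the standard-normal quantile implied by the
      % user-specified risk, and the linearization of g_k is refreshed at each SLP iteration.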

  17. New clinical insights for transiently evoked otoacoustic emission protocols.

    PubMed

    Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw

    2009-08-01

    The objective of the study was to optimize the area of a time-frequency analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from recordings 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were significantly higher than the traditional indices in the selected time window. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.

  18. An optimized framework for quantitative magnetization transfer imaging of the cervical spinal cord in vivo

    PubMed Central

    Grussu, Francesco; Ianus, Andrada; Schneider, Torben; Prados, Ferran; Fairney, James; Ourselin, Sebastien; Alexander, Daniel C.; Cercignani, Mara; Gandini Wheeler‐Kingshott, Claudia A.M.; Samson, Rebecca S.

    2017-01-01

    Purpose To develop a framework to fully characterize quantitative magnetization transfer indices in the human cervical cord in vivo within a clinically feasible time. Methods A dedicated spinal cord imaging protocol for quantitative magnetization transfer was developed using a reduced field-of-view approach with echo planar imaging (EPI) readout. Sequence parameters were optimized based on the Cramér-Rao lower bound. Quantitative model parameters (i.e., bound pool fraction, free and bound pool transverse relaxation times [T2F, T2B], and forward exchange rate [kFB]) were estimated implementing a numerical model capable of dealing with the novelties of the sequence adopted. The framework was tested on five healthy subjects. Results Cramér-Rao lower bound minimization produces optimal sampling schemes without requiring the establishment of a steady-state MT effect. The proposed framework allows quantitative voxel-wise estimation of model parameters at the resolution typically used for spinal cord imaging (i.e. 0.75 × 0.75 × 5 mm3), with a protocol duration of ∼35 min. Quantitative magnetization transfer parametric maps agree with literature values. Whole-cord mean values are: bound pool fraction = 0.11(±0.01), T2F = 46.5(±1.6) ms, T2B = 11.0(±0.2) µs, and kFB = 1.95(±0.06) Hz. Protocol optimization has a beneficial effect on reproducibility, especially for T2B and kFB. Conclusion The framework developed enables robust characterization of spinal cord microstructure in vivo using qMT. Magn Reson Med 79:2576–2588, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28921614
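
    The Cramér-Rao-based optimization amounts to: for each candidate sampling scheme, build the Jacobian of the signal model with respect to the parameters, form the Fisher information, and keep the scheme with the smallest lower bound on the parameter variances. The numerical sketch below uses a placeholder exponential signal model and assumed noise level, not the actual qMT signal equation from the paper.

      # Generic sketch of Cramer-Rao-lower-bound (CRLB) based protocol selection:
      # for each candidate sampling scheme, compute the Jacobian of a (placeholder)
      # signal model w.r.t. the parameters, form the Fisher information assuming
      # i.i.d. Gaussian noise, and rank schemes by the summed relative CRLB.
      import numpy as np

      def signal_model(params, scheme):
          """Placeholder exponential model standing in for the qMT signal equation.
          params = (a, b); scheme = array of sample points."""
          a, b = params
          return a * np.exp(-b * scheme)

      def crlb(params, scheme, sigma=0.01, eps=1e-6):
          p = np.asarray(params, dtype=float)
          jac = np.empty((scheme.size, p.size))
          for j in range(p.size):                      # finite-difference Jacobian
              dp = np.zeros_like(p); dp[j] = eps
              jac[:, j] = (signal_model(p + dp, scheme) - signal_model(p - dp, scheme)) / (2 * eps)
          fisher = jac.T @ jac / sigma**2
          return np.diag(np.linalg.inv(fisher))        # variance lower bounds per parameter

      true_params = (1.0, 0.5)
      candidates = [np.linspace(0.1, t_max, 8) for t_max in (2.0, 5.0, 10.0)]
      scores = [np.sum(crlb(true_params, s) / np.square(true_params)) for s in candidates]
      best = candidates[int(np.argmin(scores))]
      print("best scheme spans up to", best[-1])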

  19. Protein extraction and gel-based separation methods to analyze responses to pathogens in carnation (Dianthus caryophyllus L).

    PubMed

    Ardila, Harold Duban; Fernández, Raquel González; Higuera, Blanca Ligia; Redondo, Inmaculada; Martínez, Sixta Tulia

    2014-01-01

    We are currently using a 2-DE-based proteomics approach to study plant responses to pathogenic fungi by using the carnation (Dianthus caryophyllus L)-Fusarium oxysporum f. sp. dianthi pathosystem. It is clear that the protocols for the first stages of a standard proteomics workflow must be optimized to each biological system and objectives of the research. The optimization procedure for the extraction and separation of proteins by 1-DE and 2-DE in the indicated system is reported. This strategy can be extrapolated to other plant-pathogen interaction systems in order to perform an evaluation of the changes in the host protein profile caused by the pathogen and to identify proteins which, at early stages, are involved or implicated in the plant defense response.

  20. Optimized MOL-PCR for Characterization of Microbial Pathogens.

    PubMed

    Wuyts, Véronique; Roosens, Nancy H C; Bertrand, Sophie; Marchal, Kathleen; De Keersmaecker, Sigrid C J

    2016-01-06

    Characterization of microbial pathogens is necessary for surveillance, outbreak detection, and tracing of outbreak sources. This unit describes a multiplex oligonucleotide ligation-PCR (MOL-PCR) optimized for characterization of microbial pathogens. With MOL-PCR, different types of markers, like unique sequences, single-nucleotide polymorphisms (SNPs) and indels, can be simultaneously analyzed in one assay. This assay consists of a multiplex ligation for detection of the markers, a singleplex PCR for signal amplification, and hybridization to MagPlex-TAG beads for readout on a Luminex platform after fluorescent staining. The current protocol describes the MOL-PCR, as well as methods for DNA isolation, probe design, and data interpretation and it is based on an optimized MOL-PCR assay for subtyping of Salmonella Typhimurium. Copyright © 2016 John Wiley & Sons, Inc.

  1. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices

    PubMed Central

    Marin, Leandro; Piotr Pawlowski, Marcin; Jara, Antonio

    2015-01-01

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and designed to fulfil different needs. There is no common hardware platform that can be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677
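
    As background to what such optimized ECC builds on, the core primitive is scalar point multiplication. The deliberately plain double-and-add sketch below runs over a toy curve; real constrained-device implementations, the subject of the paper, require constant-time, side-channel-hardened arithmetic over standardized curves, none of which is attempted here.

      # Textbook elliptic-curve scalar multiplication (double-and-add) over a small
      # prime field, for illustration only. Curve: y^2 = x^3 + ax + b (mod p).
      p, a, b = 97, 2, 3          # toy parameters (not a secure curve)

      def point_add(P, Q):
          if P is None: return Q          # None represents the point at infinity
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % p == 0:
              return None                  # P + (-P) = infinity
          if P == Q:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
          x3 = (lam * lam - x1 - x2) % p
          return (x3, (lam * (x1 - x3) - y1) % p)

      def scalar_mult(k, P):
          result, addend = None, P
          while k:
              if k & 1:
                  result = point_add(result, addend)
              addend = point_add(addend, addend)
              k >>= 1
          return result

      G = (3, 6)                  # on y^2 = x^3 + 2x + 3 mod 97 (27 + 6 + 3 = 36 = 6^2)
      # Diffie-Hellman-style exchange: both sides arrive at the same shared point
      alice_priv, bob_priv = 11, 19
      shared_1 = scalar_mult(alice_priv, scalar_mult(bob_priv, G))
      shared_2 = scalar_mult(bob_priv, scalar_mult(alice_priv, G))
      assert shared_1 == shared_2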

  2. Optimal control of fast and high-fidelity quantum state transfer in spin-1/2 chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiong-Peng; Shao, Bin, E-mail: sbin610@bit.edu.cn; Hu, Shuai

    Spin chains are promising candidates for quantum communication and computation. Using quantum optimal control (OC) theory based on the Krotov method, we present a protocol to perform quantum state transfer with high speed and high fidelity by only manipulating the boundary spins in a quantum spin-1/2 chain. The achieved speed is about one order of magnitude faster than is possible with Lyapunov control for comparable fidelities. Additionally, there is a fundamental limit for OC beyond which optimization is not possible. The controls are exerted only on the couplings between the boundary spins and their neighbors, so that the scheme has good scalability. We also demonstrate that the resulting OC scheme is robust against disorder in the chain.

  3. Continuous-variable measurement-device-independent quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong

    2018-04-01

    The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has been recently proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from the untrusted third party Charlie. There is still no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which is favorable for enhancing the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the renovated CV-MDI QKD protocol is obtained with the transmitted data selected only by Alice. By setting appropriate parameters for the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distance. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.

  4. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    PubMed Central

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve such an aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that the selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
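
    The decision structure described above, which nodes compress and with which algorithm given per-option energy and latency costs, can be illustrated by a tiny exhaustive search over the assignment space; the real formulation is a binary integer program, and the cost numbers and additive latency model below are placeholders.

      # Toy illustration of the decision problem: for each node choose "no compression"
      # or one of several compression algorithms, minimizing total energy subject to a
      # latency budget. The paper solves this as a binary integer program; here a tiny
      # exhaustive search over placeholder costs makes the structure explicit.
      from itertools import product

      # Per-node options: (label, energy_cost_mJ, latency_ms). Placeholder numbers.
      OPTIONS = [("none", 12.0, 5.0), ("lz", 7.5, 9.0), ("huffman", 6.0, 12.0)]
      N_NODES = 5
      LATENCY_BUDGET_MS = 45.0

      best_energy, best_assignment = float("inf"), None
      for assignment in product(range(len(OPTIONS)), repeat=N_NODES):
          energy = sum(OPTIONS[i][1] for i in assignment)
          latency = sum(OPTIONS[i][2] for i in assignment)
          if latency <= LATENCY_BUDGET_MS and energy < best_energy:
              best_energy, best_assignment = energy, assignment

      print("best energy (mJ):", best_energy)
      print("per-node choices:", [OPTIONS[i][0] for i in best_assignment])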

  5. Applying Semantic Web Concepts to Support Net-Centric Warfare Using the Tactical Assessment Markup Language (TAML)

    DTIC Science & Technology

    2006-06-01

    SPARQL (SPARQL Protocol and RDF Query Language), SQL (Structured Query Language), SUMO (Suggested Upper Merged Ontology), SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure querying a knowledge base is efficient. These algorithms...memory as a treelike structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML

  6. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels

    PubMed Central

    Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng

    2016-01-01

    Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks. PMID:27916917
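
    The first step described above, negotiating a communication radius and matching transmit power per sink, can be illustrated with a simple link-budget calculation: the chosen power is the smallest supported level that still closes the link at the candidate radius. The log-distance path-loss model and all constants below are generic placeholder assumptions, not the paper's parameters.

      # Illustrative link-budget calculation for transmission power control: choose the
      # minimum transmit power (dBm) that reaches a target radius under a log-distance
      # path-loss model. Constants are generic placeholders, not values from the paper.
      import math

      def path_loss_db(d_m, pl_d0=40.0, d0=1.0, n=3.0):
          """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
          return pl_d0 + 10.0 * n * math.log10(d_m / d0)

      def min_tx_power_dbm(radius_m, rx_sensitivity_dbm=-95.0, fade_margin_db=10.0):
          return rx_sensitivity_dbm + fade_margin_db + path_loss_db(radius_m)

      def negotiate(radii_m, tx_levels_dbm):
          """For each candidate radius, pick the smallest supported power level that
          closes the link; return the feasible (radius, level) pairs."""
          plan = []
          for r in radii_m:
              needed = min_tx_power_dbm(r)
              feasible = [lvl for lvl in tx_levels_dbm if lvl >= needed]
              if feasible:
                  plan.append((r, min(feasible)))
          return plan

      print(negotiate(radii_m=[20, 40, 80], tx_levels_dbm=[-10, -5, 0, 5, 10]))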

  7. Rapid DNA transformation in Salmonella Typhimurium by the hydrogel exposure method.

    PubMed

    Elabed, Hamouda; Hamza, Rim; Bakhrouf, Amina; Gaddour, Kamel

    2016-07-01

    Even with advances in molecular cloning and DNA transformation, new or alternative methods that permit DNA penetration into Salmonella enterica subspecies enterica serovar Typhimurium are required in order to use this pathogen in biotechnological or medical applications. In this work, an adapted protocol for bacterial transformation with plasmid DNA based on the "Yoshida effect" was applied and optimized on the Salmonella enterica serovar Typhimurium LT2 reference strain. Plasmid transfer relies on sepiolite as an acicular material to promote cell piercing via the friction forces produced by spreading on the surface of a hydrogel. The transforming mixture, containing sepiolite nanofibers, the bacterial cells to be transformed, and plasmid DNA, was plated directly on selective medium containing 2% agar. To improve the procedure, three variables were tested, and the transformation of Salmonella cells was accomplished using plasmids pUC19 and pBR322. Using the optimized protocol on the Salmonella LT2 strain, the efficiency was about 10(5) transformed cells per 10(9) cells subjected to transformation with 0.2 μg of plasmid DNA. In summary, the procedure is fast, offers adequate efficiency, and promises to become a widely used transformation method in laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
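
    For comparison with other transformation methods, the reported yield can be normalized per microgram of DNA and per input cell (a simple back-of-the-envelope conversion of the figures above):

        \[
          \text{efficiency} \approx \frac{10^{5}\ \text{transformants}}{0.2\ \mu\text{g DNA}}
                            = 5\times10^{5}\ \text{transformants}/\mu\text{g},
          \qquad
          \text{frequency} \approx \frac{10^{5}}{10^{9}} = 10^{-4}\ \text{per input cell}.
        \]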

  8. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels.

    PubMed

    Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng

    2016-11-30

    Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks.

  9. Ultrasound-guided corticosteroid injection therapy for juvenile idiopathic arthritis: 12-year care experience.

    PubMed

    Young, Cody M; Shiels, William E; Coley, Brian D; Hogan, Mark J; Murakami, James W; Jones, Karla; Higgins, Gloria C; Rennebohm, Robert M

    2012-12-01

    Intra-articular corticosteroid injections are a safe and effective treatment for patients with juvenile idiopathic arthritis. The potential scope of care in ultrasound-guided corticosteroid therapy in children and a joint-based corticosteroid dose protocol designed to optimize interdisciplinary care are not found in the current literature. The purpose of this study was to report the spectrum of care, technique and safety of ultrasound-guided corticosteroid injection therapy in patients with juvenile idiopathic arthritis and to propose an age-weight-joint-based corticosteroid dose protocol. A retrospective analysis was performed of 198 patients (ages 21 months to 28 years) referred for treatment of juvenile idiopathic arthritis with corticosteroid therapy. Symptomatic joints and tendon sheaths were treated as prescribed by the referring rheumatologist. An age-weight-joint-based dose protocol was developed and utilized for corticosteroid dose prescription. A total of 1,444 corticosteroid injections (1,340 joints, 104 tendon sheaths) were performed under US guidance. Injection sites included small, medium and large appendicular skeletal joints (upper extremity 497, lower extremity 837) and six temporomandibular joints. For patients with recurrent symptoms, 414 repeat injections were performed, with an average time interval of 17.7 months (range, 0.5-101.5 months) between injections. Complications occurred in 2.6% of injections and included subcutaneous tissue atrophy, skin hypopigmentation, erythema and pruritus. US-guided corticosteroid injection therapy provides dynamic, precise and safe treatment of a broad spectrum of joints and tendon sheaths throughout the entire pediatric musculoskeletal system. An age-weight-joint-based corticosteroid dose protocol is effective and integral to interdisciplinary care of patients with juvenile idiopathic arthritis.

  10. Improving Biomaterials Imaging for Nanotechnology: Rapid Methods for Protein Localization at Ultrastructural Level.

    PubMed

    Cano-Garrido, Olivia; Garcia-Fruitós, Elena; Villaverde, Antonio; Sánchez-Chardi, Alejandro

    2018-04-01

    The preparation of biological samples for electron microscopy is material- and time-consuming because it is often based on long protocols that also may produce artifacts. Protein labeling for transmission electron microscopy (TEM) is such an example, taking several days. However, for protein-based nanotechnology, high-resolution imaging techniques are unique and crucial tools for studying the spatial distribution of these molecules, either alone or as components of biomaterials. In this paper, we tested two new short methods of immunolocalization for TEM, and compared them with a standard protocol in qualitative and quantitative approaches by using four protein-based nanoparticles. We report a significant increase in labeling per nanoparticle area with both new methodologies (H = 19.811; p < 0.001) for all the model antigens tested: GFP (H = 22.115; p < 0.001), MMP-2 (H = 19.579; p < 0.001), MMP-9 (H = 7.567; p < 0.023), and IFN-γ (H = 62.110; p < 0.001). We also found that the most suitable protocol for labeling depends on the nanoparticle's tendency to aggregate. Moreover, the shorter methods reduce artifacts, time (by 30%), residues, and reagents that hinder, lose, or alter antigens, while yielding a significant increase in protein localization (of about 200%). Overall, this study represents a step forward in the development of optimized protocols for the nanoscale localization of peptides and proteins within new biomaterials. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Engineering platform and experimental protocol for design and evaluation of a neurally-controlled powered transfemoral prosthesis.

    PubMed

    Zhang, Fan; Liu, Ming; Harper, Stephen; Lee, Michael; Huang, He

    2014-07-22

    To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated a great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First a platform based on a PC and a visual programming environment were developed to implement the prosthesis control algorithms, including NMI training algorithm, NMI online testing algorithm, and intrinsic control algorithm. To demonstrate the function of this platform, in this study the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller when performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally-controlled powered artificial legs.

  12. Purifying, Separating, and Concentrating Cells From a Sample Low in Biomass

    NASA Technical Reports Server (NTRS)

    Benardini, James N.; LaDuc, Myron T.; Diamond, Rochelle

    2012-01-01

    Frequently there is an inability to process and analyze samples of low biomass due to limited amounts of relevant biomaterial in the sample. Furthermore, molecular biological protocols geared towards increasing the density of recovered cells and biomolecules of interest, by their very nature, also concentrate unwanted inhibitory humic acids and other particulates that have an adverse effect on downstream analysis. A novel and robust fluorescence-activated cell-sorting (FACS)-based technology has been developed for purifying (removing cells from sampling matrices), separating (based on size, density, morphology), and concentrating cells (spores, prokaryotic, eukaryotic) from a sample low in biomass. The technology capitalizes on fluorescent cell-sorting technologies to purify and concentrate bacterial cells from a low-biomass, high-volume sample. Over the past decade, cell-sorting detection systems have undergone enhancements and increased sensitivity, making bacterial cell sorting a feasible concept. Although there are many unknown limitations with regard to the applicability of this technology to environmental samples (smaller cells, few cells, mixed populations), dogmatic principles support the theoretical effectiveness of this technique upon thorough testing and proper optimization. Furthermore, the pilot study on which this report is based proved effective and demonstrated that this technology is capable of sorting and concentrating bacterial endospores and bacterial cells of varying size and morphology. Two commercial off-the-shelf bacterial counting kits were used to optimize a bacterial stain/dye FACS protocol. A LIVE/DEAD BacLight Viability and Counting Kit was used to distinguish between the live and dead cells. A Bacterial Counting Kit comprising SYTO BC (mixture of SYTO dyes) was employed as a broad-spectrum bacterial counting agent. Optimization using epifluorescence microscopy was performed with these two dye/stains. This refined protocol was further validated using varying ratios and mixtures of cells to ensure staining as homogeneous as that of individual cells, and was utilized for flow-analyzer and FACS labeling. This technology focuses on the purification and concentration of cells from low-biomass spacecraft assembly facility samples. Currently, purification and concentration of low-biomass samples plague planetary protection downstream analyses. Having a capability to use flow cytometry to concentrate cells out of low-biomass, high-volume spacecraft/facility sample extracts will be of extreme benefit to the fields of planetary protection and astrobiology. Successful research and development of this novel methodology will significantly increase the knowledge base for designing more effective cleaning protocols, and ultimately lead to a more empirical and true account of the microbial diversity present on spacecraft surfaces. Refined cleaning and an enhanced ability to resolve microbial diversity may decrease the overall cost of spacecraft assembly and/or provide a means to begin to assess challenging planetary protection missions.

  13. Simulation platform of LEO satellite communication system based on OPNET

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhang, Yong; Li, Xiaozhuo; Wang, Chuqiao; Li, Haihao

    2018-02-01

    For the purpose of verifying communication protocol in the low earth orbit (LEO) satellite communication system, an Optimized Network Engineering Tool (OPNET) based simulation platform is built. Using the three-layer modeling mechanism, the network model, the node model and the process model of the satellite communication system are built respectively from top to bottom, and the protocol will be implemented by finite state machine and Proto-C language. According to satellite orbit parameters, orbit files are generated via Satellite Tool Kit (STK) and imported into OPNET, and the satellite nodes move along their orbits. The simulation platform adopts time-slot-driven mode, divides simulation time into continuous time slots, and allocates slot number for each time slot. A resource allocation strategy is simulated on this platform, and the simulation results such as resource utilization rate, system throughput and packet delay are analyzed, which indicate that this simulation platform has outstanding versatility.
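
    The time-slot-driven mode described above can be sketched as a simple loop that divides simulated time into numbered slots and dispatches per-slot actions in order; the slot length and the placeholder allocation routine are assumptions for illustration, and the actual platform implements this logic with OPNET process models in Proto-C rather than in Python.

        SLOT_LENGTH_S = 0.01     # hypothetical slot duration
        SIM_TIME_S = 1.0         # hypothetical total simulated time

        def allocate_resources(slot_no):
            """Placeholder for the resource-allocation strategy evaluated per slot."""
            return {"slot": slot_no, "channel": slot_no % 4}

        def run():
            n_slots = int(SIM_TIME_S / SLOT_LENGTH_S)
            for slot_no in range(n_slots):
                t = slot_no * SLOT_LENGTH_S        # slot start time
                decision = allocate_resources(slot_no)
                # ...update satellite positions from the imported orbit file,
                #    deliver the packets scheduled for this slot, and record
                #    throughput, delay, and utilization statistics...
            return n_slots

        print(run())   # 100 slots processed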

  14. Motor imagery: lessons learned in movement science might be applicable for spaceflight

    PubMed Central

    Bock, Otmar; Schott, Nadja; Papaxanthis, Charalambos

    2015-01-01

    Before participating in a space mission, astronauts undergo parabolic-flight and underwater training to facilitate their subsequent adaptation to weightlessness. Unfortunately, similar training methods can’t be used to prepare re-adaptation to planetary gravity. Here, we propose a quick, simple and inexpensive approach that could be used to prepare astronauts both for the absence and for the renewed presence of gravity. This approach is based on motor imagery (MI), a process in which actions are produced in working memory without any overt output. Training protocols based on MI have repeatedly been shown to modify brain circuitry and to improve motor performance in healthy young adults, healthy seniors and stroke victims, and are routinely used to optimize performance of elite athletes. We propose to use similar protocols preflight, to prepare for weightlessness, and late inflight, to prepare for landing. PMID:26042004

  15. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    NASA Astrophysics Data System (ADS)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.
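
    Written in generic information-thermodynamics notation (not the paper's exact symbols), the non-negativity of the total entropy change and the corresponding bound on extracted work can be stated as

        \[
          \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{mem}} + \Delta S_{\mathrm{res}} - k_{B}\,\Delta I \;\ge\; 0,
          \qquad
          \langle W_{\mathrm{ext}} \rangle \;\le\; -\Delta F + k_{B} T\, I ,
        \]

    where I is the mutual information between system and memory established by the measurement and \Delta F is the free-energy change of the system; the optimal time-dependent protocol is the one that approaches saturation of the work bound.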

  16. A Control-Based Multidimensional Approach to the Role of Optimism in the Use of Dementia Day Care Services.

    PubMed

    Contador, Israel; Fernández-Calvo, Bernardino; Palenzuela, David L; Campos, Francisco Ramos; Rivera-Navarro, Jesús; de Lucena, Virginia Menezes

    2015-11-01

    We examined whether grounded optimism and external locus of control are associated with admission to dementia day care centers (DCCs). A total of 130 informal caregivers were recruited from the Alzheimer's Association in Salamanca (northwest Spain). All caregivers completed an assessment protocol that included the Battery of Generalized Expectancies of Control Scales (BEEGC-20, acronym in Spanish) as well as depression and burden measures. The decision of the care setting at baseline assessment (own home vs DCC) was considered the main outcome measure in the logistic regression analyses. Grounded optimism was a preventive factor for admission (odds ratio [OR]: 0.34 and confidence interval [CI]: 0.15-0.75), whereas external locus of control (OR: 2.75, CI: 1.25-6.03) increased the probabilities of using DCCs. Depression mediated the relationship between optimism and DCCs, but this effect was not consistent for burden. Grounded optimism promotes the extension of care at home for patients with dementia. © The Author(s) 2013.

  17. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks.

    PubMed

    Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X

    2015-12-26

    Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers that transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are also achieved as compared to the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols.
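
    Written in generic notation (rather than the paper's exact symbols), the energy-neutrality constraint behind the cycle optimization requires that, over one data-gathering cycle of length T, each node's harvested energy covers its consumption:

        \[
          \bar{P}^{\mathrm{harv}}_{i}\, T \;\ge\; E^{\mathrm{sense}}_{i}(T) + E^{\mathrm{intra}}_{i}(T) + E^{\mathrm{inter}}_{i}(T)
          \qquad \text{for every node } i,
        \]

    so minimizing T subject to these constraints (a convex program under the usual radio energy models) yields the shortest cycle, and hence the maximal information-gathering rate, that still keeps every node energy neutral.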

  18. Designing a pain management protocol for craniotomy: A narrative review and consideration of promising practices

    PubMed Central

    Vacas, Susana; Van de Wiele, Barbara

    2017-01-01

    Background: Craniotomy is a relatively common surgical procedure with a high incidence of postoperative pain. Development of standardized pain management and enhanced recovery after surgery (ERAS) protocols are necessary and crucial to optimize outcomes and patient satisfaction and reduce health care costs. Methods: This work is based upon a literature search of published manuscripts (between 1996 and 2017) from Pubmed, Cochrane Central Register, and Google Scholar. It seeks to both synthesize and review our current scientific understanding of postcraniotomy pain and its part in neurosurgical ERAS protocols. Results: Strategies to ameliorate craniotomy pain demand interventions during all phases of patient care: preoperative, intraoperative, and postoperative interventions. Pain management should begin in the perioperative period with risk assessment, patient education, and premedication. In the intraoperative period, modifications in anesthesia technique, choice of opioids, acetaminophen and nonsteroidal anti-inflammatory drugs (NSAIDs), regional techniques, dexmedetomidine, ketamine, lidocaine, corticosteroids, and interdisciplinary communication are all strategies to consider and possibly deploy. Opioids remain the mainstay for pain relief, but patient-controlled analgesia, NSAIDs, standardization of pain management, bio/behavioral interventions, modification of head dressings as well as patient-centric management are useful opportunities that potentially improve patient care. Conclusions: Future research on mechanisms, predictors, treatments, and pain management pathways will help define the combinations of interventions that optimize pain outcomes. PMID:29285407

  19. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks

    PubMed Central

    Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X.

    2015-01-01

    Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers that transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are also achieved as compared to the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols. PMID:26712764

  20. Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.

    PubMed

    Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena

    2017-10-17

    Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines.
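
    The scaling equation itself is not reproduced in the abstract; as a purely hypothetical illustration of the idea, the sketch below scales the chlorine-generating reagents linearly with airtight-container volume from an assumed reference setup (all numbers are placeholders, not the published values).

        REF_CONTAINER_L = 27.0   # assumed reference desiccator volume (placeholder)
        REF_BLEACH_ML = 100.0    # bleach volume in the reference container (placeholder)
        REF_HCL_ML = 3.0         # concentrated HCl added to generate Cl2 gas (placeholder)

        def scale_reagents(container_volume_l):
            """Scale Cl2-generating reagent volumes linearly with container volume."""
            factor = container_volume_l / REF_CONTAINER_L
            return {"bleach_ml": REF_BLEACH_ML * factor,
                    "hcl_ml": REF_HCL_ML * factor}

        print(scale_reagents(13.5))   # a half-size container -> half the reagent volumes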

  1. Standardized Method for High-throughput Sterilization of Arabidopsis Seeds

    PubMed Central

    Calhoun, Chistopher S.; Grotewold, Erich; Brkljacic, Jelena

    2017-01-01

    Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines. PMID:29155739

  2. Evidence-based, multidisciplinary approach to the development of a crotalidae polyvalent antivenin (CroFab) protocol at a university hospital.

    PubMed

    Weant, Kyle A; Johnson, Peter N; Bowers, Rebecca C; Armitstead, John A

    2010-03-01

    Several thousand people are bitten annually by venomous snakes in the US. While the development of ovine Crotalidae polyvalent immune Fab antivenin (FabAV) for Crotalinae snakebite envenomations has greatly changed the way this clinical presentation is treated, multiple issues complicate its use. From patient assessment and evaluation, to medication preparation and administration, to the management of adverse drug reactions, the use of this antidote carries with it multiple points of possible medication variances. The inappropriate use of this agent can result in adverse patient consequences and a significant financial burden for both the hospital and the patient. To describe an evidence-based, multidisciplinary approach that was taken to ensure optimal, safe, and cost-effective treatment of patients with FabAV. Following an analysis of the available literature, a multidisciplinary committee was formed to construct a protocol for use of FabAV. This group included clinical pharmacists, pharmacy administrators, emergency medicine physicians who specialized in wilderness medicine, and pharmacy residents. A multidisciplinary FabAV usage protocol was constructed and implemented to ensure appropriate patient evaluation, FabAV use and preparation, monitoring, and follow-up. This protocol was based on the available literature and the expert opinion of the committee. Through the use of a 24-hour in-house pharmacy resident on-call system, clinical pharmacy services were provided to ensure a multidisciplinary approach to the emergent care of these patients. Although limited, initial data show that this approach is effective and may result in substantial cost savings. Initial results from implementation of a protocol for use of FabAV have limited inappropriate use, reduced medication wastage, and decreased costs.

  3. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests.

    PubMed

    Karam, Amanda L; McMillan, Catherine C; Lai, Yi-Chun; de Los Reyes, Francis L; Sederoff, Heike W; Grunden, Amy M; Ranjithan, Ranji S; Levis, James W; Ducoste, Joel J

    2017-06-14

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software.
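
    The continuous monitoring step can be pictured as a simple logging loop that timestamps each reading and appends it to a CSV file for live plotting; the channel names, sampling interval, and the read_channel() wrapper are placeholders for whatever interface the chosen data-acquisition unit and open-source software expose.

        import csv
        import time
        from datetime import datetime

        CHANNELS = {"temperature_C": 0, "pH": 1, "light_umol_m2_s": 2}   # analog inputs (placeholders)
        SAMPLE_INTERVAL_S = 60

        def read_channel(channel):
            """Hypothetical wrapper around the DAQ library call for one analog input."""
            raise NotImplementedError("replace with the driver call for your acquisition unit")

        def log_growth_experiment(path="pbr_log.csv"):
            with open(path, "a", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["timestamp"] + list(CHANNELS))
                while True:
                    row = [datetime.now().isoformat()]
                    row += [read_channel(ch) for ch in CHANNELS.values()]
                    writer.writerow(row)
                    f.flush()                      # keep the file current for live monitoring
                    time.sleep(SAMPLE_INTERVAL_S)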

  4. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests

    PubMed Central

    Karam, Amanda L.; McMillan, Catherine C.; Lai, Yi-Chun; de los Reyes, Francis L.; Sederoff, Heike W.; Grunden, Amy M.; Ranjithan, Ranji S.; Levis, James W.; Ducoste, Joel J.

    2017-01-01

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software. PMID:28654054

  5. tuf-PCR-temporal temperature gradient gel electrophoresis for molecular detection and identification of staphylococci: application to breast milk and neonate gut microbiota.

    PubMed

    Filleron, Anne; Simon, Margaux; Hantova, Stefaniya; Jacquot, Aurélien; Cambonie, Gilles; Marchandin, Hélène; Jumas-Bilak, Estelle

    2014-03-01

    Coagulase-negative staphylococci (CoNS) are a leading cause of infections in preterm infants, mostly involved in late-onset infection in low birth weight neonates. The epidemiology and pathophysiology of these infections remain unclear, notably because the causative agents are gathered in the artificial CoNS group. The aim of this work was to optimize the study of Staphylococcus species diversity in human breast milk and neonate stool, two sample types with bacterial communities dominated by CoNS, using PCR-temporal temperature gradient gel electrophoresis based on the tuf gene. The optimized protocol identified 18 Staphylococcus species involved in neonate gut microbiota and infections and was applied to cultivation-independent study of breast milk and neonate stool. The efficiency, sensitivity, specificity and species discrimination of the proposed protocol appear suitable for patient follow-up in order to link microbiological data at the community level in milk and stool and interpret them from epidemiological and pathophysiological points of view. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Optimized methods of chromatin immunoprecipitation for profiling histone modifications in industrial microalgae Nannochloropsis spp.

    PubMed

    Wei, Li; Xu, Jian

    2018-06-01

    Epigenetic factors such as histone modifications play integral roles in plant development and stress response, yet their implications in algae remain poorly understood. In the industrial oleaginous microalgae Nannochloropsis spp., the lack of an efficient methodology for chromatin immunoprecipitation (ChIP), which determines the specific genomic location of various histone modifications, has hindered probing the epigenetic basis of their photosynthetic carbon conversion and storage as oil. Here, a detailed ChIP protocol was developed for Nannochloropsis oceanica, which represents a reliable approach for the analysis of histone modifications, chromatin state, and transcription factor-binding sites at the epigenetic level. Using ChIP-qPCR, genes related to photosynthetic carbon fixation in this microalga were systematically assessed. Furthermore, a ChIP-Seq protocol was established and optimized, which generated a genome-wide profile of histone modification events, using histone mark H3K9Ac as an example. These results are the first step for appreciation of the chromatin landscape in industrial oleaginous microalgae and for epigenetics-based microalgal feedstock development. © 2018 Phycological Society of America.
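
    ChIP-qPCR enrichment at the assessed loci is typically reported relative to input chromatin; a standard percent-input calculation (not necessarily the exact analysis used in the paper) is

        \[
          \%\,\mathrm{input} \;=\; 100 \times 2^{\,\bigl(C_{t}^{\mathrm{input}} - \log_{2} d\bigr) \, - \, C_{t}^{\mathrm{IP}}},
        \]

    where d is the dilution factor of the input aliquot (e.g. d = 100 if 1% of the chromatin was kept as input) and the two C_t values come from the same primer pair run on the input and immunoprecipitated samples.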

  7. What's new in perioperative nutritional support?

    PubMed

    Awad, Sherif; Lobo, Dileep N

    2011-06-01

    To highlight recent developments in the field of perioperative nutritional support by reviewing clinically pertinent English-language articles from October 2008 to December 2010 that examined the effects of malnutrition on surgical outcomes and on optimizing metabolic function and nutritional status preoperatively and postoperatively. Recognition of patients with or at risk of malnutrition remains poor despite the availability of numerous clinical aids and clear evidence of the adverse effects of poor nutritional status on postoperative clinical outcomes. Unfortunately, poor design and significant heterogeneity remain amongst many studies of nutritional interventions in surgical patients. Patients undergoing elective surgery should be managed within a multimodal pathway that includes evidence-based interventions to optimize nutritional status perioperatively. The aforementioned should include screening patients to identify those at high nutritional risk, perioperative immuno-nutrition, minimizing 'metabolic stress' and insulin resistance by preoperative conditioning with carbohydrate-based drinks, glutamine supplementation, minimal access surgery and enhanced recovery protocols. Finally, gut-specific nutrients and prokinetics should be utilized to improve enteral feed tolerance, thereby permitting early enteral feeding. An evidence-based multimodal pathway that includes interventions to optimize nutritional status may improve outcomes following elective surgery.

  8. Optimal eavesdropping in cryptography with three-dimensional quantum states.

    PubMed

    Bruss, D; Macchiavello, C

    2002-03-25

    We study optimal eavesdropping in quantum cryptography with three-dimensional systems, and show that this scheme is more secure against symmetric attacks than protocols using two-dimensional states. We generalize the corresponding eavesdropping transformation to arbitrary dimensions, and discuss the connection with optimal quantum cloning.

  9. Blueberry (Vaccinium corymbosum L.).

    PubMed

    Song, Guo-Qing; Sink, Kenneth C

    2006-01-01

    Recent advances in plant biotechnology have led to a reliable and reproducible method for genetic transformation of blueberry. These efforts built on previous attempts at transient and stable transformation of blueberry that demonstrated the potential of Agrobacterium tumefaciens-mediated transformation, as well as the difficulties of selecting and regenerating transgenic plants. As a prerequisite for successful stable transformation, efficient regeneration systems were required despite many reports on factors controlling shoot regeneration from leaf explants. The A. tumefaciens-mediated transformation protocol described in this chapter is based on combining efficient regeneration methods and the results of A. tumefaciens-mediated transient transformation studies to optimize selected parameters for gene transfer. The protocol has led to successful regeneration of transgenic plants of four commercially important highbush blueberry cultivars.

  10. Optimization of PMA-PCR Protocol for Viability Detection of Pathogens

    NASA Technical Reports Server (NTRS)

    Mikkelson, Brian J.; Lee, Christine M.; Ponce, Adrian

    2011-01-01

    This study demonstrates that PMA-PCR can capture the loss of viability of a sample in a way that is much more specific and time-efficient than alternative methods. This protocol is particularly useful in scenarios in which sterilization treatments may inactivate organisms but not degrade their DNA. Using a PCR-based method of pathogen detection without first inactivating the DNA of nonviable cells can lead to false positives. The loss of culturability by heat-killing did not prevent amplified PCR products, which supports the use of PMA to prevent amplification and differentiate between viable and dead cells. PMA was shown to inhibit the amplification of DNA by PCR in vegetative cells that had been heat-killed.

  11. Rationally optimized cryopreservation of multiple mouse embryonic stem cell lines: II—Mathematical prediction and experimental validation of optimal cryopreservation protocols

    PubMed Central

    Kashuba, Corinna M.; Benson, James D.; Critser, John K.

    2014-01-01

    In Part I, we documented differences in cryopreservation success measured by membrane integrity in four mouse embryonic stem cell (mESC) lines from different genetic backgrounds (BALB/c, CBA, FVB, and 129R1), and we demonstrated a potential biophysical basis for these differences through a comparative study characterizing the membrane permeability characteristics and osmotic tolerance limits of each cell line. Here we use these values to predict optimal cryoprotectants, cooling rates, warming rates, and plunge temperatures. We subsequently verified these predictions experimentally for their effects on post-thaw recovery. From this study, we determined that a cryopreservation protocol utilizing 1 M propylene glycol, a cooling rate of 1 °C/minute, and plunging into liquid nitrogen at −41 °C, combined with subsequent warming in a 22 °C water bath with agitation, significantly improved post-thaw recovery for three of the four mESC lines, and did not diminish post-thaw recovery for our single exception. It is proposed that this protocol can be successfully applied to most mESC lines beyond those included within this study once the effect of propylene glycol on mESC gene expression, growth characteristics, and germ-line transmission has been determined. Mouse ESC lines with poor survival using current standard cryopreservation protocols or our proposed protocol can be optimized on a case-by-case basis using the method we have outlined over two papers. For our single exception, the CBA cell line, a cooling rate of 5 °C/minute in the presence of 1.0 M dimethyl sulfoxide or 1.0 M propylene glycol, combined with plunge temperature of −80 °C was optimal. PMID:24560712
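
    For quick reference, the protocol parameters reported above can be collected into a small configuration sketch (values taken directly from the abstract; the warming step for the CBA-specific variant is not stated there and is therefore omitted).

        # Cryopreservation parameters reported in the abstract, as a lookup table.
        DEFAULT_MESC_PROTOCOL = {
            "cryoprotectant": "1 M propylene glycol",
            "cooling_rate_C_per_min": 1,
            "plunge_temperature_C": -41,
            "warming": "22 C water bath with agitation",
        }
        CBA_MESC_PROTOCOL = {
            "cryoprotectant": "1.0 M dimethyl sulfoxide or 1.0 M propylene glycol",
            "cooling_rate_C_per_min": 5,
            "plunge_temperature_C": -80,
        }

        def protocol_for(cell_line):
            """Return the reported protocol for a given mESC line (CBA is the exception)."""
            return CBA_MESC_PROTOCOL if cell_line == "CBA" else DEFAULT_MESC_PROTOCOL

        print(protocol_for("FVB")["cooling_rate_C_per_min"])   # -> 1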

  12. A Study on Market-based Strategic Procurement Planning in Convergent Supply Networks

    NASA Astrophysics Data System (ADS)

    Opadiji, Jayeola Femi; Kaihara, Toshiya

    We present a market-based decentralized approach that uses a market-oriented programming algorithm to obtain a Pareto-optimal allocation of resources traded among agents representing enterprise units in a supply network. The proposed method divides the network into a series of Walrasian markets in order to obtain procurement budgets for enterprises in the network. An interaction protocol based on market value propagation is constructed to coordinate the flow of resources across the network layers. The method mitigates the effect of product complementarity in convergent networks by allowing enterprises to hold private valuations of resources in the markets.
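
    The market-clearing step in market-oriented programming is commonly computed by tatonnement, in which each market's price is adjusted in proportion to its excess demand until demand and supply approximately balance; the toy demand and supply functions below are placeholders, not the paper's agent models.

        def excess_demand(price):
            demand = 10.0 / price          # toy downward-sloping demand
            supply = 2.0 * price           # toy upward-sloping supply
            return demand - supply

        def tatonnement(price=1.0, step=0.05, tol=1e-6, max_iter=10000):
            """Adjust the price toward the Walrasian equilibrium of one market."""
            for _ in range(max_iter):
                z = excess_demand(price)
                if abs(z) < tol:
                    break
                price += step * z          # raise the price when demand exceeds supply
            return price

        print(round(tatonnement(), 4))     # converges near the clearing price, sqrt(5) ~ 2.2361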

  13. TH-C-18A-08: A Management Tool for CT Dose Monitoring, Analysis, and Protocol Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Chan, F; Newman, B

    2014-06-15

    Purpose: To develop a customizable tool for enterprise-wide management of CT protocols and analysis of radiation dose information of CT exams for a variety of quality control applications. Methods: All clinical CT protocols implemented on the 11 CT scanners at our institution were extracted in digital format. The original protocols had been preset by our CT management team. A commercial CT dose tracking software (DoseWatch, GE Healthcare, WI) was used to collect exam information (exam date, patient age, etc.), scanning parameters, and radiation doses for all CT exams. We developed a MATLAB-based program (MathWorks, MA) with a graphical user interface which allows the user to analyze the scanning protocols with the actual dose estimates, and compare the data to national (ACR, AAPM) and internal reference values for CT quality control. Results: The CT protocol review portion of our tool allows the user to look up the scanning and image reconstruction parameters of any protocol on any of the installed CT systems among about 120 protocols per scanner. In the dose analysis tool, dose information of all CT exams (from 05/2013 to 02/2014) was stratified on a protocol level, and within a protocol down to series level, i.e., each individual exposure event. This allows numerical and graphical review of dose information of any combination of scanner models, protocols and series. The key functions of the tool include: statistics of CTDI, DLP and SSDE, dose monitoring using user-set CTDI/DLP/SSDE thresholds, look-up of any CT exam dose data, and CT protocol review. Conclusion: Our in-house CT management tool provides radiologists, technologists and administration first-hand, near real-time, enterprise-wide knowledge of CT dose levels for different exam types. Medical physicists use this tool to manage CT protocols and to compare and optimize dose levels across different scanner models. It provides technologists feedback on CT scanning operation, and knowledge of important dose baselines and thresholds.
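
    The threshold-based dose monitoring mentioned above amounts to comparing each exam's dose metrics with user-set, protocol-specific limits and flagging exceedances; the field names and example thresholds in the sketch below are illustrative only, and the actual tool is MATLAB-based and reads from the dose-tracking database.

        THRESHOLDS = {   # protocol -> (CTDIvol limit [mGy], DLP limit [mGy*cm], SSDE limit [mGy])
            "abdomen_pelvis_routine": (20.0, 1000.0, 25.0),
            "head_routine":           (60.0, 1000.0, None),   # SSDE not used for head exams
        }

        def flag_exam(exam):
            """Return a list of human-readable flags for any metric over its limit."""
            limits = THRESHOLDS.get(exam["protocol"])
            if limits is None:
                return ["unknown protocol"]
            flags = []
            metrics = ("CTDIvol", "DLP", "SSDE")
            values = (exam.get("ctdi"), exam.get("dlp"), exam.get("ssde"))
            for name, value, limit in zip(metrics, values, limits):
                if limit is not None and value is not None and value > limit:
                    flags.append(f"{name} {value:.1f} exceeds limit {limit:.1f}")
            return flags

        print(flag_exam({"protocol": "abdomen_pelvis_routine",
                         "ctdi": 24.3, "dlp": 880.0, "ssde": 27.1}))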

  14. Optimization and comparison of simultaneous and separate acquisition protocols for dual isotope myocardial perfusion SPECT

    PubMed Central

    Ghaly, Michael; Links, Jonathan M; Frey, Eric C

    2015-01-01

    Dual-isotope simultaneous-acquisition (DISA) rest-stress myocardial perfusion SPECT (MPS) protocols offer a number of advantages over separate acquisition. However, crosstalk contamination due to scatter in the patient and interactions in the collimator degrade image quality. Compensation can reduce the effects of crosstalk, but does not entirely eliminate image degradations. Optimizing acquisition parameters could further reduce the impact of crosstalk. In this paper we investigate the optimization of the rest Tl-201 energy window width and relative injected activities using the ideal observer (IO), a realistic digital phantom population and Monte Carlo (MC) simulated Tc-99m and Tl-201 projections as a means to improve image quality. We compared performance on a perfusion defect detection task for Tl-201 acquisition energy window widths varying from 4 to 40 keV centered at 72 keV for a camera with a 9% energy resolution. We also investigated 7 different relative injected activities, defined as the ratio of Tc-99m and Tl-201 activities, while keeping the total effective dose constant at 13.5 mSv. For each energy window and relative injected activity, we computed the IO test statistics using a Markov chain Monte Carlo (MCMC) method for an ensemble of 1,620 triplets of fixed and reversible defect-present, and defect-absent noisy images modeling realistic background variations. The volume under the 3-class receiver operating characteristic (ROC) surface (VUS) was estimated and served as the figure of merit. For simultaneous acquisition, the IO suggested that relative Tc-to-Tl injected activity ratios of 2.6–5 and acquisition energy window widths of 16–22% were optimal. For separate acquisition, we observed a broad range of optimal relative injected activities from 2.6 to 12.1 and acquisition energy window of widths 16–22%. A negative correlation between Tl-201 injected activity and the width of the Tl-201 energy window was observed in these ranges. The results also suggested that DISA methods could potentially provide image quality as good as that obtained with separate acquisition protocols. We compared observer performance for the optimized protocols and the current clinical protocol using separate acquisition. The current clinical protocols provided better performance at a cost of injecting the patient with approximately double the injected activity of Tc-99m and Tl-201, resulting in substantially increased radiation dose. PMID:26083239

  15. Optimizing the MAC Protocol in Localization Systems Based on IEEE 802.15.4 Networks

    PubMed Central

    Claver, Jose M.; Ezpeleta, Santiago

    2017-01-01

    Radio frequency signals are commonly used in the development of indoor localization systems. The infrastructure of these systems includes some beacons placed at known positions that exchange radio packets with users to be located. When the system is implemented using wireless sensor networks, the wireless transceivers integrated in the network motes are usually based on the IEEE 802.15.4 standard. But, the CSMA-CA, which is the basis for the medium access protocols in this category of communication systems, is not suitable when several users want to exchange bursts of radio packets with the same beacon to acquire the radio signal strength indicator (RSSI) values needed in the location process. Therefore, new protocols are necessary to avoid the packet collisions that appear when multiple users try to communicate with the same beacons. On the other hand, the RSSI sampling process should be carried out very quickly because some systems cannot tolerate a large delay in the location process. This is even more important when the RSSI sampling process includes measures with different signal power levels or frequency channels. The principal objective of this work is to speed up the RSSI sampling process in indoor localization systems. To achieve this objective, the main contribution is the proposal of a new MAC protocol that eliminates the medium access contention periods and decreases the number of packet collisions to accelerate the RSSI collection process. Moreover, the protocol increases the overall network throughput taking advantage of the frequency channel diversity. The presented results show the suitability of this protocol for reducing the RSSI gathering delay and increasing the network throughput in simulated and real environments. PMID:28684666
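
    The contention-free idea behind the proposed MAC can be sketched as a deterministic slot-and-channel assignment: every user requesting an RSSI burst gets its own block of slots, optionally on different IEEE 802.15.4 channels, so bursts from different users cannot collide. The channel and burst sizes below are placeholders, not the protocol's actual parameters.

        N_CHANNELS = 4     # 802.15.4 channels used in parallel (illustrative)
        BURST_LEN = 8      # packets per RSSI burst (illustrative)

        def schedule(users):
            """Return (user, channel, slot) triples; no two users share a channel+slot."""
            plan = []
            for i, user in enumerate(users):
                channel = i % N_CHANNELS                     # spread users across channels
                first_slot = (i // N_CHANNELS) * BURST_LEN   # disjoint slot blocks per channel
                for k in range(BURST_LEN):
                    plan.append((user, channel, first_slot + k))
            return plan

        print(schedule(["u1", "u2", "u3", "u4", "u5"])[:3])
        # [('u1', 0, 0), ('u1', 0, 1), ('u1', 0, 2)]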

  16. Evaluation of telomere length in human cardiac tissues using cardiac quantitative FISH.

    PubMed

    Sharifi-Sanjani, Maryam; Meeker, Alan K; Mourkioti, Foteini

    2017-09-01

    Telomere length has been correlated with various diseases, including cardiovascular disease and cancer. The use of currently available telomere-length measurement techniques is often restricted by the requirement of a large number of cells (Southern-based techniques) or the lack of information on individual cells or telomeres (PCR-based methods). Although several methods have been used to measure telomere length in tissues as a whole, the assessment of cell-type-specific telomere length provides valuable information on individual cell types. The development of fluorescence in situ hybridization (FISH) technologies enables the quantification of telomeres in individual chromosomes, but the use of these methods is dependent on the availability of isolated cells, which prevents their use with fixed archival samples. Here we describe an optimized quantitative FISH (Q-FISH) protocol for measuring telomere length that bypasses the previous limitations by avoiding contributions from undesired cell types. We have used this protocol on small paraffin-embedded cardiac-tissue samples. This protocol describes step-by-step procedures for tissue preparation, permeabilization, cardiac-tissue pretreatment and hybridization with a Cy3-labeled telomeric repeat complementing (CCCTAA)3 peptide nucleic acid (PNA) probe coupled with cardiac-specific antibody staining. We also describe how to quantify telomere length by means of the fluorescence intensity and area of each telomere within individual nuclei. This protocol provides comparative cell-type-specific telomere-length measurements in relatively small human cardiac samples and offers an attractive technique to test hypotheses implicating telomere length in various cardiac pathologies. The current protocol (from tissue collection to image procurement) takes ∼28 h along with three overnight incubations. We anticipate that the protocol could be easily adapted for use on different tissue types.

  17. Optimizing the MAC Protocol in Localization Systems Based on IEEE 802.15.4 Networks.

    PubMed

    Pérez-Solano, Juan J; Claver, Jose M; Ezpeleta, Santiago

    2017-07-06

    Radio frequency signals are commonly used in the development of indoor localization systems. The infrastructure of these systems includes some beacons placed at known positions that exchange radio packets with users to be located. When the system is implemented using wireless sensor networks, the wireless transceivers integrated in the network motes are usually based on the IEEE 802.15.4 standard. But, the CSMA-CA, which is the basis for the medium access protocols in this category of communication systems, is not suitable when several users want to exchange bursts of radio packets with the same beacon to acquire the radio signal strength indicator (RSSI) values needed in the location process. Therefore, new protocols are necessary to avoid the packet collisions that appear when multiple users try to communicate with the same beacons. On the other hand, the RSSI sampling process should be carried out very quickly because some systems cannot tolerate a large delay in the location process. This is even more important when the RSSI sampling process includes measures with different signal power levels or frequency channels. The principal objective of this work is to speed up the RSSI sampling process in indoor localization systems. To achieve this objective, the main contribution is the proposal of a new MAC protocol that eliminates the medium access contention periods and decreases the number of packet collisions to accelerate the RSSI collection process. Moreover, the protocol increases the overall network throughput taking advantage of the frequency channel diversity. The presented results show the suitability of this protocol for reducing the RSSI gathering delay and increasing the network throughput in simulated and real environments.

  18. Silacyclobutane-based diblock copolymers with vinylferrocene, ferrocenylmethyl methacrylate, and [1]dimethylsilaferrocenophane.

    PubMed

    Gallei, Markus; Tockner, Stefan; Klein, Roland; Rehahn, Matthias

    2010-05-12

    Well-defined diblock copolymers have been prepared in which three different ferrocene-based monomers are combined with 1,1-dimethylsilacyclobutane (DMSB) and 1-methylsilacyclobutane, respectively, as their carbosilane counterparts. Optimized procedures are reported for the living anionic chain growth following sequential monomer addition protocols, ensuring narrow polydispersities and high blocking efficiencies. The DMSB-containing copolymers show phase segregation in the bulk state, leading to micromorphologies composed of crystalline DMSB phases and amorphous polymetallocene phases. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Multiphase contrast medium injection for optimization of computed tomographic coronary angiography.

    PubMed

    Budoff, Matthew Jay; Shinbane, Jerold S; Child, Janis; Carson, Sivi; Chau, Alex; Liu, Stephen H; Mao, SongShou

    2006-02-01

    Electron beam angiography is a minimally invasive imaging technique. Adequate vascular opacification throughout the study remains a critical issue for image quality. We hypothesized that vascular image opacification and uniformity of vascular enhancement between slices can be improved using multiphase contrast medium injection protocols. We enrolled 244 consecutive patients who were randomized to three different injection protocols: single-phase contrast medium injection (Group 1), dual-phase contrast medium injection with each phase at a different injection rate (Group 2), and a three-phase injection with two phases of contrast medium injection followed by a saline injection phase (Group 3). Parameters measured were aortic opacification based on Hounsfield units and uniformity of aortic enhancement at predetermined slices (locations from top [level 1] to base [level 60]). In Group 1, contrast opacification differed across seven predetermined locations (scan levels: 1st versus 60th, P < .05), demonstrating significant nonuniformity. In Group 2, there was more uniform vascular enhancement, with no significant differences between the first 50 slices (P > .05). In Group 3, there was greater uniformity of vascular enhancement and higher mean Hounsfield units value across all 60 images, from the aortic root to the base of the heart (P < .05). The three-phase injection protocol improved vascular opacification at the base of the heart, as well as uniformity of arterial enhancement throughout the study.

  20. A real-time polymerase chain reaction-based protocol for low/medium-throughput Y-chromosome microdeletions analysis.

    PubMed

    Segat, Ludovica; Padovan, Lara; Doc, Darja; Petix, Vincenzo; Morgutti, Marcello; Crovella, Sergio; Ricci, Giuseppe

    2012-12-01

    We describe a real-time polymerase chain reaction (PCR) protocol based on the fluorescent molecule SYBR Green chemistry, for a low- to medium-throughput analysis of Y-chromosome microdeletions, optimized according to the European guidelines and aimed at making the protocol faster, avoiding post-PCR processing, and simplifying the results interpretation. We screened 156 men from the Assisted Reproduction Unit, Department of Obstetrics and Gynecology, Institute for Maternal and Child Health IRCCS Burlo Garofolo (Trieste, Italy), 150 not presenting Y-chromosome microdeletion, and 6 with microdeletions in different azoospermic factor (AZF) regions. For each sample, the Zinc finger Y-chromosomal protein (ZFY), sex-determining region Y (SRY), sY84, sY86, sY127, sY134, sY254, and sY255 loci were analyzed by performing one reaction for each locus. AZF microdeletions were successfully detected in six individuals, confirming the results obtained with commercial kits. Our real-time PCR protocol proved to be a rapid, safe, and relatively cheap method that was suitable for a low- to medium-throughput diagnosis of Y-chromosome microdeletion, which allows an analysis of approximately 10 samples (with the addition of positive and negative controls) in a 96-well plate format, or approximately 46 samples in a 384-well plate for all markers simultaneously, in less than 2 h without the need of post-PCR manipulation.
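
    A simplified version of the interpretation logic for this marker panel (following the European-guideline convention of two markers per AZF region; a sketch of the decision rule only, not the authors' software) could look like this:

        REGIONS = {"AZFa": ("sY84", "sY86"),
                   "AZFb": ("sY127", "sY134"),
                   "AZFc": ("sY254", "sY255")}

        def interpret(amplified):
            """amplified: set of locus names giving a positive SYBR Green signal."""
            if "ZFY" not in amplified:
                return "invalid run: internal control ZFY failed"
            if "SRY" not in amplified:
                return "check sample: SRY absent"
            deletions = [region for region, markers in REGIONS.items()
                         if all(m not in amplified for m in markers)]
            return deletions or "no AZF microdeletion detected"

        print(interpret({"ZFY", "SRY", "sY84", "sY86", "sY127", "sY134"}))   # -> ['AZFc']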

  1. Clinical outcomes and mortality before and after implementation of a pediatric sepsis protocol in a limited resource setting: A retrospective cohort study in Bangladesh.

    PubMed

    Kortz, Teresa Bleakly; Axelrod, David M; Chisti, Mohammod J; Kache, Saraswati

    2017-01-01

    Pediatric sepsis has a high mortality rate in limited resource settings. Sepsis protocols have been shown to be a cost-effective strategy to improve morbidity and mortality in a variety of populations and settings. At Dhaka Hospital in Bangladesh, mortality from pediatric sepsis in high-risk children previously approached 60%, which prompted the implementation of an evidence-based protocol in 2010. The clinical effectiveness of this protocol had not been measured. We hypothesized that implementation of a pediatric sepsis protocol improved clinical outcomes, including reducing mortality and length of hospital stay. This was a retrospective cohort study of children 1-59 months old with a diagnosis of sepsis, severe sepsis or septic shock admitted to Dhaka Hospital from 10/25/2009-10/25/2011. The primary outcome was inpatient mortality pre- and post-protocol implementation. Secondary outcomes included fluid overload, heart failure, respiratory insufficiency, length of hospital stay, and protocol compliance, as measured by antibiotic and fluid bolus administration within 60 minutes of hospital presentation. 404 patients were identified by a key-word search of the electronic medical record; 328 patients with a primary diagnosis of sepsis, severe sepsis, or septic shock were included (143 pre- and 185 post-protocol) in the analysis. Pre- and post-protocol mortality rates were similar, with no statistically significant difference (32.17% vs. 34.59%, p = 0.72). The adjusted odds ratio (AOR) for post-protocol mortality was 1.55 (95% CI, 0.88-2.71). The odds for developing fluid overload were significantly higher post-protocol (AOR 3.45, 95% CI, 2.04-5.85), as were the odds of developing heart failure (AOR 4.52, 95% CI, 1.43-14.29) and having a longer median length of stay (AOR 1.81, 95% CI 1.10-2.96). There was no statistically significant difference in respiratory insufficiency (pre- 65.7% vs. post- 70.3%, p = 0.4) or antibiotic administration between the cohorts (pre- 16.08% vs. post- 12.43%, p = 0.42). Implementation of a pediatric sepsis protocol did not improve all-cause mortality or length of stay and may have been associated with increased fluid overload and heart failure during the study period in a large, non-governmental hospital in Bangladesh. Similar rates of early antibiotic administration may indicate poor protocol compliance. Though evidence-based protocols are a potential cost-effective strategy to improve outcomes, future studies should focus on optimal implementation of context-relevant sepsis protocols in limited resource settings.

  2. Enhancing Self-Efficacy for Help-Seeking Among Transition-Aged Youth in Postsecondary Settings With Mental Health and/or Substance Use Concerns, Using Crowd-Sourced Online and Mobile Technologies: The Thought Spot Protocol

    PubMed Central

    Abi-Jaoude, Alexxa; Johnson, Andrew; Ferguson, Genevieve; Sanches, Marcos; Levinson, Andrea; Robb, Janine; Heffernan, Olivia; Herzog, Tyson; Chaim, Gloria; Cleverley, Kristin; Eysenbach, Gunther; Henderson, Joanna; S Hoch, Jeffrey; Hollenberg, Elisa; Jiang, Huan; Isaranuwatchai, Wanrudee; Law, Marcus; Sharpe, Sarah; Tripp, Tim; Voineskos, Aristotle

    2016-01-01

    Background Seventy percent of lifetime cases of mental illness emerge prior to age 24. While early detection and intervention can address approximately 70% of child and youth cases of mental health concerns, the majority of youth with mental health concerns do not receive the services they need. Objective The objective of this paper is to describe the protocol for optimizing and evaluating Thought Spot, a Web- and mobile-based platform cocreated with end users that is designed to improve the ability of students to access mental health and substance use services. Methods This project will be conducted in 2 distinct phases, which will aim to (1) optimize the existing Thought Spot electronic health/mobile health intervention through youth engagement, and (2) evaluate the impact of Thought Spot on self-efficacy for mental health help-seeking and health literacy among university and college students. Phase 1 will utilize participatory action research and participatory design research to cocreate and coproduce solutions with members of our target audience. Phase 2 will consist of a randomized controlled trial to test the hypothesis that the Thought Spot intervention will show improvements in intentions for, and self-efficacy in, help-seeking for mental health concerns. Results We anticipate that enhancements will include (1) user analytics and feedback mechanisms, (2) peer mentorship and/or coaching functionality, (3) crowd-sourcing and data hygiene, and (4) integration of evidence-based consumer health and research information. Conclusions This protocol outlines the important next steps in understanding the impact of the Thought Spot platform on the behavior of postsecondary, transition-aged youth students when they seek information and services related to mental health and substance use. PMID:27815232

  3. Translating evidence-based protocol of wound drain management for total joint arthroplasty into practice: A quasi-experimental study.

    PubMed

    Tsang, Lap Fung; Cheng, Hang Cheong; Ho, Hon Shuen; Hsu, Yung Chak; Chow, Chiu Man; Law, Heung Wah; Fong, Lup Chau; Leung, Lok Ming; Kong, Ivy Ching Yan; Chan, Chi Wai; Sham, Alice So Yuen

    2016-05-01

    Although various drains have long been used in total joint replacement, evidence suggests inconsistent practice exists in the use of drainage systems, including intermittent suction versus free drainage, and variations in the optimal timing for wound drain removal. A comprehensive systematic review of available evidence up to 2013 was conducted in a previous study and a protocol was adapted for clinical application according to the summary of the retrieved information (Tsang, 2015). The aims were to determine whether the protocol could reduce blood loss and blood transfusion after operation and to develop a record form to enhance communication of drainage records amongst surgeons and nurses. A quasi-experimental time-series design was undertaken. In the conventional group, surgeons ordered free drainage if the drain output was more than 300 ml, and the time of removal of the drain was based on their professional judgement. In the protocol group, the method of drainage was dependent on the drainage output, as was the timing of removal of the drain. A standardized record form was developed to guide operating room and orthopaedic ward nurses in managing the drainage system. The drain was removed significantly earlier in the protocol group. The blood loss rate during the first post-operative hour was extremely low in the protocol group owing to the clamping effect. Blood loss volume during the first three hours in the protocol group was significantly lower than that in the conventional group. Clamping was necessary in only 11.1% and 4% of cases at three and four hours post-operation, respectively, and no clamping was required at two and eight hours post-operation. There was no significant difference between the two groups in blood loss at drain removal or in the blood transfusion required upon drain removal. This is the first clinical study to develop an evidence-based protocol to manage wound drains effectively in Hong Kong. Total blood loss and blood transfusions were not significantly different between the conventional and protocol groups. A standardized record form is beneficial for enhancing communication between doctors and nurses as well as for monitoring and observing drainage effectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Near-optimality of special periodic protocols for fluid models of single server switched networks with switchover times

    NASA Astrophysics Data System (ADS)

    Matveev, A. S.; Ishchenko, R.

    2017-11-01

    We consider a generic deterministic time-invariant fluid model of a single server switched network, which consists of finitely many infinite-size buffers (queues) and receives constant-rate inflows of jobs from the outside. Any flow undergoes a multi-phase service, entering a specific buffer after every phase, and ultimately leaves the network; the route of the flow over the buffers is pre-specified, and flows may merge inside the network. They share a common source of service, which can serve at most one buffer at a time and has to switch among buffers from time to time; any switch consumes a nonzero switchover period. With respect to the long-run maximal scaled wip (work in progress) performance metric, near-optimality of periodic scheduling and service protocols is established: the global optimum (taken over all feasible processes in the network, irrespective of the initial state) can be approached by such a protocol to within an arbitrarily small error. Moreover, this can be achieved with a special periodic protocol introduced in the paper. It is also shown that the exhaustive policy is optimal for any buffer whose service at the maximal rate does not cause growth of the scaled wip.
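
    To make the model concrete, the following toy simulation (Python) implements a single server that drains one buffer at a time, pays a fixed switchover period at every switch, and follows a round-robin exhaustive policy; the inflow rates, service rate, switchover time and initial buffer levels are illustrative assumptions and not the protocols constructed in the paper.

        # Toy fluid model of a single-server switched network with switchover times.
        DT = 0.001                    # integration step
        INFLOW = [0.2, 0.3, 0.1]      # constant inflow rates (assumed)
        MU = 1.0                      # service rate of the single server
        SWITCHOVER = 0.5              # time lost whenever the server switches buffers

        def simulate(horizon=200.0):
            x = [1.0, 1.0, 1.0]       # initial buffer contents (assumed)
            served, t, switch_left, max_wip = 0, 0.0, 0.0, 0.0
            while t < horizon:
                for i in range(len(x)):
                    x[i] += INFLOW[i] * DT            # fluid keeps arriving
                if switch_left > 0.0:
                    switch_left -= DT                 # server idle during switchover
                else:
                    x[served] = max(0.0, x[served] - MU * DT)
                    if x[served] == 0.0:              # exhaustive: leave only when empty
                        served = (served + 1) % len(x)
                        switch_left = SWITCHOVER
                max_wip = max(max_wip, sum(x))        # track work in progress
                t += DT
            return max_wip

        print("peak work in progress over the run:", round(simulate(), 3))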

  5. Penicillin allergy: optimizing diagnostic protocols, public health implications, and future research needs.

    PubMed

    Macy, Eric

    2015-08-01

    Unverified penicillin allergy is being increasingly recognized as a public health concern. The ideal protocol for verifying true clinically significant IgE-mediated penicillin allergy needs to use only commercially available materials, be well tolerated and easy to perform in both the inpatient and outpatient settings, and minimize false-positive determinations. This review concentrates on articles published in 2013 and 2014 that present new data relating to the diagnosis and management of penicillin allergy. Penicillin allergy can be safely evaluated at this time, in patients with an appropriate clinical history of penicillin allergy, using only penicilloyl-poly-lysine and native penicillin G as skin test reagents, if an oral challenge with amoxicillin 250 mg, followed by 1 h of observation, is given to all skin test negative individuals. Millions of individuals falsely labeled with penicillin allergy need to be evaluated to safely allow them to use penicillin-class antibiotics and avoid morbidity associated with penicillin avoidance. Further research is needed to determine optimal protocol(s). There will still be a 1-2% rate of adverse reactions reported with all future therapeutic penicillin-class antibiotic use, even with optimal methods used to determine acute penicillin tolerance. Only a small minority of these new reactions will be IgE-mediated.

  6. A Geographical Heuristic Routing Protocol for VANETs

    PubMed Central

    Urquiza-Aguiar, Luis; Tripp-Barba, Carolina; Aguilar Igartua, Mónica

    2016-01-01

    Vehicular ad hoc networks (VANETs) leverage the communication system of Intelligent Transportation Systems (ITS). Recently, Delay-Tolerant Network (DTN) routing protocols have gained popularity among the research community for use in non-safety VANET applications and services like traffic reporting. Vehicular DTN protocols use geographical and local information to make forwarding decisions. However, current proposals only consider the selection of the best candidate based on a local search. In this paper, we propose a generic Geographical Heuristic Routing (GHR) protocol that can be applied to any DTN geographical routing protocol that makes forwarding decisions hop by hop. GHR incorporates adaptations of the simulated annealing and Tabu-search meta-heuristics, which have largely been used to improve local-search results in discrete optimization. We include a complete performance evaluation of GHR in a multi-hop VANET simulation scenario for a reporting service. Our study analyzes all of the meaningful configurations of GHR and offers a statistical analysis of our findings by means of MANOVA tests. Our results indicate that the use of a Tabu list contributes to improving the packet delivery ratio by around 5% to 10%. Moreover, if Tabu is used, then the simulated annealing routing strategy performs better than selecting the best node with carry and forwarding (the default operation). PMID:27669254

  7. A Geographical Heuristic Routing Protocol for VANETs.

    PubMed

    Urquiza-Aguiar, Luis; Tripp-Barba, Carolina; Aguilar Igartua, Mónica

    2016-09-23

    Vehicular ad hoc networks (VANETs) leverage the communication system of Intelligent Transportation Systems (ITS). Recently, Delay-Tolerant Network (DTN) routing protocols have gained popularity among the research community for use in non-safety VANET applications and services like traffic reporting. Vehicular DTN protocols use geographical and local information to make forwarding decisions. However, current proposals only consider the selection of the best candidate based on a local search. In this paper, we propose a generic Geographical Heuristic Routing (GHR) protocol that can be applied to any DTN geographical routing protocol that makes forwarding decisions hop by hop. GHR incorporates adaptations of the simulated annealing and Tabu-search meta-heuristics, which have largely been used to improve local-search results in discrete optimization. We include a complete performance evaluation of GHR in a multi-hop VANET simulation scenario for a reporting service. Our study analyzes all of the meaningful configurations of GHR and offers a statistical analysis of our findings by means of MANOVA tests. Our results indicate that the use of a Tabu list contributes to improving the packet delivery ratio by around 5% to 10%. Moreover, if Tabu is used, then the simulated annealing routing strategy performs better than selecting the best node with carry and forwarding (the default operation).
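
    The hop-by-hop selection idea common to both versions of this record can be sketched as follows (Python); the candidate list, the distance-to-destination metric, the Tabu set and the annealing temperature are illustrative assumptions, not the GHR parameter settings evaluated in the papers.

        import math, random

        # Next-hop choice combining a Tabu list with a simulated-annealing
        # acceptance rule over a greedy geographic metric (distance to destination).
        def next_hop(candidates, dist_to_dest, tabu, temperature=1.0):
            allowed = [c for c in candidates if c not in tabu] or list(candidates)
            best = min(allowed, key=lambda c: dist_to_dest[c])    # greedy candidate
            rival = random.choice(allowed)                        # random alternative
            delta = dist_to_dest[rival] - dist_to_dest[best]
            # Accept a worse rival with probability exp(-delta / T); otherwise stay greedy.
            if delta <= 0 or random.random() < math.exp(-delta / max(temperature, 1e-9)):
                return rival
            return best

        distances = {"a": 120.0, "b": 90.0, "c": 140.0}           # metres (made up)
        print(next_hop(["a", "b", "c"], distances, tabu={"b"}, temperature=10.0))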

  8. Cryotherapy for acute ankle sprains: a randomised controlled study of two different icing protocols.

    PubMed

    Bleakley, C M; McDonough, S M; MacAuley, D C; Bjordal, J

    2006-08-01

    The use of cryotherapy in the management of acute soft tissue injury is largely based on anecdotal evidence. Preliminary evidence suggests that intermittent cryotherapy applications are most effective at reducing tissue temperature to optimal therapeutic levels. However, its efficacy in treating injured human subjects is not yet known. To compare the efficacy of an intermittent cryotherapy treatment protocol with a standard cryotherapy treatment protocol in the management of acute ankle sprains. Sportsmen (n = 44) and members of the general public (n = 45) with mild/moderate acute ankle sprains. Subjects were randomly allocated, under strictly controlled double blind conditions, to one of two treatment groups: standard ice application (n = 46) or intermittent ice application (n = 43). The mode of cryotherapy was standardised across groups and consisted of melting iced water (0 degrees C) in a standardised pack. Function, pain, and swelling were recorded at baseline and one, two, three, four, and six weeks after injury. Subjects treated with the intermittent protocol had significantly (p<0.05) less ankle pain on activity than those using a standard 20 minute protocol; however, one week after ankle injury, there were no significant differences between groups in terms of function, swelling, or pain at rest. Intermittent applications may enhance the therapeutic effect of ice in pain relief after acute soft tissue injury.

  9. Developing the Fourth Evaluation Dimension: A Protocol for Evaluation of Video From the Patient's Perspective During Major Incident Exercises.

    PubMed

    Haverkort, J J Mark; Leenen, Luke P H

    2017-10-01

    Presently used evaluation techniques rely on 3 traditional dimensions: reports from observers, registration system data, and observational cameras. Some of these techniques are observer-dependent and are not reproducible for a second review. This proof-of-concept study aimed to test the feasibility of extending evaluation to a fourth dimension, the patient's perspective. Footage was obtained during a large, full-scale hospital trauma drill. Two mock victims were equipped with point-of-view cameras filming from the patient's head. Based on the Major Incident Hospital's first experience during the drill, a protocol was developed for a prospective, standardized method to evaluate a hospital's major incident response from the patient's perspective. The protocol was then tested in a second drill for its feasibility. New insights were gained after review of the footage. The traditional observer missed some of the evaluation points, which were seen on the point-of-view cameras. The information gained from the patient's perspective proved to be implementable into the designed protocol. Use of point-of-view camera recordings from a mock patient's perspective is a valuable addition to traditional evaluation of trauma drills and trauma care. Protocols should be designed to optimize and objectify judgement of such footage. (Disaster Med Public Health Preparedness. 2017;11:594-599).

  10. Measurement of drug-target engagement in live cells by two-photon fluorescence anisotropy imaging.

    PubMed

    Vinegoni, Claudio; Fumene Feruglio, Paolo; Brand, Christian; Lee, Sungon; Nibbs, Antoinette E; Stapleton, Shawn; Shah, Sunil; Gryczynski, Ignacy; Reiner, Thomas; Mazitschek, Ralph; Weissleder, Ralph

    2017-07-01

    The ability to directly image and quantify drug-target engagement and drug distribution with subcellular resolution in live cells and whole organisms is a prerequisite to establishing accurate models of the kinetics and dynamics of drug action. Such methods would thus have far-reaching applications in drug development and molecular pharmacology. We recently presented one such technique based on fluorescence anisotropy, a spectroscopic method based on polarized-light analysis and capable of measuring the binding interaction between molecules. Our technique allows the direct characterization of target engagement of fluorescently labeled drugs, using fluorophores with a fluorescence lifetime larger than the rotational correlation time of the bound complex. Here we describe an optimized protocol for simultaneous dual-channel two-photon fluorescence anisotropy microscopy acquisition to perform drug-target measurements. We also provide the necessary software to implement stream processing to visualize images and to calculate quantitative parameters. The assembly and characterization part of the protocol can be implemented in 1 d. Sample preparation, characterization and imaging of drug binding can be completed in 2 d. Although currently adapted to an Olympus FV1000MPE microscope, the protocol can be extended to other commercial or custom-built microscopes.
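
    Because the protocol rests on computing anisotropy from a parallel and a perpendicular detection channel, a per-pixel calculation using the standard steady-state formula r = (I_par - G*I_perp) / (I_par + 2*G*I_perp) is sketched below (Python); the G-factor value and the example intensities are assumptions, not parameters from the published protocol.

        # Steady-state fluorescence anisotropy from two polarization channels;
        # g_factor corrects for the relative sensitivity of the two detectors.
        def anisotropy(i_par, i_perp, g_factor=1.0, eps=1e-12):
            iperp = g_factor * float(i_perp)
            return (float(i_par) - iperp) / (float(i_par) + 2.0 * iperp + eps)

        # A slowly rotating (bound) fluorophore depolarizes less and so shows
        # a higher anisotropy than a freely rotating (unbound) one.
        print(round(anisotropy(1200, 300), 3))   # -> 0.5
        print(round(anisotropy(1200, 600), 3))   # -> 0.25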

  11. Field Demonstration, Optimization, and Rigorous Validation of Peroxygen-Based ISCO for the Remediation of Contaminated Groundwater - CHP Stabilization Protocol

    DTIC Science & Technology

    2014-05-01

    propagations; CoCs, Contaminants of concern; GC, Gas chromatography; DNAPL, Dense nonaqueous phase liquid; ISCO, In situ chemical oxidation; HCA... used for the design and scale-up of air strippers, ion exchange systems, precipitation reactors, and many other treatment processes. Such treatability... studies provide definitive data on system dimensions and reagent dosages using linear or non-linear scale-up. Designing these processes without the

  12. Configuration of Wireless Cooperative/Sensor Networks

    DTIC Science & Technology

    2008-05-25

    WSN), the advantages of cooperation can be further exploited by optimally allocating the energy and bandwidth resources among users based on the... consumption and extend system lifetime [Sin98]. The implementation of a minimum energy routing protocol is discussed in [Dos02a, Dos02b]. An online...power consumption in the network given the required SER at the destination. For example, with source power Ps=20dB, the EP algorithm requires one relay

  13. Efficient protocols for Stirling heat engines at the micro-scale

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo; Schwieger, Kay

    2015-10-01

    We investigate the thermodynamic efficiency of sub-micro-scale Stirling heat engines operating under the conditions described by overdamped stochastic thermodynamics. We show how to construct optimal protocols such that at maximum power the efficiency attains for constant isotropic mobility the universal law η=2 ηC/(4-ηC) , where ηC is the efficiency of an ideal Carnot cycle. We show that these protocols are specified by the solution of an optimal mass transport problem. Such solution can be determined explicitly using well-known Monge-Ampère-Kantorovich reconstruction algorithms. Furthermore, we show that the same law describes the efficiency of heat engines operating at maximum work over short time periods. Finally, we illustrate the straightforward extension of these results to cases when the mobility is anisotropic and temperature dependent.
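
    Written out explicitly, the efficiency-at-maximum-power law quoted above reads as follows (LaTeX); the Carnot efficiency is stated in its textbook form and the small-ηC expansion is a routine series, both added here only as background.

        \eta_C = 1 - \frac{T_c}{T_h},
        \qquad
        \eta^{*} = \frac{2\,\eta_C}{4 - \eta_C}
                 = \frac{\eta_C}{2} + \frac{\eta_C^{2}}{8} + O\!\left(\eta_C^{3}\right),

    so that for a small temperature difference the engine operating at maximum power delivers roughly half the Carnot efficiency.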

  14. Feeding Protocols for Neonates With Hypoplastic Left Heart Syndrome: A Review.

    PubMed

    Jenkins, Erin

    2015-01-01

    Optimizing nutrition in neonates with hypoplastic left heart syndrome is essential, given the high rate of growth failure in this population. Infants with hypoplastic left heart syndrome are predisposed to nutritional deficiency as a result of their increased metabolic demand; however, early enteral feeding also increases the risk of serious gastrointestinal morbidity and mortality caused by poor intestinal perfusion. Consequently, providers have difficulty deciding when and how to safely feed these patients. A review of the literature found that implementation of a structured enteral feeding protocol may decrease the risk of gastrointestinal complications while also minimizing dependence on parenteral nutrition and decreasing length of hospital stay. As these studies were limited, further research is warranted to establish a best practice feeding protocol to decrease risk and optimize nutrition in this fragile population.

  15. Game-theoretic perspective of Ping-Pong protocol

    NASA Astrophysics Data System (ADS)

    Kaur, Hargeet; Kumar, Atul

    2018-01-01

    We analyse the Ping-Pong protocol from the point of view of a game. The analysis helps us understand the different strategies of a sender and an eavesdropper to gain the maximum payoff in the game. The study presented here characterizes strategies that lead to different Nash equilibria. We further demonstrate the condition for Pareto optimality depending on the parameters used in the game. Moreover, we also analysed the LM05 protocol and compared it with the Ping-Pong (PP) protocol from the point of view of a generic two-way QKD game with or without entanglement. Our results provide a deeper understanding of general two-way QKD protocols in terms of the security and payoffs of different stakeholders in the protocol.
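
    As a generic illustration of the kind of strategic analysis described here, the sketch below (Python) enumerates pure-strategy Nash equilibria of a two-player bimatrix game by checking mutual best responses; the 2x2 payoff matrix is an invented placeholder and not the actual sender/eavesdropper payoffs of the Ping-Pong game.

        # Pure-strategy Nash equilibria of a bimatrix game.
        # payoffs[r][c] = (payoff to the row player, payoff to the column player).
        def pure_nash(payoffs):
            rows, cols = len(payoffs), len(payoffs[0])
            equilibria = []
            for r in range(rows):
                for c in range(cols):
                    row_best = all(payoffs[r][c][0] >= payoffs[rr][c][0] for rr in range(rows))
                    col_best = all(payoffs[r][c][1] >= payoffs[r][cc][1] for cc in range(cols))
                    if row_best and col_best:
                        equilibria.append((r, c))
            return equilibria

        game = [[(3, 1), (0, 2)],      # placeholder payoffs
                [(2, 0), (1, 1)]]
        print(pure_nash(game))         # -> [(1, 1)] for this matrix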

  16. Optimizing the high-resolution manometry (HRM) study protocol.

    PubMed

    Patel, A; Ding, A; Mirza, F; Gyawali, C P

    2015-02-01

    Intolerance of the esophageal manometry catheter may prolong high-resolution manometry (HRM) studies and increase patient distress. We assessed the impact of obtaining the landmark phase at the end of the study when the patient has acclimatized to the HRM catheter. 366 patients (mean age 55.4 ± 0.8 years, 62.0% female) undergoing esophageal HRM over a 1-year period were studied. The standard protocol consisted of the landmark phase, 10 5 mL water swallows 20-30 s apart, and multiple rapid swallows where 4-6 2 mL swallows were administered in rapid succession. The modified protocol consisted of the landmark phase at the end of the study after test swallows. Study duration, technical characteristics, indications, and motor findings were compared between standard and modified protocols. Of the 366 patients, 89.6% underwent the standard protocol (study duration 12.9 ± 0.3 min). In 10.4% with poor catheter tolerance undergoing the modified protocol, study duration was significantly longer (15.6 ± 1.0 min, p = 0.004) despite similar duration of study maneuvers. Only elevated upper esophageal sphincter basal pressures at the beginning of the study segregated modified protocol patients. The 95th percentile time to landmark phase in the standard protocol patients was 6.1 min; as many as 31.4% of modified protocol patients could not obtain their first study maneuver within this period (p = 0.0003). Interpretation was not impacted by shifting the landmark phase to the end of the study. Modification of the HRM study protocol with the landmark phase obtained at the end of the study optimizes study duration without compromising quality. © 2014 John Wiley & Sons Ltd.

  17. An Experiment of GMPLS-Based Dispersion Compensation Control over In-Field Fibers

    NASA Astrophysics Data System (ADS)

    Seno, Shoichiro; Horiuchi, Eiichi; Yoshida, Sota; Sugihara, Takashi; Onohara, Kiyoshi; Kamei, Misato; Baba, Yoshimasa; Kubo, Kazuo; Mizuochi, Takashi

    As ROADMs (Reconfigurable Optical Add/Drop Multiplexers) are becoming widely used in metro/core networks, distributed control of wavelength paths by extended GMPLS (Generalized MultiProtocol Label Switching) protocols has attracted much attention. For the automatic establishment of an arbitrary wavelength path satisfying dynamic traffic demands over a ROADM or WXC (Wavelength Cross Connect)-based network, precise determination of chromatic dispersion over the path and optimized assignment of dispersion compensation capabilities at related nodes are essential. This paper reports an experiment over in-field fibers where GMPLS-based control was applied for the automatic discovery of chromatic dispersion, path computation, and wavelength path establishment with dynamic adjustment of variable dispersion compensation. The GMPLS-based control scheme, which the authors called GMPLS-Plus, extended GMPLS's distributed control architecture with attributes for automatic discovery, advertisement, and signaling of chromatic dispersion. In this experiment, wavelength paths with distances of 24km and 360km were successfully established and error-free data transmission was verified. The experiment also confirmed path restoration with dynamic compensation adjustment upon fiber failure.

  18. Efficient Mobility Management Signalling in Network Mobility Supported PMIPV6

    PubMed Central

    Jebaseeli Samuelraj, Ananthi; Jayapal, Sundararajan

    2015-01-01

    Proxy Mobile IPV6 (PMIPV6) is a network based mobility management protocol which supports node's mobility without the contribution from the respective mobile node. PMIPV6 is initially designed to support individual node mobility and it should be enhanced to support mobile network movement. NEMO-BSP is an existing protocol to support network mobility (NEMO) in PMIPV6 network. Due to the underlying differences in basic protocols, NEMO-BSP cannot be directly applied to PMIPV6 network. Mobility management signaling and data structures used for individual node's mobility should be modified to support group nodes' mobility management efficiently. Though a lot of research work is in progress to implement mobile network movement in PMIPV6, it is not yet standardized and each suffers with different shortcomings. This research work proposes modifications in NEMO-BSP and PMIPV6 to achieve NEMO support in PMIPV6. It mainly concentrates on optimizing the number and size of mobility signaling exchanged while mobile network or mobile network node changes its access point. PMID:26366431

  19. Using RSAT oligo-analysis and dyad-analysis tools to discover regulatory signals in nucleic sequences.

    PubMed

    Defrance, Matthieu; Janky, Rekin's; Sand, Olivier; van Helden, Jacques

    2008-01-01

    This protocol explains how to discover functional signals in genomic sequences by detecting over- or under-represented oligonucleotides (words) or spaced pairs thereof (dyads) with the Regulatory Sequence Analysis Tools (http://rsat.ulb.ac.be/rsat/). Two typical applications are presented: (i) predicting transcription factor-binding motifs in promoters of coregulated genes and (ii) discovering phylogenetic footprints in promoters of orthologous genes. The steps of this protocol include purging genomic sequences to discard redundant fragments, discovering over-represented patterns and assembling them to obtain degenerate motifs, scanning sequences and drawing feature maps. The main strength of the method is its statistical ground: the binomial significance provides an efficient control on the rate of false positives. In contrast with optimization-based pattern discovery algorithms, the method supports the detection of under- as well as over-represented motifs. Computation times vary from seconds (gene clusters) to minutes (whole genomes). The execution of the whole protocol should take approximately 1 h.
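
    The statistical core of the oligo-analysis step is a binomial test of word over-representation; a minimal version is sketched below (Python), where the word prior probability, the number of scanned positions and the observed count are made-up inputs, and the multiple-testing correction applied by RSAT's significance index is not shown.

        from math import comb

        def binomial_tail(k, n, p, tol=1e-15):
            """P(X >= k) for X ~ Binomial(n, p): over-representation p-value."""
            term = comb(n, k) * p**k * (1.0 - p)**(n - k)    # P(X = k)
            total, i = term, k
            while i < n and term > tol * total:
                term *= (n - i) / (i + 1) * p / (1.0 - p)    # P(X = i+1) from P(X = i)
                total += term
                i += 1
            return total

        # Example: a hexanucleotide with prior probability 1/4**6 per position,
        # observed 8 times among 5000 scanned promoter positions.
        n, p, k = 5000, 1.0 / 4096, 8
        print(f"expected ~{n * p:.2f}, observed {k}, P(X >= {k}) = {binomial_tail(k, n, p):.1e}")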

  20. Predicting thermal history a-priori for magnetic nanoparticle hyperthermia of internal carcinoma

    NASA Astrophysics Data System (ADS)

    Dhar, Purbarun; Sirisha Maganti, Lakshmi

    2017-08-01

    This article proposes a simple yet realistic method whereby a direct analytical expression can be derived for the temperature field within a tumour during magnetic nanoparticle hyperthermia. The approximate analytical expression for the thermal history within the tumour is derived based on the lumped capacitance approach and considers all therapy protocols and parameters. The present method is simple and provides an easy framework for estimating hyperthermia protocol parameters promptly. The model has been validated against several experimental reports on animal models such as mice/rabbit/hamster and human clinical trials. It has been observed that the model is able to accurately estimate the thermal history within the carcinoma during the hyperthermia therapy. The present approach may find application in a priori estimation of the thermal history in internal tumours for optimizing magnetic hyperthermia treatment protocols with respect to the ablation time, tumour size, magnetic drug concentration, field strength, field frequency, nanoparticle material and size, tumour location, and so on.
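
    A generic lumped-capacitance energy balance of the kind alluded to above takes the form sketched below (LaTeX); the symbols (tumour density ρ, volume V, specific heat c, nanoparticle heating power P, effective loss coefficient hA to surroundings at baseline temperature T_b) and the constant-parameter, uniform-temperature assumptions are illustrative and are not claimed to reproduce the authors' expression.

        \rho V c\,\frac{dT}{dt} = P - hA\,(T - T_b),
        \qquad
        T(t) = T_b + \frac{P}{hA}\left(1 - e^{-t/\tau}\right),
        \qquad
        \tau = \frac{\rho V c}{hA},

    assuming the tumour starts at the baseline temperature, T(0) = T_b; the closed form makes the dependence on dose (through P) and on tumour size (through V and the loss area) explicit.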

  1. Efficient Mobility Management Signalling in Network Mobility Supported PMIPV6.

    PubMed

    Samuelraj, Ananthi Jebaseeli; Jayapal, Sundararajan

    2015-01-01

    Proxy Mobile IPV6 (PMIPV6) is a network based mobility management protocol which supports node's mobility without the contribution from the respective mobile node. PMIPV6 is initially designed to support individual node mobility and it should be enhanced to support mobile network movement. NEMO-BSP is an existing protocol to support network mobility (NEMO) in PMIPV6 network. Due to the underlying differences in basic protocols, NEMO-BSP cannot be directly applied to PMIPV6 network. Mobility management signaling and data structures used for individual node's mobility should be modified to support group nodes' mobility management efficiently. Though a lot of research work is in progress to implement mobile network movement in PMIPV6, it is not yet standardized and each suffers with different shortcomings. This research work proposes modifications in NEMO-BSP and PMIPV6 to achieve NEMO support in PMIPV6. It mainly concentrates on optimizing the number and size of mobility signaling exchanged while mobile network or mobile network node changes its access point.

  2. An Umeclidinium membrane sensor; Two-step optimization strategy for improved responses.

    PubMed

    Yehia, Ali M; Monir, Hany H

    2017-09-01

    In the scientific context of membrane sensors and improved experimentation, we devised an experimentally designed protocol for sensor optimization. A two-step strategy was implemented for the analysis of umeclidinium bromide (UMEC), which is a novel quinuclidine-based muscarinic antagonist used for maintenance treatment of symptoms associated with chronic obstructive pulmonary disease. In the first place, membrane components were screened for the ideal ion exchanger, ionophore and plasticizer using three categorical factors at three levels in a Taguchi design. Secondly, experimentally designed optimization was followed in order to tune the sensor for the finest responses. Twelve experiments were randomly carried out in a continuous factor design. Nernstian response, detection limit and selectivity were assigned as responses in these designs. The optimized membrane sensor contained tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (0.44 wt%) and calix[6]arene (0.43 wt%) in 50.00% PVC plasticized with 49.13 wt% 2-nitrophenyl octyl ether. This sensor, along with an optimum concentration of inner filling solution (2×10⁻⁴ mol L⁻¹ UMEC) and 2 h of soaking time, attained the design objectives. The Nernstian response approached 59.7 mV/decade and the detection limit decreased by about two orders of magnitude (8×10⁻⁸ mol L⁻¹) through this optimization protocol. The proposed sensor was validated for UMEC determination in its linear range (3.16×10⁻⁷ to 1×10⁻³ mol L⁻¹) and challenged for selective discrimination of other congeners and inorganic cations. Results of INCRUSE ELLIPTA® inhalation powder analyses obtained from the proposed sensor and the manufacturer's UPLC were statistically compared. Moreover, the proposed sensor was successfully used for the determination of UMEC in plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Toward the Standardization of Mycological Examination of Sputum Samples in Cystic Fibrosis: Results from a French Multicenter Prospective Study.

    PubMed

    Coron, Noémie; Pihet, Marc; Fréalle, Emilie; Lemeille, Yolande; Pinel, Claudine; Pelloux, Hervé; Gargala, Gilles; Favennec, Loic; Accoceberry, Isabelle; Durand-Joly, Isabelle; Dalle, Frédéric; Huet, Frédéric; Fanton, Annlyse; Boldron, Amale; Loeuille, Guy-André; Domblides, Philippe; Coltey, Bérengère; Pin, Isabelle; Llerena, Catherine; Troussier, Françoise; Person, Christine; Marguet, Christophe; Wizla, Nathalie; Thumerelle, Caroline; Turck, Dominique; Bui, Stéphanie; Fayon, Michael; Duhamel, Alain; Prévotat, Anne; Wallaert, Benoit; Leroy, Sylvie; Bouchara, Jean-Philippe; Delhaes, Laurence

    2018-02-01

    Fungal respiratory colonization of cystic fibrosis (CF) patients emerges as a new concern; however, the heterogeneity of mycological protocols limits investigations. We first aimed at setting up an efficient standardized protocol for mycological analysis of CF sputa that was assessed during a prospective, multicenter study: "MucoFong" program (PHRC-06/1902). Sputa from 243 CF patients from seven centers in France were collected over a 15-month period and submitted to a standardized protocol based on 6 semi-selective media. After mucolytic pretreatment, sputa were plated in parallel on cycloheximide-enriched (ACT37), erythritol-enriched (ERY37), benomyl dichloran-rose bengal (BENO37) and chromogenic (CAN37) media incubated at 37 °C and on Sabouraud-chloramphenicol (SAB27) and erythritol-enriched (ERY27) media incubated at 20-27 °C. Each plate was checked twice a week during 3 weeks. Fungi were conventionally identified; time for detection of fungal growth was noted for each species. Fungal prevalences and media performances were assessed; an optimal combination of media was determined using the Chi-squared automatic interaction detector method. At least one fungal species was isolated from 81% of sputa. Candida albicans was the most prevalent species (58.8%), followed by Aspergillus fumigatus (35.4%). Cultivation on CAN37, SAB27, ACT37 and ERY27 during 16 days provided an optimal combination, detecting C. albicans, A. fumigatus, Scedosporium apiospermum complex and Exophiala spp. with sensitivities of 96.5, 98.8, 100 and 100%. Combination of these four culture media is recommended to ensure the growth of key fungal pathogens in CF respiratory specimens. The use of such consensual protocol is of major interest for merging results from future epidemiological studies.

  4. Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.

    PubMed

    Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat

    2013-05-21

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercial SOSLC protocol available so far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work is aimed at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short, commercially available LC columns. Fast, baseline separation of a mixture of 12 compounds containing phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of predictions is also discussed.

  5. Method optimization for fathead minnow (Pimephales promelas) liver S9 isolation

    EPA Science Inventory

    Standard protocols have been proposed to assess metabolic stability in rainbow trout liver S9 fractions. Using in vitro substrate depletion assays, in vitro intrinsic clearance rates can be calculated for a variety of study compounds. Existing protocols suggest potential adaptati...

  6. Practical considerations for optimizing cardiac computed tomography protocols for comprehensive acquisition prior to transcatheter aortic valve replacement.

    PubMed

    Khalique, Omar K; Pulerwitz, Todd C; Halliburton, Sandra S; Kodali, Susheel K; Hahn, Rebecca T; Nazif, Tamim M; Vahl, Torsten P; George, Isaac; Leon, Martin B; D'Souza, Belinda; Einstein, Andrew J

    2016-01-01

    Transcatheter aortic valve replacement (TAVR) is performed frequently in patients with severe, symptomatic aortic stenosis who are at high risk or inoperable for open surgical aortic valve replacement. Computed tomography angiography (CTA) has become the gold standard imaging modality for pre-TAVR cardiac anatomic and vascular access assessment. Traditionally, cardiac CTA has been most frequently used for assessment of coronary artery stenosis, and scanning protocols have generally been tailored for this purpose. Pre-TAVR CTA has different goals than coronary CTA and the high prevalence of chronic kidney disease in the TAVR patient population creates a particular need to optimize protocols for a reduction in iodinated contrast volume. This document reviews details which allow the physician to tailor CTA examinations to maximize image quality and minimize harm, while factoring in multiple patient and scanner variables which must be considered in customizing a pre-TAVR protocol. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  7. Experimental Optimal Single Qubit Purification in an NMR Quantum Information Processor

    PubMed Central

    Hou, Shi-Yao; Sheng, Yu-Bo; Feng, Guan-Ru; Long, Gui-Lu

    2014-01-01

    High-quality single qubits are the building blocks of quantum information processing, but they are vulnerable to environmental noise. To overcome noise, purification techniques, which generate qubits with higher purities from qubits with lower purities, have been proposed. Purifications have attracted much interest and been widely studied. However, the full experimental demonstration of an optimal single qubit purification protocol proposed by Cirac, Ekert and Macchiavello [Phys. Rev. Lett. 82, 4344 (1999), the CEM protocol] more than one and a half decades ago still remains an experimental challenge, as it requires more complicated networks and a higher level of precision control. In this work, we design an experimental scheme that realizes the CEM protocol with explicit symmetrization of the wave functions. The purification scheme was successfully implemented in a nuclear magnetic resonance quantum information processor. The experiment fully demonstrated the purification protocol, and showed that it is an effective way of protecting qubits against errors and decoherence. PMID:25358758

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo, E-mail: xinshih86029@gmail.com; Hui, Fei, E-mail: xinshih86029@gmail.com

    Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly through a sufficient number of packet exchanges, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
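
    The weighted-fusion idea can be illustrated with a generic inverse-variance combination of several offset estimates (Python); the offsets, their error variances and the independence assumption behind the fused-variance formula are textbook simplifications, not the covariance-based weights derived in the record above.

        # Fuse several estimates of the same clock offset, weighting each estimate
        # inversely to its error variance (weights normalised to sum to 1).
        def fuse(estimates, variances):
            inv = [1.0 / v for v in variances]
            total = sum(inv)
            fused = sum(w / total * e for w, e in zip(inv, estimates))
            fused_variance = 1.0 / total          # valid for independent errors
            return fused, fused_variance

        offsets = [12.4e-6, 11.9e-6, 12.8e-6]     # seconds (illustrative)
        variances = [4e-12, 1e-12, 9e-12]         # error variances (illustrative)
        print(fuse(offsets, variances))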

  9. A fiber-based quasi-continuous-wave quantum key distribution system

    PubMed Central

    Shen, Yong; Chen, Yan; Zou, Hongxin; Yuan, Jianmin

    2014-01-01

    We report a fiber-based quasi-continuous-wave (CW) quantum key distribution (QKD) system with continuous variables (CV). This system employs coherent light pulses and time multiplexing to maximally reduce cross talk in the fiber. A no-switching detection scheme is adopted to optimize the repetition rate. Information is encoded on the sideband of the pulsed coherent light to fully exploit the continuous-wave nature of the laser field. With this configuration, a high secret key rate can be achieved. For the 50 MHz detected bandwidth in our experiment, when the multidimensional reconciliation protocol is applied, a secret key rate of 187 kb/s can be achieved over 50 km of optical fiber against collective attacks, which have been shown to be asymptotically optimal. Moreover, recently studied loopholes have been fixed in our system. PMID:24691409

  10. Faithful Remote Information Concentration Based on the Optimal Universal 1→2 Telecloning of Arbitrary Two-Qubit States

    NASA Astrophysics Data System (ADS)

    Peng, Jia-Yin; Lei, Hong-Xuan; Mo, Zhi-Wen

    2014-05-01

    Previous protocols for remote quantum information concentration focused on the reverse process of quantum telecloning of single-qubit states. We here investigate the reverse process of optimal universal 1→2 telecloning of arbitrary two-qubit states. The aim of this telecloning is to distribute the quantum information of a group of two senders situated at two different locations to two groups of spatially separated receivers. Our scheme shows that the distributed quantum information can be remotely concentrated back to a group of two different receivers with unit probability by utilizing a maximally entangled four-particle cluster state and a four-particle GHZ state as the quantum channel.

  11. Optimization of image quality in pulmonary CT angiography with low dose of contrast material

    NASA Astrophysics Data System (ADS)

    Assi, Abed Al Nasser; Abu Arra, Ali

    2017-06-01

    Aim: The aim of this study was to compare objective image quality in pulmonary embolism (PE) patients between a conventional pulmonary CTA protocol and a novel acquisition protocol performed with an optimized radiation dose and a smaller amount of injected iodinated contrast medium. Materials and Methods: Sixty-four patients with suspected PE were examined using an angio-CT protocol. Patients were randomly assigned to two groups: group A (16 women and 16 men; mean age, 62 years; standard deviation, 16; range, 19-89 years), injected with 35-40 ml of contrast agent, and group B (16 women and 16 men; age range, 28-86 years), injected with 70-80 ml of contrast agent. Other scanning parameters were kept constant. Pulmonary vessel enhancement and image noise were quantified; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Subjective vessel contrast was assessed by two radiologists in consensus. Results: A total of 14 cases of PE (22%) were found among the evaluated subjects (nine in group A and five in group B). All PE cases were detected by the two readers. There was no significant difference in the size or location of the PEs between the two groups; the average image noise was 14 HU for group A and 19 HU for group B, a difference that was not statistically significant (p = 0.09). Overall, the SNR and CNR were slightly higher in group B (24.4 and 22.5, respectively) compared with group A (19.4 and 16.4, respectively), but these differences were not statistically significant (p = 0.71 and p = 0.35, respectively). Conclusion: Both pulmonary CTA protocols achieved similar image quality, while the new protocol used an optimized radiation dose and reduced the contrast volume by 50% compared with the conventional protocol.
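
    For reference, one common way of computing the image-quality indices quoted above from region-of-interest statistics is sketched below (Python); the Hounsfield-unit samples are invented and the exact ROI definitions used in the study may differ.

        from statistics import mean, stdev

        def snr(vessel_hu, noise_sd):
            """Signal-to-noise ratio: mean vessel attenuation over image noise."""
            return mean(vessel_hu) / noise_sd

        def cnr(vessel_hu, background_hu, noise_sd):
            """Contrast-to-noise ratio: vessel-minus-background contrast over noise."""
            return (mean(vessel_hu) - mean(background_hu)) / noise_sd

        vessel = [310, 325, 298, 317]        # HU samples in a pulmonary artery ROI (made up)
        muscle = [55, 60, 48, 52]            # HU samples in a chest-wall ROI (made up)
        noise = stdev([300, 320, 290, 330])  # SD in a homogeneous ROI as the noise estimate
        print(round(snr(vessel, noise), 1), round(cnr(vessel, muscle, noise), 1))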

  12. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

    Neuroimaging is a fast-developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently to or better than most of the existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
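
    The four CPM steps can be compressed into a small leave-one-out sketch (Python); the correlation threshold, the simple sum-score summarization, the single-predictor linear fit and the synthetic data are simplifications that follow the protocol's outline rather than reproduce its reference implementation.

        import random
        from statistics import mean

        def pearson(x, y):
            mx, my = mean(x), mean(y)
            num = sum((a - mx) * (b - my) for a, b in zip(x, y))
            den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
            return num / den if den else 0.0

        def fit_line(x, y):                      # ordinary least squares, one predictor
            mx, my = mean(x), mean(y)
            slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
            return slope, my - slope * mx

        random.seed(0)                           # synthetic "connectomes": 30 subjects x 50 edges
        edges = [[random.gauss(0, 1) for _ in range(50)] for _ in range(30)]
        behaviour = [e[0] * 2.0 + random.gauss(0, 0.5) for e in edges]

        predictions = []
        for test in range(30):                   # leave-one-out cross-validation
            train = [i for i in range(30) if i != test]
            # 1) feature selection: edges correlated with behaviour in the training folds
            selected = [j for j in range(50) if abs(pearson(
                [edges[i][j] for i in train], [behaviour[i] for i in train])) > 0.5]
            # 2) feature summarization: sum the selected edges per subject
            score = lambda i: sum(edges[i][j] for j in selected)
            # 3) model building on training subjects, then prediction for the held-out subject
            slope, intercept = fit_line([score(i) for i in train], [behaviour[i] for i in train])
            predictions.append(slope * score(test) + intercept)

        # 4) prediction performance (significance would be assessed by permutation testing)
        print("leave-one-out prediction r =", round(pearson(predictions, behaviour), 2))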

  13. A conceptual model for worksite intelligent physical exercise training--IPET--intervention for decreasing life style health risk indicators among employees: a randomized controlled trial.

    PubMed

    Sjøgaard, Gisela; Justesen, Just Bendix; Murray, Mike; Dalager, Tina; Søgaard, Karen

    2014-06-26

    Health promotion at the work site in terms of physical activity has proven positive effects but optimization of relevant exercise training protocols and implementation for high adherence are still scanty. The aim of this paper is to present a study protocol with a conceptual model for planning the optimal individually tailored physical exercise training for each worker based on an individual health check, existing guidelines and state-of-the-art sports science training recommendations in the broad categories of cardiorespiratory fitness, muscle strength in specific body parts, and functional training including balance training. The hypotheses of this research are that individually tailored worksite-based intelligent physical exercise training, IPET, among workers with inactive job categories will: 1) Improve cardiorespiratory fitness and/or individual health risk indicators, 2) Improve muscle strength and decrease musculoskeletal disorders, 3) Succeed in regular adherence to worksite and leisure physical activity training, and 4) Reduce sickness absence and productivity losses (presenteeism) in office workers. The present RCT study enrolled almost 400 employees with sedentary jobs in the private as well as public sectors. The training interventions last 2 years with measures at baseline as well as one and two years follow-up. If proven effective, the intelligent physical exercise training scheduled as well as the information for its practical implementation can provide meaningful scientifically based information for public health policy. ClinicalTrials.gov, number: NCT01366950.

  14. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons.

    PubMed

    Post, Harm; Penning, Renske; Fitzpatrick, Martin A; Garrigues, Luc B; Wu, W; MacGillavry, Harold D; Hoogenraad, Casper C; Heck, Albert J R; Altelaar, A F Maarten

    2017-02-03

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC-MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts, placing new demands on enrichment protocols to make them less labor-intensive, more sensitive, and less prone to variability. Here we assessed an automated enrichment protocol using Fe(III)-IMAC cartridges on an AssayMAP Bravo platform to meet these demands. The automated Fe(III)-IMAC-based enrichment workflow proved to be more effective when compared to a TiO2-based enrichment using the same platform and a manual Ti(IV)-IMAC-based enrichment workflow. As initial samples, a dilution series of both human HeLa cell and primary rat hippocampal neuron lysates was used, going down to 0.1 μg of peptide starting material. The optimized workflow proved to be efficient, sensitive, and reproducible, identifying, localizing, and quantifying thousands of phosphosites from just micrograms of starting material. To further test the automated workflow in genuine biological applications, we monitored EGF-induced signaling in hippocampal neurons, starting with only 200 000 primary cells, resulting in ∼50 μg of protein material. This revealed a comprehensive phosphoproteome, showing regulation of multiple members of the MAPK pathway and reduced phosphorylation status of two glutamate receptors involved in synaptic plasticity.

  15. Incorporating Aptamers in the Multiple Analyte Profiling Assays (xMAP): Detection of C-Reactive Protein.

    PubMed

    Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio

    2017-01-01

    Aptamers are short oligonucleotide sequences used in detection systems because of their high-affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages: they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple-Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and for an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.

  16. A conceptual model for worksite intelligent physical exercise training - IPET - intervention for decreasing life style health risk indicators among employees: a randomized controlled trial

    PubMed Central

    2014-01-01

    Background Health promotion at the work site in terms of physical activity has proven positive effects but optimization of relevant exercise training protocols and implementation for high adherence are still scanty. Methods/Design The aim of this paper is to present a study protocol with a conceptual model for planning the optimal individually tailored physical exercise training for each worker based on an individual health check, existing guidelines and state-of-the-art sports science training recommendations in the broad categories of cardiorespiratory fitness, muscle strength in specific body parts, and functional training including balance training. The hypotheses of this research are that individually tailored worksite-based intelligent physical exercise training, IPET, among workers with inactive job categories will: 1) Improve cardiorespiratory fitness and/or individual health risk indicators, 2) Improve muscle strength and decrease musculoskeletal disorders, 3) Succeed in regular adherence to worksite and leisure physical activity training, and 4) Reduce sickness absence and productivity losses (presenteeism) in office workers. The present RCT study enrolled almost 400 employees with sedentary jobs in the private as well as public sectors. The training interventions last 2 years with measures at baseline as well as one and two years follow-up. Discussion If proven effective, the intelligent physical exercise training scheduled as well as the information for its practical implementation can provide meaningful scientifically based information for public health policy. Trial Registration ClinicalTrials.gov, number: NCT01366950. PMID:24964869

  17. Protein structure modeling and refinement by global optimization in CASP12.

    PubMed

    Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2018-03-01

    For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
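
    The remark about bounded penalties can be made concrete with one common Lorentzian-style restraint form (Python); the functional form, weight and width below illustrate the idea that a wrong predicted contact can only ever cost a bounded amount, and are not the exact term used in the authors' force field.

        # An unbounded harmonic restraint versus a bounded Lorentzian-style restraint.
        def harmonic(r, r0, k=1.0):
            return k * (r - r0) ** 2                  # grows without bound

        def lorentzian(r, r0, k=1.0, sigma=1.0):
            d2 = (r - r0) ** 2
            return k * d2 / (sigma ** 2 + d2)         # saturates at k for large |r - r0|

        for r in (0.5, 2.0, 8.0, 30.0):               # deviations in arbitrary units
            print(f"|r - r0| = {r:>4}: harmonic = {harmonic(r, 0.0):7.2f}, "
                  f"lorentzian = {lorentzian(r, 0.0):.3f}")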

  18. BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation.

    PubMed

    Dudek, Christian-Alexander; Dannheim, Henning; Schomburg, Dietmar

    2017-01-01

    The prediction of gene functions is crucial for a large number of different life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not suitable for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable for larger datasets in an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have an increased complexity, increasing the chance to match more sequences, without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis. The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de.
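    As a rough illustration of the "extended pattern" idea, the sketch below widens a semi-conserved position into a character class of similar amino acids and matches the result as a regular expression. The similarity groups, pattern, and sequences are hypothetical; BrEPS itself derives its patterns from curated Swiss-Prot data.

```python
import re

# Hypothetical groups of amino acids treated as mutually similar (assumption,
# not the substitution groups actually used by BrEPS).
SIMILAR = {"L": "LIVM", "K": "KR", "S": "ST"}

def extend_pattern(consensus, semi_conserved):
    """Build a regex from a consensus sequence, widening the given
    semi-conserved positions to a class of similar residues."""
    parts = []
    for i, aa in enumerate(consensus):
        if i in semi_conserved and aa in SIMILAR:
            parts.append("[" + SIMILAR[aa] + "]")
        else:
            parts.append(re.escape(aa))
    return re.compile("".join(parts))

if __name__ == "__main__":
    pattern = extend_pattern("GDSLK", semi_conserved={2, 4})  # -> GD[ST]L[KR]
    for seq in ("MAGDSLKQ", "MAGDTLRQ", "MAGDALKQ"):  # toy sequences
        print(seq, "->", "match" if pattern.search(seq) else "no match")
```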

  19. BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation

    PubMed Central

    Schomburg, Dietmar

    2017-01-01

    The prediction of gene functions is crucial for a large number of different life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not suitable for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable for larger datasets in an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have an increased complexity, increasing the chance to match more sequences, without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis. The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de. PMID:28750104

  20. Developing an Anti-Xa-Based Anticoagulation Protocol for Patients with Percutaneous Ventricular Assist Devices.

    PubMed

    Sieg, Adam; Mardis, B Andrew; Mardis, Caitlin R; Huber, Michelle R; New, James P; Meadows, Holly B; Cook, Jennifer L; Toole, J Matthew; Uber, Walter E

    2015-01-01

    Because of the complexities associated with anticoagulation in temporary percutaneous ventricular assist device (pVAD) recipients, a lack of standardization exists in their management. This retrospective analysis evaluates current anticoagulation practices at a single center with the aim of identifying an optimal anticoagulation strategy and protocol. Patients were divided into two cohorts based on pVAD implanted (CentriMag (Thoratec; Pleasanton, CA) / TandemHeart (CardiacAssist; Pittsburgh, PA) or Impella (Abiomed, Danvers, MA)), with each group individually analyzed for bleeding and thrombotic complications. Patients in the CentriMag/TandemHeart cohort were subdivided based on the anticoagulation monitoring strategy (activated partial thromboplastin time (aPTT) or antifactor Xa unfractionated heparin (anti-Xa) values). In the CentriMag/TandemHeart cohort, there were five patients with anticoagulation titrated based on anti-Xa values; one patient developed a device thrombosis and a major bleed, whereas another patient experienced major bleeding. Eight patients received an Impella pVAD. Seven total major bleeds in three patients and no thrombotic events were detected. Based on distinct differences between the devices, anti-Xa values, and outcomes, two protocols were created to guide anticoagulation adjustments. However, anticoagulation in patients who require pVAD support is complex with constantly evolving anticoagulation goals. The ideal level of anticoagulation should be individually determined using several coagulation laboratory parameters in concert with hemodynamic changes in the patient's clinical status, the device, and the device cannulation.

  1. High-Performance CCSDS AOS Protocol Implementation in FPGA

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson

    2010-01-01

    The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding such as LDPC (low-density parity-check) and higher-layer link multiplexing protocols such as CCSDS Encapsulation Service, which is described in the following article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.
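    For reference, the sketch below packs the six-octet AOS transfer frame primary header (2-bit version, 8-bit spacecraft ID, 6-bit virtual channel ID, 24-bit frame count, 8-bit signaling field), which is the framing layer this article accelerates; it is an illustrative software model under our reading of the field layout, not the FPGA implementation described here, and the example values are hypothetical.

```python
def aos_primary_header(scid, vcid, frame_count, signaling=0):
    """Pack a 6-octet AOS transfer frame primary header.

    Assumed layout (48 bits): version '01' (2) | spacecraft ID (8) |
    virtual channel ID (6) | virtual channel frame count (24) | signaling (8).
    """
    if not (0 <= scid < 256 and 0 <= vcid < 64 and 0 <= frame_count < 2**24):
        raise ValueError("field out of range")
    version = 0b01  # AOS transfer frame version number
    bits = (version << 46) | (scid << 38) | (vcid << 32) \
           | (frame_count << 8) | (signaling & 0xFF)
    return bits.to_bytes(6, "big")

if __name__ == "__main__":
    hdr = aos_primary_header(scid=0x42, vcid=5, frame_count=1234)
    print(hdr.hex())  # hypothetical values, for illustration only
```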

  2. Gaussian error correction of quantum states in a correlated noisy channel.

    PubMed

    Lassen, Mikael; Berni, Adriano; Madsen, Lars S; Filip, Radim; Andersen, Ulrik L

    2013-11-01

    Noise is the main obstacle to the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels exhibit pronounced non-Markovian behavior, the proposed protocol will have immediate implications for improving the performance of various quantum information protocols.

  3. A Long-Distance RF-Powered Sensor Node with Adaptive Power Management for IoT Applications.

    PubMed

    Pizzotti, Matteo; Perilli, Luca; Del Prete, Massimo; Fabbri, Davide; Canegallo, Roberto; Dini, Michele; Masotti, Diego; Costanzo, Alessandra; Franchi Scarselli, Eleonora; Romani, Aldo

    2017-07-28

    We present a self-sustained battery-less multi-sensor platform with RF harvesting capability down to -17 dBm and implementing a standard DASH7 wireless communication interface. The node operates at distances up to 17 m from a 2 W UHF carrier. RF power transfer allows operation when common energy scavenging sources (e.g., sun, heat, etc.) are not available, while the DASH7 communication protocol makes it fully compatible with a standard IoT infrastructure. An optimized energy-harvesting module has been designed, including a rectifying antenna (rectenna) and an integrated nano-power DC/DC converter performing maximum-power-point-tracking (MPPT). A nonlinear/electromagnetic co-design procedure is adopted to design the rectenna, which is optimized to operate at ultra-low power levels. An ultra-low power microcontroller controls on-board sensors and wireless protocol, to adapt the power consumption to the available detected power by changing wake-up policies. As a result, adaptive behavior can be observed in the designed platform, to the extent that the transmission data rate is dynamically determined by RF power. Among the novel features of the system, we highlight the use of nano-power energy harvesting, the implementation of specific hardware/software wake-up policies, optimized algorithms for best sampling rate implementation, and adaptive behavior by the node based on the power received.
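    The adaptive behavior described above amounts to a policy that maps the detected RF input power to a wake-up and transmission schedule. The sketch below illustrates that idea with hypothetical power thresholds and sampling intervals; the real node's MPPT hardware, DASH7 stack, and firmware wake-up policies are considerably more involved.

```python
# Hypothetical thresholds (dBm) mapped to sampling intervals (seconds).
# The actual platform tunes its wake-up policies in firmware; values are made up.
POLICY = [(-5.0, 1), (-10.0, 10), (-15.0, 60), (-17.0, 300)]

def sampling_interval(rf_power_dbm):
    """Pick a sampling interval from the detected RF input power:
    more harvested power -> more frequent sensing and transmission."""
    for threshold, interval in POLICY:
        if rf_power_dbm >= threshold:
            return interval
    return None  # below the -17 dBm sensitivity: stay asleep and keep harvesting

if __name__ == "__main__":
    for p in (-3, -12, -16.5, -20):
        print(f"{p:6.1f} dBm -> interval: {sampling_interval(p)}")
```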

  4. A Long-Distance RF-Powered Sensor Node with Adaptive Power Management for IoT Applications

    PubMed Central

    del Prete, Massimo; Fabbri, Davide; Canegallo, Roberto; Dini, Michele; Costanzo, Alessandra

    2017-01-01

    We present a self-sustained battery-less multi-sensor platform with RF harvesting capability down to −17 dBm and implementing a standard DASH7 wireless communication interface. The node operates at distances up to 17 m from a 2 W UHF carrier. RF power transfer allows operation when common energy scavenging sources (e.g., sun, heat, etc.) are not available, while the DASH7 communication protocol makes it fully compatible with a standard IoT infrastructure. An optimized energy-harvesting module has been designed, including a rectifying antenna (rectenna) and an integrated nano-power DC/DC converter performing maximum-power-point-tracking (MPPT). A nonlinear/electromagnetic co-design procedure is adopted to design the rectenna, which is optimized to operate at ultra-low power levels. An ultra-low power microcontroller controls on-board sensors and wireless protocol, to adapt the power consumption to the available detected power by changing wake-up policies. As a result, adaptive behavior can be observed in the designed platform, to the extent that the transmission data rate is dynamically determined by RF power. Among the novel features of the system, we highlight the use of nano-power energy harvesting, the implementation of specific hardware/software wake-up policies, optimized algorithms for best sampling rate implementation, and adaptive behavior by the node based on the power received. PMID:28788084

  5. Ribozyme-based aminoglycoside switches of gene expression engineered by genetic selection in S. cerevisiae.

    PubMed

    Klauser, Benedikt; Atanasov, Janina; Siewert, Lena K; Hartig, Jörg S

    2015-05-15

    Systems for conditional gene expression are powerful tools in basic research as well as in biotechnology. For future applications, it is of great importance to engineer orthogonal genetic switches that function reliably in diverse contexts. RNA-based switches have the advantage that effector molecules interact immediately with regulatory modules inserted into the target RNAs, eliminating the need for transcription factors that usually mediate genetic control. Artificial riboswitches are characterized by their simplicity and small size accompanied by a high degree of modularity. We have recently reported a series of hammerhead ribozyme-based artificial riboswitches that allow for post-transcriptional regulation of gene expression via switching mRNA, tRNA, or rRNA functions. More widespread application has so far been hampered by moderate switching performance and a limited set of available effector molecules. Here, we report the re-engineering of hammerhead ribozymes in order to respond efficiently to aminoglycoside antibiotics. We first established an in vivo selection protocol in Saccharomyces cerevisiae that enabled us to search large sequence spaces for optimized switches. We then envisioned and characterized a novel strategy of attaching the aptamer to the ribozyme catalytic core, increasing the design options for rendering the ribozyme ligand-dependent. These innovations enabled the development of neomycin-dependent RNA modules that switch gene expression up to 25-fold. The presented aminoglycoside-responsive riboswitches belong to the best-performing RNA-based genetic regulators reported so far. The developed in vivo selection protocol should allow for sampling of large sequence spaces for engineering of further optimized riboswitches.

  6. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol.
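    For clarity, SNR and CNR of the kind reported above are typically computed from ROI measurements as mean attenuation divided by image noise, and contrast (vessel minus background) divided by noise, respectively. The sketch below uses those standard definitions with hypothetical HU values; it is not the exact measurement protocol of the study.

```python
def snr(vessel_hu, noise_sd):
    """Signal-to-noise ratio: mean vessel attenuation / image noise (SD)."""
    return vessel_hu / noise_sd

def cnr(vessel_hu, background_hu, noise_sd):
    """Contrast-to-noise ratio: (vessel - background) / image noise (SD)."""
    return (vessel_hu - background_hu) / noise_sd

if __name__ == "__main__":
    # Hypothetical ROI measurements (HU), not taken from the study.
    vessel, background, noise = 665.5, 60.0, 14.1
    print(f"SNR = {snr(vessel, noise):.1f}")
    print(f"CNR = {cnr(vessel, background, noise):.1f}")
```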

  7. Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review.

    PubMed

    Mihl, Casper; Maas, Monique; Turek, Jakub; Seehofnerova, Anna; Leijenaar, Ralph T H; Kok, Madeleine; Lobbes, Marc B I; Wildberger, Joachim E; Das, Marco

    2017-04-01

    Background  Various injection parameters influence enhancement of the coronary arteries. There is no consensus in the literature regarding the optimal contrast media (CM) injection protocol. The aim of this study is to provide an update on the effect of different CM injection parameters on coronary attenuation in coronary computed tomographic angiography (CCTA). Method  Studies published between January 2001 and May 2014 identified by PubMed, Embase and MEDLINE were evaluated. Using predefined inclusion criteria and a data extraction form, the content of each eligible study was assessed. Initially, 2551 potential studies were identified. After applying our criteria, 36 studies were found to be eligible. Studies were systematically assessed for quality based on the validated Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-II checklist. Results  Extracted data proved to be heterogeneous and often incomplete. The injection protocols and outcomes of the included publications were very diverse, and the results are difficult to compare. Based on the extracted data, it remains unclear which of the injection parameters is the most important determinant of adequate attenuation. It is likely that one parameter that combines multiple parameters (e.g., IDR) will be the most suitable determinant of coronary attenuation in CCTA protocols. Conclusion  Research should be directed towards determining the influence of different injection parameters and defining individualized optimal IDRs tailored to patient-related factors (ideally in large randomized trials). Key points  · This systematic review provides insight into decisive factors for coronary attenuation. · Different and contradicting outcomes are reported on coronary attenuation in CCTA. · One parameter combining multiple parameters (IDR) is likely decisive for coronary attenuation. · Research should aim at defining individualized optimal IDRs tailored to individual factors. · Future directions should be tailored towards the influence of different injection parameters. Citation Format · Mihl C, Maas M, Turek J et al. Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review. Fortschr Röntgenstr 2017; 189: 312 - 325. © Georg Thieme Verlag KG Stuttgart · New York.

  8. SU-E-J-206: Adaptive Radiotherapy for Gynecological Malignancies with MRI-Guided Cobalt-60 Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamb, J; Kamrava, M; Agazaryan, N

    Purpose: Even in the IMRT era, bowel toxicity and bone marrow irradiation remain concerns with pelvic irradiation. We examine the potential gain from an adaptive radiotherapy workflow for post-operative gynecological patients treated to pelvic targets including lymph nodes using MRI-guided Co-60 radiation therapy. Methods: An adaptive workflow was developed with the intent of minimizing time overhead of adaptive planning. A pilot study was performed using retrospectively analyzed images from one patient’s treatment. The patient’s treated plan was created using conventional PTV margins. Adaptive treatment was simulated on the patient’s first three fractions. The daily PTV was created by removing non-target tissue, including bone, muscle and bowel, from the initial PTV based on the daily MRI. The number of beams, beam angles, and optimization parameters were kept constant, and the plan was re-optimized. Normal tissue contours were not adjusted for the re-optimization, but were adjusted for evaluation of plan quality. Plan quality was evaluated based on PTV coverage and normal tissue DVH points per treatment protocol. Bowel was contoured as the entire bowel bag per protocol at our institution. Pelvic bone marrow was contoured per RTOG protocol 1203. Results: For the clinically treated plan, the volume of bowel receiving 45 Gy was 380 cc, 53% of the rectum received 30 Gy, 35% of the bladder received 45 Gy, and 28% of the pelvic bone marrow received 40 Gy. For the adaptive plans, the volume of bowel receiving 45 Gy was 175–201 cc, 55–62% of the rectum received 30 Gy, 21–27% of the bladder received 45 Gy, and 13–17% of the pelvic bone marrow received 40 Gy. Conclusion: Adaptive planning led to a large reduction of bowel and bone marrow dose in this pilot study. Further study of on-line adaptive techniques for the radiotherapy of pelvic lymph nodes is warranted. Dr. Low is a member of the scientific advisory board of ViewRay, Inc.

  9. Implementing clinical protocols in oncology: quality gaps and the learning curve phenomenon.

    PubMed

    Kedikoglou, Simos; Syrigos, Konstantinos; Skalkidis, Yannis; Ploiarchopoulou, Fani; Dessypris, Nick; Petridou, Eleni

    2005-08-01

    The quality improvement effort in clinical practice has focused mostly on 'performance quality', i.e. on the development of comprehensive, evidence-based guidelines. This study aimed to assess the 'conformance quality', i.e. the extent to which guidelines once developed are correctly and consistently applied. It also aimed to assess the existence of quality gaps in the treatment of certain patient segments as defined by age or gender and to investigate methods to improve overall conformance quality. A retrospective audit of clinical practice in a well-defined oncology setting was undertaken and the results compared to those obtained from prospectively applying an internally developed clinical protocol in the same setting and using specific tools to increase conformance quality. All indicators showed improvement after the implementation of the protocol that in many cases reached statistical significance, while in the entire cohort advanced age was associated (although not significantly) with sub-optimal delivery of care. A 'learning curve' phenomenon in the implementation of quality initiatives was detected, with all indicators improving substantially in the second part of the prospective study. Clinicians should pay separate attention to the implementation of chosen protocols and employ specific tools to increase conformance quality in patient care.

  10. Open tubular capillary columns with basic templates made by the generalized preparation protocol in capillary electrochromatography chiral separation and template structural effects on chiral separation capability.

    PubMed

    Zaidi, Shabi Abbas; Lee, Seung Mi; Cheong, Won Jo

    2011-03-04

    Some open tubular (OT) molecularly imprinted polymer (MIP) silica capillary columns have been prepared using atenolol, sulpiride, methyl benzylamine (MBA) and (1-naphthyl)-ethylamine (NEA) as templates by the pre-established generalized preparation protocol. The four MIP thin layers of different templates showed quite different morphologies. The racemic selectivity of each MIP column for the template enantiomers was optimized by changing eluent composition and pH. The template structural effects on chiral separation performance have been examined. This work verifies the versatility of the generalized preparation protocol for OT-MIP silica capillary columns by extending its boundary toward templates with basic functional group moieties. This study is the very first report to demonstrate a generalized MIP preparation protocol that is valid for both acidic and basic templates. The chiral separation performances of atenolol and sulpiride by the MIPs of this study were found to be better than or comparable to those of atenolol and sulpiride obtained by non-MIP separation techniques and those of some basic template enantiomers obtained by MIP-based techniques. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. A microfluidic microprocessor: controlling biomimetic containers and cells using hybrid integrated circuit/microfluidic chips.

    PubMed

    Issadore, David; Franke, Thomas; Brown, Keith A; Westervelt, Robert M

    2010-11-07

    We present an integrated platform for performing biological and chemical experiments on a chip based on standard CMOS technology. We have developed a hybrid integrated circuit (IC)/microfluidic chip that can simultaneously control thousands of living cells and pL volumes of fluid, enabling a wide variety of chemical and biological tasks. Taking inspiration from cellular biology, phospholipid bilayer vesicles are used as robust picolitre containers for reagents on the chip. The hybrid chip can be programmed to trap, move, and porate individual living cells and vesicles and fuse and deform vesicles using electric fields. The IC spatially patterns electric fields in a microfluidic chamber using 128 × 256 (32,768) 11 × 11 μm² metal pixels, each of which can be individually driven with a radio frequency (RF) voltage. The chip's basic functions can be combined in series to perform complex biological and chemical tasks and can be performed in parallel on the chip's many pixels for high-throughput operations. The hybrid chip operates in two distinct modes, defined by the frequency of the RF voltage applied to the pixels: voltages at MHz frequencies are used to trap, move, and deform objects using dielectrophoresis, and voltages at frequencies below 1 kHz are used for electroporation and electrofusion. This work represents an important step towards miniaturizing the complex chemical and biological experiments used for diagnostics and research onto automated and inexpensive chips.

  12. Characterization of C-PDMS electrodes for electrokinetic applications in microfluidic systems

    NASA Astrophysics Data System (ADS)

    Deman, A.-L.; Brun, M.; Quatresous, M.; Chateaux, J.-F.; Frenea-Robin, M.; Haddour, N.; Semet, V.; Ferrigno, R.

    2011-09-01

    This paper reports on the integration of thick carbon-polydimethylsiloxane (C-PDMS) electrodes in microfluidic systems for electrokinetic operations. The C-PDMS material, obtained by mixing carbon nanopowder and PDMS, preserves PDMS processing properties such as O2 plasma activation and soft-lithography patternability in thick or 3D electrodes. Conductivity on the order of 10 S m-1 was reached for a carbon concentration of 25 wt%. To evaluate the adhesion between PDMS and C-PDMS, we prepared bi-material strips and carried out a manual pull test. The cohesion and robustness of C-PDMS were also evaluated by applying a large range of electric field conditions from dc to ac (300 kHz). No damage to the electrodes or release of carbon was noticed. The use of such a material for electrokinetic manipulation was validated on polystyrene particles and cells. Here, we demonstrate that C-PDMS seems to be a valuable technological solution for electrokinetics in microfluidics, and particularly for biological applications such as cell electrofusion, lysis and trapping, which are favored by uniform lateral electric fields across the microchannel section.

  13. Electrostatic bio-manipulation for the modification of cellular functions

    NASA Astrophysics Data System (ADS)

    Washizu, Masao

    2013-03-01

    The use of electrostatic field effects, including field-induced reversible breakdown of the membrane and dielectrophoresis (DEP), in microfabricated structures is investigated. With the use of a field constriction created by a micro-orifice whose diameter is smaller than the cells, a controlled magnitude of pulsed voltage can be applied across the cell membrane regardless of the cell size, shape or orientation. As a result, the breakdown occurs reproducibly and with minimal invasiveness. The breakdown is used for two purposes: electroporation, by which foreign substances can be fed into cells, and electrofusion, which creates a genetic and/or cytoplasmic mixture of two cells. When a GFP plasmid was fed into MSC cells, gene expression started within 2 hours and was eventually observed in more than 50% of the cells. For cell fusion, yields of several tens of percent are achieved for most cell types, with colony formation rates of a few percent. Timing-controlled feeding of foreign substances or mixing of cellular contents, with high yield and low invasiveness, is expected to bring about a new technology for both genetic and epigenetic modification of cellular functions, in fields such as regenerative medicine.

  14. What Can Reinforcement Learning Teach Us About Non-Equilibrium Quantum Dynamics

    NASA Astrophysics Data System (ADS)

    Bukov, Marin; Day, Alexandre; Sels, Dries; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj

    Equilibrium thermodynamics and statistical physics are the building blocks of modern science and technology. Yet, our understanding of thermodynamic processes away from equilibrium is largely missing. In this talk, I will explore what artificial intelligence can teach us about the complex behaviour of non-equilibrium systems. Specifically, I will discuss the problem of finding optimal drive protocols to prepare a desired target state in quantum mechanical systems by applying ideas from Reinforcement Learning [one can think of Reinforcement Learning as the study of how an agent (e.g., a robot) can learn and perfect a given policy through interactions with an environment]. The driving protocols learnt by our agent suggest that the non-equilibrium world features possibilities that easily defy intuition based on equilibrium physics.
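    As a toy illustration of this idea, the sketch below uses tabular Q-learning to search over bang-bang drive protocols for a single qubit, rewarding the fidelity of the final state with a target state. The Hamiltonian, field values, horizon, and hyperparameters are all illustrative assumptions and are far simpler than the systems studied in the talk.

```python
import random
from collections import defaultdict

import numpy as np
from scipy.linalg import expm

# --- Toy single-qubit control problem (illustrative assumptions) ------------
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
DT, STEPS = 0.15, 8
FIELDS = (-4.0, 4.0)                       # bang-bang control values
# Pre-compute the two propagators exp(-i H(h) dt) with H(h) = -sz - h*sx.
PROPS = {h: expm(-1j * (-SZ - h * SX) * DT) for h in FIELDS}
PSI0 = np.array([1, 0], dtype=complex)     # start in |0>
TARGET = np.array([0, 1], dtype=complex)   # try to reach |1>

def fidelity(protocol):
    """Final-state fidelity |<target|psi(T)>|^2 for a sequence of fields."""
    psi = PSI0
    for h in protocol:
        psi = PROPS[h] @ psi
    return abs(np.vdot(TARGET, psi)) ** 2

# --- Tabular Q-learning over protocol prefixes ------------------------------
Q = defaultdict(float)                     # key: (prefix_of_actions, action)
ALPHA, EPS = 0.2, 0.3

def greedy(prefix):
    return max(FIELDS, key=lambda a: Q[(prefix, a)])

for episode in range(3000):
    prefix = ()
    for step in range(STEPS):
        a = random.choice(FIELDS) if random.random() < EPS else greedy(prefix)
        nxt = prefix + (a,)
        # Reward only at the end of the protocol; no discounting.
        if step == STEPS - 1:
            target_value = fidelity(nxt)
        else:
            target_value = max(Q[(nxt, b)] for b in FIELDS)
        Q[(prefix, a)] += ALPHA * (target_value - Q[(prefix, a)])
        prefix = nxt

best = ()
for _ in range(STEPS):
    best = best + (greedy(best),)
print("learned protocol:", best, " fidelity: %.3f" % fidelity(best))
```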

  15. Evaluating Fumonisin Gene Expression in Fusarium verticillioides.

    PubMed

    Scala, Valeria; Visentin, Ivan; Cardinale, Francesca

    2017-01-01

    Transcript levels of key genes in a biosynthetic pathway are often taken as a proxy for metabolite production. This is the case for FUM1, which encodes the first dedicated enzyme in the metabolic pathway leading to the production of the mycotoxins fumonisins by fungal species belonging to the genus Fusarium. FUM1 expression can be quantified by different methods; here, we detail a protocol based on quantitative reverse transcriptase polymerase chain reaction (RT-qPCR), by which relative or absolute transcript abundance can be estimated in Fusaria grown in vitro or in planta. As commercial kits for RNA extraction and cDNA synthesis are very seldom optimized for fungal samples, we developed a protocol tailored to these organisms, which stands alone but can also be easily integrated with specific commercially available reagents and kits.
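    Relative quantification from RT-qPCR Ct values is commonly done with the 2^-ΔΔCt (Livak) method, which the sketch below implements; the reference gene and the Ct values are hypothetical and do not come from the chapter.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt = Ct(target) - Ct(reference gene); ddCt = dCt(sample) - dCt(control);
    fold change = 2 ** -ddCt. Assumes roughly 100% amplification efficiency.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

if __name__ == "__main__":
    # Hypothetical Ct values: FUM1 vs. an assumed reference gene, comparing a
    # toxin-inducing condition against a control condition.
    print(f"FUM1 fold change: {fold_change(22.1, 18.0, 26.3, 18.2):.1f}x")
```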

  16. Synthesis of Stable Citrate-Capped Silver Nanoprisms.

    PubMed

    Haber, Jason; Sokolov, Konstantin

    2017-10-10

    Citrate-stabilized silver nanoprisms (AgNPrs) can be easily functionalized using well-developed thiol-based surface chemistry, which is an important requirement for biosensor applications utilizing localized surface plasmon resonance (LSPR) and surface-enhanced Raman scattering (SERS). Unfortunately, currently available protocols for synthesis of citrate-coated AgNPrs do not produce stable nanoparticles, thus limiting their usefulness in biosensing applications. Here we address this problem by carrying out a systematic study of citrate-stabilized, peroxide-based synthesis of AgNPrs to optimize reaction conditions for production of stable and reproducible nanoprisms. Our analysis showed that the concentration of the secondary reducing agent, L-ascorbic acid, is critical to AgNPr stability. Furthermore, we demonstrated that optimization of other synthesis conditions such as stabilizer concentration, rate of silver nitrate addition, and seed dilution results in highly stable nanoprisms with narrow absorbance peaks ranging from 450 nm into the near-IR. In addition, the optimized reaction conditions can be used to produce AgNPrs in a one-pot synthesis instead of a previously described two-step reaction. The resulting nanoprisms can readily interact with thiols for easy surface functionalization. These studies provide an optimized set of parameters for precise control of citrate-stabilized AgNPr synthesis for biomedical applications.

  17. Highly Efficient Training, Refinement, and Validation of a Knowledge-based Planning Quality-Control System for Radiation Therapy Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nan; Carmona, Ruben; Sirak, Igor

    Purpose: To demonstrate an efficient method for training and validation of a knowledge-based planning (KBP) system as a radiation therapy clinical trial plan quality-control system. Methods and Materials: We analyzed 86 patients with stage IB through IVA cervical cancer treated with intensity modulated radiation therapy at 2 institutions according to the standards of the INTERTECC (International Evaluation of Radiotherapy Technology Effectiveness in Cervical Cancer, National Clinical Trials Network identifier: 01554397) protocol. The protocol used a planning target volume and 2 primary organs at risk: pelvic bone marrow (PBM) and bowel. Secondary organs at risk were rectum and bladder. Initial unfiltered dose-volume histogram (DVH) estimation models were trained using all 86 plans. Refined training sets were created by removing sub-optimal plans from the unfiltered sample, and DVH estimation models were constructed by identifying 30 of 86 plans emphasizing PBM sparing (comparing protocol-specified dosimetric cutpoints V10 (percentage volume of PBM receiving at least 10 Gy) and V20 (percentage volume of PBM receiving at least 20 Gy) with unfiltered predictions) and another 30 of 86 plans emphasizing bowel sparing (comparing V40 (absolute volume of bowel receiving at least 40 Gy) and V45 (absolute volume of bowel receiving at least 45 Gy), 9 in common with the PBM set). To obtain deliverable KBP plans, refined models must inform patient-specific optimization objectives and/or priorities (an auto-planning “routine”). Four candidate routines emphasizing different tradeoffs were composed, and a script was developed to automatically re-plan multiple patients with each routine. After selection of the routine that best met protocol objectives in the 51-patient training sample (KBP-FINAL), protocol-specific DVH metrics and normal tissue complication probability were compared for original versus KBP-FINAL plans across the 35-patient validation set. Paired t tests were used to test differences between planning sets. Results: KBP-FINAL plans outperformed manual planning across the validation set in all protocol-specific DVH cutpoints. The mean normal tissue complication probability for gastrointestinal toxicity was lower for KBP-FINAL versus validation-set plans (48.7% vs 53.8%, P<.001). Similarly, the estimated mean white blood cell count nadir was higher (2.77 vs 2.49 k/mL, P<.001) with KBP-FINAL plans, indicating a lowered probability of hematologic toxicity. Conclusions: This work demonstrates that a KBP system can be efficiently trained and refined for use in radiation therapy clinical trials with minimal effort. This patient-specific plan quality control resulted in improvements on protocol-specific dosimetric endpoints.
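    Cutpoints such as V10 or V40 summarize a dose distribution as the volume receiving at least a given dose. The sketch below computes them from a per-voxel dose array under the simplifying assumption of uniform voxel volume; the dose arrays are randomly generated placeholders, not trial data.

```python
import numpy as np

def v_at_least(dose_gy, threshold_gy, voxel_volume_cc=None):
    """DVH cutpoint: volume receiving at least `threshold_gy`.

    Returns the percentage of the structure volume if `voxel_volume_cc` is
    None, otherwise the absolute volume in cc (assumes uniform voxel volume).
    """
    dose_gy = np.asarray(dose_gy, dtype=float)
    hits = np.count_nonzero(dose_gy >= threshold_gy)
    if voxel_volume_cc is None:
        return 100.0 * hits / dose_gy.size
    return hits * voxel_volume_cc

if __name__ == "__main__":
    # Hypothetical per-voxel doses (Gy) for two structures.
    rng = np.random.default_rng(0)
    pbm_dose = rng.uniform(0, 50, size=10_000)
    bowel_dose = rng.uniform(0, 55, size=8_000)
    print(f"PBM V10 = {v_at_least(pbm_dose, 10):.1f}%")
    print(f"PBM V20 = {v_at_least(pbm_dose, 20):.1f}%")
    print(f"Bowel V40 = {v_at_least(bowel_dose, 40, voxel_volume_cc=0.05):.0f} cc")
```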

  18. On the optimality of individual entangling-probe attacks against BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Herbauts, I. M.; Bettelli, S.; Hübel, H.; Peev, M.

    2008-02-01

    Some MIT researchers [Phys. Rev. A 75, 042327 (2007)] have recently claimed that their implementation of the Slutsky-Brandt attack [Phys. Rev. A 57, 2383 (1998); Phys. Rev. A 71, 042312 (2005)] on the BB84 quantum-key-distribution (QKD) protocol puts the security of this protocol “to the test” by simulating “the most powerful individual-photon attack” [Phys. Rev. A 73, 012315 (2006)]. A related unfortunate news feature by a scientific journal [G. Brumfiel, Quantum cryptography is hacked, News @ Nature (April 2007); Nature 447, 372 (2007)] has spurred some concern in the QKD community and among the general public by misinterpreting the implications of this work. The present article proves the existence of a stronger individual attack on QKD protocols with encrypted error correction, for which tight bounds are shown, and clarifies why the claims of the news feature incorrectly suggest a contradiction with the established “old-style” theory of BB84 individual attacks. The full implementation of a quantum cryptographic protocol includes a reconciliation and a privacy-amplification stage, whose choice alters in general both the maximum extractable secret and the optimal eavesdropping attack. The authors of [Phys. Rev. A 75, 042327 (2007)] are concerned only with the error-free part of the so-called sifted string, and do not consider faulty bits, which, in the version of their protocol, are discarded. When using the provably superior reconciliation approach of encrypted error correction (instead of error discard), the Slutsky-Brandt attack is no longer optimal and does not “threaten” the security bound derived by Lütkenhaus [Phys. Rev. A 59, 3301 (1999)]. It is shown that the method of Slutsky and collaborators [Phys. Rev. A 57, 2383 (1998)] can be adapted to reconciliation with error correction, and that the optimal entangling probe can be explicitly found. Moreover, this attack fills the Lütkenhaus bound, proving that it is tight (a fact that was not previously known).

  19. A Spectrophotometric Assay Optimizing Conditions for Pepsin Activity.

    ERIC Educational Resources Information Center

    Harding, Ethelynda E.; Kimsey, R. Scott

    1998-01-01

    Describes a laboratory protocol optimizing the conditions for the assay of pepsin activity using the Coomassie Blue dye-binding assay of protein concentration. The dye binds to basic and aromatic amino acid residues through strong, noncovalent interactions. (DDR)
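    Dye-binding assays like the one described are read against a protein standard curve: absorbance is fit as a linear function of concentration, and unknowns are interpolated from the fit. The sketch below does that with hypothetical absorbance readings; the actual protocol and values are in the cited article.

```python
import numpy as np

# Hypothetical standard curve: protein concentration (mg/mL) vs. absorbance at 595 nm.
standards_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
standards_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.45, 0.55])

# Linear least-squares fit: A = slope * C + intercept.
slope, intercept = np.polyfit(standards_conc, standards_abs, deg=1)

def protein_concentration(absorbance):
    """Interpolate an unknown's concentration from the standard curve."""
    return (absorbance - intercept) / slope

if __name__ == "__main__":
    print(f"slope={slope:.3f}, intercept={intercept:.3f}")
    print(f"unknown at A595=0.30 -> {protein_concentration(0.30):.2f} mg/mL")
```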

  20. Security Protocol Verification and Optimization by Epistemic Model Checking

    DTIC Science & Technology

    2010-11-05

    Three cryptographers are sitting down to dinner at their favourite restaurant. Their waiter informs them that arrangements have been made with the... Unfortunately, the protocol cannot be expected to satisfy this: suppose that all agents manage to broadcast their message and all messages have the
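    The scenario truncated above is the classic dining cryptographers protocol, in which pairwise shared coin flips let the group learn whether one of them paid without revealing who. The sketch below simulates one round of that standard protocol; it is a generic illustration, not the modeled version analyzed in the report.

```python
import secrets

def dining_cryptographers(paid_by=None, n=3):
    """One round of the dining cryptographers protocol.

    Each adjacent pair shares a secret coin; each participant announces the
    XOR of their two coins, flipped if they paid. The XOR of all announcements
    is 1 iff some participant (rather than the outside party) paid.
    """
    coins = [secrets.randbits(1) for _ in range(n)]  # coin i shared by i and i+1
    announcements = []
    for i in range(n):
        bit = coins[i] ^ coins[(i - 1) % n]
        if paid_by == i:
            bit ^= 1
        announcements.append(bit)
    someone_paid = 0
    for bit in announcements:
        someone_paid ^= bit
    return announcements, bool(someone_paid)

if __name__ == "__main__":
    print(dining_cryptographers(paid_by=1))     # -> (..., True)
    print(dining_cryptographers(paid_by=None))  # -> (..., False)
```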
