Sample records for scalable delay based

  1. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    probabilistic measurement, so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single... Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. To meet... AFRL-AFOSR-UK-TR-2016-0007 Scalable Quantum Networks for Distributed Computing and Sensing Ian Walmsley THE UNIVERSITY OF OXFORD Final Report 04/01

  2. DISP: Optimizations towards Scalable MPI Startup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Huansong; Pophale, Swaroop S; Gorentla Venkata, Manjunath

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on the Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.
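
    The delayed-initialization and module-sharing ideas generalize beyond MPI. Below is a minimal, hypothetical Python sketch (not the Open MPI code; all names are illustrative) of the two patterns: a communicator builds its collective module only on first use, and structurally identical modules are shared through a cache instead of being rebuilt per communicator.

    ```python
    # Hypothetical sketch of DISP-style Delayed Initialization and Module
    # Sharing; class and cache names are illustrative, not Open MPI internals.

    _module_cache = {}  # modules shared across communicators, keyed by topology

    class CollectiveModule:
        def __init__(self, topology):
            # Expensive setup (topology discovery, buffer allocation) goes here.
            self.topology = topology

    class Communicator:
        def __init__(self, topology_signature):
            self.signature = topology_signature
            self._module = None  # delayed: nothing is built at startup

        @property
        def module(self):
            # First collective call triggers setup (Delayed Initialization);
            # identical topologies reuse one module (Module Sharing).
            if self._module is None:
                if self.signature not in _module_cache:
                    _module_cache[self.signature] = CollectiveModule(self.signature)
                self._module = _module_cache[self.signature]
            return self._module

    comms = [Communicator("ring-4096") for _ in range(8)]
    comms[0].module                            # pays the setup cost once
    assert comms[1].module is comms[0].module  # shared, no extra memory
    ```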

  3. A generic interface to reduce the efficiency-stability-cost gap of perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Hou, Yi; Du, Xiaoyan; Scheiner, Simon; McMeekin, David P.; Wang, Zhiping; Li, Ning; Killian, Manuela S.; Chen, Haiwei; Richter, Moses; Levchuk, Ievgen; Schrenker, Nadine; Spiecker, Erdmann; Stubhan, Tobias; Luechinger, Norman A.; Hirsch, Andreas; Schmuki, Patrik; Steinrück, Hans-Peter; Fink, Rainer H.; Halik, Marcus; Snaith, Henry J.; Brabec, Christoph J.

    2017-12-01

    A major bottleneck delaying the further commercialization of thin-film solar cells based on hybrid organohalide lead perovskites is interface loss in state-of-the-art devices. We present a generic interface architecture that combines solution-processed, reliable, and cost-efficient hole-transporting materials without compromising efficiency, stability, or scalability of perovskite solar cells. Tantalum-doped tungsten oxide (Ta-WOx)/conjugated polymer multilayers offer a surprisingly small interface barrier and form quasi-ohmic contacts universally with various scalable conjugated polymers. In a simple device with regular planar architecture and a self-assembled monolayer, Ta-WOx-doped interface-based perovskite solar cells achieve maximum efficiencies of 21.2% and offer more than 1000 hours of light stability. By eliminating additional ionic dopants, these findings open up the entire class of organics as scalable hole-transporting materials for perovskite solar cells.

  4. The Building of Multimedia Communications Network based on Session Initiation Protocol

    NASA Astrophysics Data System (ADS)

    Yuexiao, Han; Yanfu, Zhang

    In this paper, we present a novel design for a distributed multimedia communications network based on the Session Initiation Protocol. We introduce its distribution tactic, flow procedure, and particular structure, and analyze the scalability, stability, robustness, extensibility, and transmission delay of this architecture. Finally, the results show that our framework is suitable for very-large-scale communications.

  5. Multi-Center Traffic Management Advisor Operational Field Test Results

    NASA Technical Reports Server (NTRS)

    Farley, Todd; Landry, Steven J.; Hoang, Ty; Nickelson, Monicarol; Levin, Kerry M.; Rowe, Dennis W.

    2005-01-01

    The Multi-Center Traffic Management Advisor (McTMA) is a research prototype system which seeks to bring time-based metering into the mainstream of air traffic control (ATC) operations. Time-based metering is an efficient alternative to traditional air traffic management techniques such as distance-based spacing (miles-in-trail spacing) and managed arrival reservoirs (airborne holding). While time-based metering has demonstrated significant benefit in terms of arrival throughput and arrival delay, its use to date has been limited to arrival operations at just nine airports nationally. Wide-scale adoption of time-based metering has been hampered, in part, by the limited scalability of metering automation. In order to realize the full spectrum of efficiency benefits possible with time-based metering, a much more modular, scalable time-based metering capability is required. With its distributed metering architecture, multi-center TMA offers such a capability.
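
    As a back-of-the-envelope illustration of the difference between distance-based and time-based spacing (my own arithmetic, not from the McTMA work), the sketch below converts a miles-in-trail restriction into the equivalent time interval at a given ground speed:

    ```python
    # Illustrative only: convert miles-in-trail spacing to time-based spacing.
    def spacing_seconds(miles_in_trail: float, ground_speed_kt: float) -> float:
        """Time interval (s) equivalent to a distance spacing at a given speed."""
        hours = miles_in_trail / ground_speed_kt  # nautical miles / knots
        return hours * 3600.0

    # 10 NM in trail at 240 kt is a 150 s interval; at 480 kt it is only 75 s,
    # which is why fixed-distance spacing over-restricts fast arrival streams.
    print(spacing_seconds(10, 240), spacing_seconds(10, 480))
    ```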

  6. Single-photon imager based on a superconducting nanowire delay line

    NASA Astrophysics Data System (ADS)

    Zhao, Qing-Yuan; Zhu, Di; Calandri, Niccolò; Dane, Andrew E.; McCaughan, Adam N.; Bellei, Francesco; Wang, Hao-Zhu; Santavicca, Daniel F.; Berggren, Karl K.

    2017-03-01

    Detecting spatial and temporal information of individual photons is critical to applications in spectroscopy, communication, biological imaging, astronomical observation and quantum-information processing. Here we demonstrate a scalable single-photon imager using a single continuous superconducting nanowire that functions not only as a single-photon detector but also as an efficient microwave delay line. In this scheme, photon-detection pulses are guided along the nanowire, enabling readout of the position and time of photon-absorption events from the arrival times of the detection pulses at the nanowire's two ends. Experimentally, we slowed the velocity of pulse propagation to ∼2% of the speed of light in free space. In a 19.7 mm long nanowire that meandered across an area of 286 × 193 μm², we were able to resolve ∼590 effective pixels with a temporal resolution of 50 ps (full width at half maximum). The nanowire imager presents a scalable approach to high-resolution photon imaging in space and time.
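
    The position and time readout follows from the two arrival times. If a photon is absorbed at distance x from end A at time t0, the counter-propagating pulses arrive at t1 = t0 + x/v and t2 = t0 + (L - x)/v, which inverts directly. A short sketch using the parameters stated in the abstract (v ≈ 0.02c, L = 19.7 mm); the inversion is elementary and mine, not code from the paper:

    ```python
    # Recover photon position and absorption time from pulse arrival times
    # at both ends of a superconducting nanowire delay line (illustrative).
    C = 299_792_458.0           # speed of light in vacuum, m/s
    V = 0.02 * C                # pulse velocity, ~2% of c (from the abstract)
    L = 19.7e-3                 # nanowire length, m (from the abstract)

    def locate(t1: float, t2: float) -> tuple[float, float]:
        """t1, t2: arrival times (s) at ends A and B. Returns (x, t0)."""
        x = (V * (t1 - t2) + L) / 2.0   # from t1 - t2 = (2x - L) / V
        t0 = t1 - x / V                 # absorption time
        return x, t0

    # A photon absorbed mid-wire produces equal arrival times:
    x, t0 = locate(1.643e-9, 1.643e-9)
    print(f"x = {x*1e3:.2f} mm, t0 = {t0*1e12:.1f} ps")  # x = 9.85 mm
    ```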

  7. Printed polymer photonic devices for optical interconnect systems

    NASA Astrophysics Data System (ADS)

    Subbaraman, Harish; Pan, Zeyu; Zhang, Cheng; Li, Qiaochu; Guo, L. J.; Chen, Ray T.

    2016-03-01

    Polymer photonic device fabrication usually relies on clean-room processes, including photolithography, e-beam lithography, reactive ion etching (RIE), and lift-off methods, which are expensive and are limited to areas no larger than a wafer. Utilizing a novel and scalable printing process involving ink-jet printing and imprinting, we have fabricated polymer-based photonic interconnect components, such as electro-optic polymer based modulators and ring resonator switches, and thermo-optic polymer switch based delay networks, and demonstrated their operation. Specifically, a modulator operating at 15 MHz and a 2-bit delay network providing up to 35.4 ps of delay are presented. In this paper, we also discuss the manufacturing challenges, such as inspection and quality control, registration, and web control, that need to be overcome in order to make roll-to-roll manufacturing of flexible polymer photonic systems practically viable. We have addressed these challenges, and using our in-house developed hardware and software tools we currently demonstrate <10 μm alignment accuracy at a web speed of 5 m/min. Such a scalable roll-to-roll manufacturing scheme will enable the development of unique optoelectronic devices for a myriad of applications, including communication, sensing, medicine, security, imaging, energy, and lighting.

  8. On delay adjustment for dynamic load balancing in distributed virtual environments.

    PubMed

    Deng, Yunhua; Lau, Rynson W H

    2012-04-01

    Distributed virtual environments (DVEs) have become very popular in recent years due to the rapid growth of applications such as massive multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.

  9. Defining Tolerance: Impacts of Delay and Disruption when Managing Challenged Networks

    NASA Technical Reports Server (NTRS)

    Birrane, Edward J. III; Burleigh, Scott C.; Cerf, Vint

    2011-01-01

    Challenged networks exhibit irregularities in their communication performance stemming from node mobility, power constraints, and impacts from the operating environment. These irregularities manifest as high signal propagation delay and frequent link disruption. Understanding those limits of link disruption and propagation delay beyond which core networking features fail is an ongoing area of research. Various wireless networking communities propose tools and techniques that address these phenomena. Emerging standardization activities within the Internet Research Task Force (IRTF) and the Consultative Committee for Space Data Systems (CCSDS) look to build upon both this experience and scalability analysis. Successful research in this area is predicated upon identifying enablers for common communication functions (notably node discovery, duplex communication, state caching, and link negotiation) and how increased disruptions and delays affect their feasibility within the network. Networks that make fewer assumptions relating to these enablers provide more universal service. Specifically, reliance on node discovery and link negotiation results in network-specific operational concepts rather than scalable technical solutions. Fundamental to this debate are the definitions, assumptions, operational concepts, and anticipated scaling of these networks. This paper presents the commonalities and differences between delay and disruption tolerance, including support protocols and critical enablers. We present where and how these tolerances differ. We propose a set of use cases that must be accommodated by any standardized delay-tolerant network and discuss the implication of these on existing tool development.

  10. Scalable Optical-Fiber Communication Networks

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  11. Beyond NextGen: AutoMax Overview and Update

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal; Alexandrov, Natalia

    2013-01-01

    Main Message: National and Global Needs - Develop scalable airspace operations management system to accommodate increased mobility needs, emerging airspace uses, mix, future demand. Be affordable and economically viable. Sense of Urgency. Saturation (delays), emerging airspace uses, proactive development. Autonomy is Needed for Airspace Operations to Meet Future Needs. Costs, time critical decisions, mobility, scalability, limits of cognitive workload. AutoMax to Accommodate National and Global Needs. Auto: Automation, autonomy, autonomicity for airspace operations. Max: Maximizing performance of the National Airspace System. Interesting Challenges and Path Forward.

  12. Improved work zone design guidelines and enhanced model of travel delays in work zones : Phase I, portability and scalability of interarrival and service time probability distribution functions for different locations in Ohio and the establishment of impr

    DOT National Transportation Integrated Search

    2006-01-01

    Problem: Work zones on heavily traveled divided highways present problems to motorists in the form of traffic delays and increased accident risks due to sometimes reduced motorist guidance, dense traffic, and other driving difficulties. To minimize t...

  13. Bleak Present, Bright Future: Online Episodic Future Thinking, Scarcity, Delay Discounting, and Food Demand.

    PubMed

    Sze, Yan Yan; Stein, Jeffrey S; Bickel, Warren K; Paluch, Rocco A; Epstein, Leonard H

    2017-07-01

    Obesity is associated with steep discounting of the future and increased food reinforcement. Episodic future thinking (EFT), a type of prospective thinking, has been observed to reduce delay discounting (DD) and improve dietary decision making. In contrast, negative income shock (i.e., abrupt transitions to poverty) has been shown to increase discounting and may worsen dietary decision-making. Scalability of EFT training and protective effects of EFT against simulated negative income shock on DD and demand for food were assessed. In two experiments we showed online-administered EFT reliably reduced DD. Furthermore, EFT reduced DD and demand for fast foods even when challenged by negative income shock. Our findings suggest EFT is a scalable intervention that has implications for improving public health by reducing discounting of the future and demand for high energy dense food.
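
    Delay discounting in this literature is commonly summarized with a hyperbolic model, V = A / (1 + kD), where a larger k means steeper devaluation of delayed rewards; EFT interventions aim to lower k. A small sketch of that standard model (the k values below are made up for illustration, not estimates from the study):

    ```python
    # Hyperbolic delay discounting: V = A / (1 + k * D).
    # k values below are illustrative, not fitted values from the study.
    def discounted_value(amount: float, delay_days: float, k: float) -> float:
        return amount / (1.0 + k * delay_days)

    for label, k in [("steep discounter", 0.05), ("after EFT (lower k)", 0.01)]:
        v = discounted_value(100.0, delay_days=180, k=k)
        print(f"{label}: $100 in 180 days is worth ${v:.2f} now")
    ```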

  14. Model-Based Self-Tuning Multiscale Method for Combustion Control

    NASA Technical Reports Server (NTRS)

    Le, Dzu, K.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.

    2006-01-01

    A multi-scale representation of the combustor dynamics was used to create a self-tuning, scalable controller to suppress multiple instability modes in a liquid-fueled, aero-engine-derived combustor operating at engine-like conditions. Its self-tuning features, designed to handle the uncertainties in the combustor dynamics and time delays, are essential for control performance and robustness. The controller was implemented to modulate a high-frequency fuel valve with feedback from dynamic pressure sensors. This scalable algorithm suppressed pressure oscillations of different instability modes by as much as 90 percent without the peak-splitting effect. The self-tuning logic guided the adjustment of controller parameters and converged quickly toward phase-lock for optimal suppression of the instabilities. The forced-response characteristics of the control model compare well with those of the test rig in both the frequency domain and the time domain.

  15. Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott

    2011-01-01

    This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.

  16. Stable scalable control of soliton propagation in broadband nonlinear optical waveguides

    NASA Astrophysics Data System (ADS)

    Peleg, Avner; Nguyen, Quan M.; Huynh, Toan T.

    2017-02-01

    We develop a method for achieving scalable transmission stabilization and switching of N colliding soliton sequences in optical waveguides with broadband delayed Raman response and narrowband nonlinear gain-loss. We show that the dynamics of soliton amplitudes in N-sequence transmission are described by a generalized N-dimensional predator-prey model. Stability and bifurcation analysis for the predator-prey model is used to obtain simple conditions on the physical parameters for robust transmission stabilization as well as on-off and off-on switching of M out of N soliton sequences. Numerical simulations of single-waveguide transmission with a system of N coupled nonlinear Schrödinger equations with 2 ≤ N ≤ 4 show excellent agreement with the predator-prey model's predictions and stable propagation over significantly larger distances compared with other broadband nonlinear single-waveguide systems. Moreover, stable on-off and off-on switching of multiple soliton sequences and stable multiple transmission switching events are demonstrated by the simulations. We discuss the reasons for the robustness and scalability of transmission stabilization and switching in waveguides with broadband delayed Raman response and narrowband nonlinear gain-loss, and explain their advantages compared with other broadband nonlinear waveguides.
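
    A generalized N-dimensional predator-prey model of this kind has, in its generic Lotka-Volterra form, dη_i/dt = η_i (r_i + Σ_j a_ij η_j) for the amplitudes η_i. The sketch below integrates a two-sequence instance of that generic form with scipy; the coefficients are placeholders, not the paper's physical parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic N-dimensional predator-prey (Lotka-Volterra) amplitude model:
    # d(eta_i)/dt = eta_i * (r_i + sum_j A[i, j] * eta_j).
    # r and A below are illustrative placeholders, not the paper's parameters.
    r = np.array([0.2, -0.1])
    A = np.array([[-0.3, -0.2],
                  [ 0.1, -0.4]])

    def rhs(t, eta):
        return eta * (r + A @ eta)

    sol = solve_ivp(rhs, (0.0, 100.0), y0=[0.8, 0.3])
    print(sol.y[:, -1])  # amplitudes settle to a steady state (stable transmission)
    ```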

  17. Cloud-based mobility management in heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Kravchuk, Serhii; Minochkin, Dmytro; Omiotek, Zbigniew; Bainazarov, Ulan; Weryńska-Bieniasz, Róża; Iskakova, Aigul

    2017-08-01

    Mobility management is the key feature that supports the roaming of users between different systems. Handover is an essential aspect of developing solutions that support mobility scenarios. The handover process becomes more complex in a heterogeneous environment compared to a homogeneous one. Seamlessness and reduced delay in servicing handover calls, which can lower the handover dropping probability, also require complex algorithms to provide the desired QoS for mobile users. The challenging problem of increasing the scalability and availability of handover decision mechanisms is discussed. The aim of the paper is to propose a cloud-based handover-as-a-service concept to cope with the challenges that arise.

  18. How to create successful Open Hardware projects — About White Rabbits and open fields

    NASA Astrophysics Data System (ADS)

    van der Bij, E.; Arruat, M.; Cattin, M.; Daniluk, G.; Gonzalez Cobas, J. D.; Gousiou, E.; Lewis, J.; Lipinski, M. M.; Serrano, J.; Stana, T.; Voumard, N.; Wlostowski, T.

    2013-12-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed, and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  19. Scalable analysis of nonlinear systems using convex optimization

    NASA Astrophysics Data System (ADS)

    Papachristodoulou, Antonis

    In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction to the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial or non-polynomial vector fields and switching systems, as well as estimating the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST---a recently developed network congestion control scheme---so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations. We show that axially constant perturbations of the Navier-Stokes equations for Hagen-Poiseuille flow are globally stable, even though the background noise is amplified as R³, where R is the Reynolds number, giving a 'robust yet fragile' interpretation. We also propose a sum of squares methodology for the analysis of systems described by parabolic PDEs. We conclude this work with an account of future research.
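
    The certificate idea is easiest to see in a toy case: for dx/dt = -x³ + y, dy/dt = -x - y³, the candidate V = x² + y² gives V̇ = -2x⁴ - 2y⁴, so -V̇ is a sum of squares and global asymptotic stability is certified. A symbolic check of that computation (a simplified illustration of the method, not the thesis software, which relies on semidefinite programming tools such as SOSTOOLS):

    ```python
    import sympy as sp

    # Toy Lyapunov certificate for dx/dt = -x**3 + y, dy/dt = -x - y**3.
    x, y = sp.symbols("x y", real=True)
    f = sp.Matrix([-x**3 + y, -x - y**3])
    V = x**2 + y**2

    # Vdot = grad(V) . f; expanding shows the cross terms cancel.
    Vdot = sp.expand((sp.Matrix([V]).jacobian([x, y]) @ f)[0])
    print(Vdot)                          # -2*x**4 - 2*y**4
    assert Vdot == -2*x**4 - 2*y**4      # -Vdot is a sum of squares
    ```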

  20. The Simulation of Real-Time Scalable Coherent Interface

    NASA Technical Reports Server (NTRS)

    Li, Qiang; Grant, Terry; Grover, Radhika S.

    1997-01-01

    Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high-performance interconnect for shared memory multiprocessor systems. In this project we investigate an SCI Real-Time Protocol (RTSCI1) using Directed Flow Control Symbols. We studied the issues of efficient generation of control symbols and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both SCI and Flow Control Protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections of the report elaborate on the workload, results and conclusions. Appended to the report is a description of the tool, SES/Workbench, used in our simulation, and internal details of our implementation of the protocol.

  1. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
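
    The layer-to-stream mapping such a scheme exploits can be pictured with a small sketch (concept only, my own illustration; no real SCTP socket API is used): the SVC base layer is assigned to a fully reliable SCTP stream, while enhancement layers ride on streams that may tolerate loss.

    ```python
    # Illustrative mapping of SVC layers onto SCTP streams (concept only).
    SVC_LAYERS = [
        {"layer": "base (AVC-compatible)", "priority": 0},
        {"layer": "enhancement 1 (temporal)", "priority": 1},
        {"layer": "enhancement 2 (quality)", "priority": 2},
    ]

    def assign_streams(layers):
        """One SCTP stream per layer: the base layer gets fully reliable
        delivery; enhancement layers tolerate loss (partial reliability)."""
        return {
            l["layer"]: {"stream_id": i, "reliable": l["priority"] == 0}
            for i, l in enumerate(sorted(layers, key=lambda l: l["priority"]))
        }

    for layer, cfg in assign_streams(SVC_LAYERS).items():
        print(layer, cfg)
    ```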

  2. Multifunctional Dendritic Emitter: Aggregation-Induced Emission Enhanced, Thermally Activated Delayed Fluorescent Material for Solution-Processed Multilayered Organic Light-Emitting Diodes

    PubMed Central

    Matsuoka, Kenichi; Albrecht, Ken; Yamamoto, Kimihisa; Fujita, Katsuhiko

    2017-01-01

    Thermally activated delayed fluorescence (TADF) materials have emerged as promising light sources for third-generation organic light-emitting diodes (OLEDs). Much effort has been invested in the development of small-molecule TADF materials and efficient vacuum-processed TADF-OLEDs. In contrast, only a limited number of solution-processable high-molecular-weight TADF materials, needed for low-cost, large-area, and scalable manufacturing of solution-processed TADF-OLEDs, have been reported so far. In this context, we report benzophenone-core carbazole dendrimers (GnB, n = generation) showing TADF and aggregation-induced emission enhancement (AIEE) properties along with alcohol resistance, enabling further solution-based lamination of organic materials. The dendritic structure was found to play an important role in both the TADF and AIEE activities in the neat films. By using these multifunctional dendritic emitters as non-doped emissive layers, OLED devices with fully solution-processed organic multilayers were successfully fabricated and achieved a maximum external quantum efficiency of 5.7%. PMID:28139768

  3. Multifunctional Dendritic Emitter: Aggregation-Induced Emission Enhanced, Thermally Activated Delayed Fluorescent Material for Solution-Processed Multilayered Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Matsuoka, Kenichi; Albrecht, Ken; Yamamoto, Kimihisa; Fujita, Katsuhiko

    2017-01-01

    Thermally activated delayed fluorescence (TADF) materials have emerged as promising light sources for third-generation organic light-emitting diodes (OLEDs). Much effort has been invested in the development of small-molecule TADF materials and efficient vacuum-processed TADF-OLEDs. In contrast, only a limited number of solution-processable high-molecular-weight TADF materials, needed for low-cost, large-area, and scalable manufacturing of solution-processed TADF-OLEDs, have been reported so far. In this context, we report benzophenone-core carbazole dendrimers (GnB, n = generation) showing TADF and aggregation-induced emission enhancement (AIEE) properties along with alcohol resistance, enabling further solution-based lamination of organic materials. The dendritic structure was found to play an important role in both the TADF and AIEE activities in the neat films. By using these multifunctional dendritic emitters as non-doped emissive layers, OLED devices with fully solution-processed organic multilayers were successfully fabricated and achieved a maximum external quantum efficiency of 5.7%.

  4. Software engineering risk factors in the implementation of a small electronic medical record system: the problem of scalability.

    PubMed

    Chiang, Michael F; Starren, Justin B

    2002-01-01

    The successful implementation of clinical information systems is difficult. In examining the reasons and potential solutions for this problem, the medical informatics community may benefit from the lessons of a rich body of software engineering and management literature about the failure of software projects. Based on previous studies, we present a conceptual framework for understanding the risk factors associated with large-scale projects. However, the vast majority of the existing literature is based on large, enterprise-wide systems, and it is unclear whether those results may be scaled down and applied to smaller projects such as departmental medical information systems. To examine this issue, we discuss the case study of a delayed electronic medical record implementation project in a small specialty practice at Columbia-Presbyterian Medical Center. While the factors contributing to the delay of this small project share some attributes with those found in larger organizations, there are important differences. The significance of these differences for groups implementing small medical information systems is discussed.

  5. Comparison of coherently coupled multi-cavity and quantum dot embedded single cavity systems.

    PubMed

    Kocaman, Serdar; Sayan, Gönül Turhan

    2016-12-12

    Temporal group delays originating from the optical analogue to electromagnetically induced transparency (EIT) are compared in two systems. Similar transmission characteristics are observed between a coherently coupled high-Q multi-cavity array and a single quantum dot (QD) embedded cavity in the weak coupling regime. However, theoretically generated group delay values for the multi-cavity case are around two times higher. Both configurations allow direct scalability for chip-scale optical pulse trapping and coupled-cavity quantum electrodynamics (QED).

  6. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
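
    The bitplane idea can be shown in a few lines: an 8-bit input is the weighted sum of its bitplanes, so a linear operation applied plane by plane, most significant first, yields a monotonically improving approximation that can be truncated at any point. A minimal numpy sketch of the principle (illustrative only; the paper's released designs cover transforms, 2-D convolution, and block matching):

    ```python
    import numpy as np

    # Incremental, bitplane-by-bitplane evaluation of a linear image operation.
    # Truncating the loop early yields a coarser but usable result.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    kernel = np.ones((3, 3)) / 9.0  # simple box filter as the linear task

    def conv2_same(a, k):
        # Tiny direct 'same' convolution (64x64 fixed) to stay dependency-free.
        pa = np.pad(a.astype(float), 1)
        return sum(k[i, j] * pa[i:i+64, j:j+64] for i in range(3) for j in range(3))

    result = np.zeros_like(img, dtype=float)
    for b in range(7, -1, -1):              # MSB first
        plane = (img >> b) & 1              # binary bitplane
        result += (2 ** b) * conv2_same(plane, kernel)
        # 'result' is now exact for the top (8 - b) bitplanes; stopping here
        # trades precision for saved clock cycles (graceful degradation).

    assert np.allclose(result, conv2_same(img, kernel))
    ```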

  7. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    NASA Astrophysics Data System (ADS)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to provide a communication infrastructure that facilitates interoperable environment information services during environment events and crises such as tsunamis and drilling, during which increasing volumes and dimensionality of disparate information sources, including sensor-based and human-based ones, can result and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging to handle changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system failures, e.g. failure, degradation and overloads, during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; more often, interaction is synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or more complexly. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB to handle more complex event processing; and Cloud computing can be used to increase and decrease the ESB resources on demand. To reify this hybrid ESB-centric architecture, we will adopt two complementary approaches: an open source one for scalability and resilience improvement, and a commercial one for ultra-fast messaging, with a bridge between the two to support interoperability. In TRIDEC, to manage such a hybrid messaging system, overlay and underlay management techniques will be adopted. The managers (both global and local) will collect, store and update status information (e.g. CPU utilization, free space, number of clients) and balance the usage, throughput, and delays to improve resilience and scalability. The expected resilience improvements include dynamic failover, self-healing, pre-emptive load balancing, and bottleneck prediction, while the expected improvements for scalability include capacity estimation, an HTTP bridge, and automatic configuration and reconfiguration (e.g. adding or removing clients and servers).

  8. Nine-channel mid-power bipolar pulse generator based on a field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haylock, Ben, E-mail: benjamin.haylock2@griffithuni.edu.au; Lenzini, Francesco; Kasture, Sachin

    Many-channel arbitrary pulse sequence generation is required for the electro-optic reconfiguration of optical waveguide networks in Lithium Niobate. Here we describe a scalable solution to the requirement for mid-power bipolar parallel outputs, based on pulse patterns generated by an externally clocked field programmable gate array. Positive and negative pulses can be generated at repetition rates up to 80 MHz with pulse width adjustable in increments of 1.6 ns across nine independent outputs. Each channel can provide 1.5 W of RF power and can be synchronised with the operation of other components in an optical network, such as light sources and detectors, through an external clock with adjustable delay.

  9. Remote Energy Monitoring System via Cellular Network

    NASA Astrophysics Data System (ADS)

    Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi

    Recently, improving power saving and cost efficiency by monitoring the operation status of various facilities over the network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system over a cellular network. We have developed the proposed monitoring system, evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to 1/10 of that of best-effort transmission.
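
    A minimal sketch of the scheme's core idea (my illustration, not the authors' implementation): each reporting interval, the sender estimates the instantaneous bandwidth and transmits queued readings in priority order until the budget is spent, deferring low-priority data rather than losing high-priority data to congestion.

    ```python
    import heapq

    # Illustrative priority-and-bandwidth-aware sender: per interval, send
    # highest-priority sensor readings first, within the estimated budget.
    def plan_interval(queue, est_bandwidth_bytes):
        """queue: list of (priority, size_bytes, reading); lower value = higher
        priority. Returns readings to send now; the rest stay queued."""
        heapq.heapify(queue)
        budget, to_send = est_bandwidth_bytes, []
        while queue and queue[0][1] <= budget:
            prio, size, reading = heapq.heappop(queue)
            budget -= size
            to_send.append(reading)
        return to_send

    q = [(2, 400, "temperature"), (1, 300, "alarm state"), (3, 900, "log chunk")]
    print(plan_interval(q, est_bandwidth_bytes=800))  # ['alarm state', 'temperature']
    ```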

  10. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
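
    The "quantization based" embedding at the heart of such schemes can be illustrated with scalar quantization index modulation (QIM): a bit selects one of two interleaved quantizer lattices for a wavelet coefficient, and the detector decodes blind by choosing the nearer lattice. A simplified sketch of that building block (not the paper's full binary-tree, distortion-robustness construction):

    ```python
    import numpy as np

    # Scalar QIM: embed one bit per coefficient by quantizing onto one of two
    # interleaved lattices (step DELTA). Simplified illustration only.
    DELTA = 8.0

    def embed(coeff: float, bit: int) -> float:
        offset = bit * DELTA / 2.0
        return DELTA * np.round((coeff - offset) / DELTA) + offset

    def extract(coeff: float) -> int:
        # Blind decode: pick the lattice whose nearest point is closer.
        d = [abs(coeff - embed(coeff, b)) for b in (0, 1)]
        return int(d[1] < d[0])

    coeffs = np.array([13.7, -42.1, 5.2, 88.8])
    bits = [1, 0, 1, 1]
    marked = np.array([embed(c, b) for c, b in zip(coeffs, bits)])
    noisy = marked + np.random.default_rng(1).normal(0, 0.5, 4)  # mild attack
    print([extract(c) for c in noisy])  # recovers [1, 0, 1, 1] at this noise level
    ```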

  11. A scalable healthcare information system based on a service-oriented architecture.

    PubMed

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) built on the HL7 messaging standard. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in terms of rightsizing, service groups, databases, and hardware. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that the proposed SOA-based HIS can deliver system scalability and sustainability in a highly demanding healthcare information system.

  12. A Scalable Framework for CSI Feedback in FDD Massive MIMO via DL Path Aligning

    NASA Astrophysics Data System (ADS)

    Luo, Xiliang; Cai, Penghao; Zhang, Xiaoyu; Hu, Die; Shen, Cong

    2017-09-01

    Unlike in time-division duplexing (TDD) systems, the downlink (DL) and uplink (UL) channels are no longer reciprocal in the case of frequency-division duplexing (FDD). However, some long-term parameters, e.g. the time delays and angles of arrival (AoAs) of the channel paths, still enjoy reciprocity. In this paper, by efficiently exploiting this limited reciprocity, we address DL channel state information (CSI) feedback in a practical wideband massive multiple-input multiple-output (MIMO) system operating in FDD mode. Assuming orthogonal frequency-division multiplexing (OFDM) waveforms and frequency-selective fading channels, we propose a scalable framework for DL pilot design, DL CSI acquisition, and the corresponding CSI feedback in the UL. In particular, the base station (BS) can transmit FFT-based pilots with carefully selected phase shifts. Each user can then rely on the so-called time-domain aggregate channel (TAC) to derive feedback of reduced dimensionality, according to either its own knowledge about the statistics of the DL channels or instructions from the serving BS. We demonstrate that each user needs to feed back only one scalar number per DL channel path for the BS to recover the DL CSIs. Comprehensive numerical results further corroborate our designs.

  13. Parallel processing using an optical delay-based reservoir computer

    NASA Astrophysics Data System (ADS)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, reservoir computing systems based on delay dynamics discussed in the literature are designed by coupling many different stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs). Semiconductor ring lasers are semiconductor lasers whose laser cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of a different nature, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and classification for the nonlinear channel equalization task. We take advantage of the different directional modes to process the individual tasks: each directional mode processes one task, mitigating possible crosstalk between the tasks. Our results indicate that prediction/classification errors comparable to state-of-the-art performance can be obtained even in the presence of noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
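
    A delay-based reservoir computer time-multiplexes one nonlinear node into many "virtual nodes" along the delay line and trains only a linear readout. A compact numpy sketch of that general scheme (a generic single-delay reservoir with a ridge-regression readout and a toy memory task, not the semiconductor ring laser model simulated in the paper):

    ```python
    import numpy as np

    # Generic delay-based reservoir: one nonlinear node + delay line with
    # N_V virtual nodes; only the linear readout is trained (ridge regression).
    rng = np.random.default_rng(0)
    N_V, T = 50, 1000
    mask = rng.uniform(-1, 1, N_V)            # fixed random input mask
    u = rng.uniform(-0.5, 0.5, T)             # input sequence

    states = np.zeros((T, N_V))
    x = np.zeros(N_V)
    for t in range(T):
        # Each virtual node mixes the delayed state with the masked input.
        x = np.tanh(0.4 * x + 0.9 * mask * u[t])
        states[t] = x

    target = np.roll(u, 1)                    # toy task: recall the previous input
    X = np.hstack([states, np.ones((T, 1))])  # add bias column
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_V + 1), X.T @ target)
    nmse = np.mean((X @ w - target) ** 2) / np.var(target)
    print(f"training NMSE: {nmse:.4f}")       # small NMSE => memory task learned
    ```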

  14. Regulation-Structured Dynamic Metabolic Model Provides a Potential Mechanism for Delayed Enzyme Response in Denitrification Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Thomas, Dennis G.; Stegen, James C.

    In a recent study of denitrification dynamics in hyporheic zone sediments, we observed a significant time lag (up to several days) in enzymatic response to changes in substrate concentration. To explore an underlying mechanism and understand the interactive dynamics between enzymes and nutrients, we developed a trait-based model that associates a community's traits with functional enzymes, instead of the typically used species guilds (or functional guilds). This enzyme-based formulation makes it possible to collectively describe the biogeochemical functions of microbial communities without directly parameterizing the dynamics of species guilds, and is therefore scalable to complex communities. As a key component of the modeling, we accounted for microbial regulation occurring through transcriptional and translational processes, whose dynamics were parameterized based on the temporal profiles of enzyme concentrations measured using a new signature peptide-based method. The simulation results using the resulting model showed several days of time lag in enzymatic responses, as observed in experiments. Further, the model showed that the delayed enzymatic reactions could be primarily controlled by transcriptional responses and that the dynamics of transcripts and enzymes are closely correlated. The developed model can serve as a useful tool for predicting biogeochemical processes in natural environments, either independently or through integration with hydrologic flow simulators.
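
    The proposed mechanism, that slow transcription and translation stages insert a lag between substrate availability and enzyme activity, can be reproduced with a minimal two-stage ODE cascade. The sketch below is a generic illustration of that idea (the rate constants are arbitrary, not fitted values from the study):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal transcription -> translation cascade: substrate switched on at
    # t = 0 produces a delayed rise in enzyme level. Rates are illustrative.
    K_TX, D_M = 1.0, 0.5      # transcription rate, mRNA decay (1/day)
    K_TL, D_E = 1.0, 0.2      # translation rate, enzyme decay (1/day)

    def rhs(t, y):
        m, e = y                       # mRNA (transcripts), enzyme
        substrate = 1.0                # substrate present from t = 0 onward
        return [K_TX * substrate - D_M * m,   # transcriptional response
                K_TL * m - D_E * e]           # translational response

    sol = solve_ivp(rhs, (0, 10), [0.0, 0.0], t_eval=np.linspace(0, 10, 11))
    for t, e in zip(sol.t, sol.y[1]):
        print(f"day {t:4.1f}: enzyme = {e:.2f}")  # enzyme lags the substrate step
    ```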

  15. In Vivo Mitochondrial Oxygen Tension Measured by a Delayed Fluorescence Lifetime Technique

    PubMed Central

    Mik, Egbert G.; Johannes, Tanja; Zuurbier, Coert J.; Heinen, Andre; Houben-Weerts, Judith H. P. M.; Balestra, Gianmarco M.; Stap, Jan; Beek, Johan F.; Ince, Can

    2008-01-01

    Mitochondrial oxygen tension (mitoPO2) is a key parameter for cellular function, which is considered to be affected under various pathophysiological circumstances. Although many techniques for assessing in vivo oxygenation are available, no technique for measuring mitoPO2 in vivo exists. Here we report in vivo measurement of mitoPO2 and the recovery of mitoPO2 histograms in rat liver by a novel optical technique under normal and pathological circumstances. The technique is based on oxygen-dependent quenching of the delayed fluorescence lifetime of protoporphyrin IX. Application of 5-aminolevulinic acid enhanced mitochondrial protoporphyrin IX levels and induced oxygen-dependent delayed fluorescence in various tissues, without affecting mitochondrial respiration. Using fluorescence microscopy, we demonstrate in isolated hepatocytes that the signal is of mitochondrial origin. The delayed fluorescence lifetime was calibrated in isolated hepatocytes and isolated perfused livers. Ultimately, the technique was applied to measure mitoPO2 in rat liver in vivo. The results demonstrate mitoPO2 values of ∼30–40 mmHg. mitoPO2 was highly sensitive to small changes in inspired oxygen concentration around atmospheric oxygen level. Ischemia-reperfusion interventions showed altered mitoPO2 distribution, which flattened overall compared to baseline conditions. The reported technology is scalable from microscopic to macroscopic applications, and its reliance on an endogenous compound greatly enhances its potential field of applications. PMID:18641065
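
    Oxygen-dependent quenching of a delayed-fluorescence lifetime is conventionally read out through the Stern-Volmer relation, 1/τ = 1/τ0 + kq·PO2, so a measured lifetime converts directly to an oxygen tension. A small sketch of that conversion (the constants below are placeholders, not the calibration values reported for protoporphyrin IX):

    ```python
    # Stern-Volmer conversion of a delayed-fluorescence lifetime to PO2:
    # 1/tau = 1/tau0 + kq * PO2  =>  PO2 = (1/tau - 1/tau0) / kq.
    # TAU0 and KQ are placeholder values, not the paper's calibration.
    TAU0 = 0.8e-3        # lifetime at zero oxygen, s (illustrative)
    KQ = 500.0           # quenching constant, 1/(s * mmHg) (illustrative)

    def mito_po2(tau: float) -> float:
        """Oxygen tension (mmHg) from a measured delayed-fluorescence lifetime."""
        return (1.0 / tau - 1.0 / TAU0) / KQ

    for tau_us in (100, 200, 400):   # shorter lifetime => more quenching => higher PO2
        print(f"tau = {tau_us} us -> PO2 = {mito_po2(tau_us * 1e-6):.1f} mmHg")
    ```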

  16. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  17. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5°, and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the "reference", statistical skill metrics are calculated and compared across simulations at the 0.25°, 0.5°, and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications for surface heat fluxes in coupled land-atmosphere modeling.
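
    The headline numbers are relative errors of coarser-resolution runoff against the 0.125° reference; the metric itself is simple, as in this sketch (the values are synthetic, purely to show the computation behind the comparison):

    ```python
    import numpy as np

    # Annual-average relative error of runoff versus the finest-resolution
    # reference (synthetic values; only the metric mirrors the comparison).
    def relative_error(runoff, runoff_ref):
        return abs(np.mean(runoff) - np.mean(runoff_ref)) / np.mean(runoff_ref)

    ref = np.array([1.20, 0.90, 1.05, 1.10])        # reference basin runoff
    for res, sim in [("0.25deg", [1.22, 0.95, 1.08, 1.12]),
                     ("1.00deg", [1.35, 1.00, 1.15, 1.25])]:
        print(res, f"{100 * relative_error(np.array(sim), ref):.1f}%")
    # Coarser grids drift further from the reference, as in the paper's results.
    ```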

  18. WHO Parents Skills Training (PST) programme for children with developmental disorders and delays delivered by Family Volunteers in rural Pakistan: study protocol for effectiveness implementation hybrid cluster randomized controlled trial.

    PubMed

    Hamdani, S U; Akhtar, P; Zill-E-Huma; Nazir, H; Minhas, F A; Sikander, S; Wang, D; Servilli, C; Rahman, A

    2017-01-01

    Development disorders and delays are recognised as a public health priority and included in the WHO mental health gap action programme (mhGAP). Parents Skills Training (PST) is recommended as a key intervention for such conditions under the WHO mhGAP intervention guide. However, sustainable and scalable delivery of such evidence-based interventions remains a challenge. This study aims to evaluate the effectiveness and scaled-up implementation of a locally adapted WHO PST programme delivered by family volunteers in rural Pakistan. The study is a two-arm, single-blind, effectiveness-implementation hybrid cluster randomised controlled trial. The WHO PST programme will be delivered by 'family volunteers' to the caregivers of children with developmental disorders and delays in community-based settings. The intervention consists of the WHO PST along with the WHO mhGAP intervention for developmental disorders, adapted for delivery using an android application on a tablet device. A total of 540 parent-child dyads will be recruited from 30 clusters. The primary outcome is the child's functioning, measured by the WHO Disability Assessment Schedule - child version (WHODAS-Child) at 6 months post intervention. Secondary outcomes include children's social communication and joint engagement with their caregiver, social emotional well-being, parental health related quality of life, family empowerment and stigmatizing experiences. Mixed methods will be used to collect data on implementation outcomes. The trial has been retrospectively registered at ClinicalTrials.gov (NCT02792894). This study addresses implementation challenges in the real world by incorporating evidence-based intervention strategies with social, technological and business innovations. If proven effective, the study will contribute to scaled-up implementation of evidence-based packages for public mental health in low resource settings. Registered with ClinicalTrials.gov as Family Networks (FaNs) for Children with Developmental Disorders and Delays. Identifier: NCT02792894. Registered on 6 July 2016.

  19. Cluster Based Location-Aided Routing Protocol for Large Scale Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Dong, Liang; Liang, Taotao; Yang, Xinyu; Zhang, Deyun

    Routing algorithms with low overhead, stable links, and independence of the total number of nodes in the network are essential for the design and operation of large-scale wireless mobile ad hoc networks (MANET). In this paper, we develop and analyze the Cluster Based Location-Aided Routing Protocol for MANET (C-LAR), a scalable and effective routing algorithm for MANET. C-LAR runs on top of an adaptive cluster cover of the MANET, which can be created and maintained using, for instance, a weight-based distributed algorithm. This algorithm takes into consideration the node degree, mobility, relative distance, battery power and link stability of mobile nodes. The hierarchical structure stabilizes the end-to-end communication paths and improves the network's scalability, such that the routing overhead does not become tremendous in large scale MANET. The clusterheads form a connected virtual backbone in the network, determine the network's topology and stability, and provide an efficient approach to minimizing the flooding traffic during route discovery and speeding up this process as well. Furthermore, it is important to investigate how to control the total number of nodes participating in a routing establishment process so as to improve the network layer performance of MANET. C-LAR uses geographical location information provided by the Global Positioning System (GPS) to assist routing. The location information of the destination node is used to predict a smaller rectangular, isosceles-triangular, or circular request zone, selected according to the relative location of the source and the destination, that covers the estimated region in which the destination may be located. Thus, instead of searching for the route in the entire network blindly, C-LAR confines the route searching space to a much smaller estimated range. Simulation results have shown that C-LAR outperforms other protocols significantly in route set-up time, routing overhead, mean delay and packet collision, while simultaneously maintaining low average end-to-end delay, high delivery success ratio, low control overhead, and low route discovery frequency.
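
    The request-zone computation that C-LAR inherits from location-aided routing can be sketched directly for the rectangular variant: the expected zone is a circle of radius v·(t1 - t0) around the destination's last known position, and the request zone is the smallest axis-aligned rectangle covering both the source and that circle; only nodes inside it forward route requests. A minimal sketch of that geometry (my own illustration of the scheme, not the authors' code):

    ```python
    # LAR-style rectangular request zone: only nodes inside the rectangle
    # forward the route request, shrinking the search space (illustrative).
    def request_zone(src, dest_last, speed, elapsed):
        """Smallest axis-aligned rectangle covering src and the expected zone
        (circle of radius speed * elapsed around the destination's last fix)."""
        r = speed * elapsed
        (sx, sy), (dx, dy) = src, dest_last
        return (min(sx, dx - r), min(sy, dy - r),   # x_min, y_min
                max(sx, dx + r), max(sy, dy + r))   # x_max, y_max

    def in_zone(node, zone):
        x, y = node
        x0, y0, x1, y1 = zone
        return x0 <= x <= x1 and y0 <= y <= y1

    zone = request_zone(src=(0, 0), dest_last=(100, 80), speed=5, elapsed=4)
    print(zone)                      # (0, 0, 120, 100)
    print(in_zone((60, 50), zone))   # True: this node forwards the request
    print(in_zone((60, 140), zone))  # False: outside, drops the request
    ```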

  20. Is supervision necessary? Examining the effects of internet-based CBT training with and without supervision.

    PubMed

    Rakovshik, Sarah G; McManus, Freda; Vazquez-Montes, Maria; Muse, Kate; Ougrin, Dennis

    2016-03-01

    To investigate the effect of Internet-based training (IBT), with and without supervision, on therapists' (N = 61) cognitive-behavioral therapy (CBT) skills in routine clinical practice. Participants were randomized into 3 conditions: (1) Internet-based training with use of a consultation worksheet (IBT-CW); (2) Internet-based training with CBT supervision via Skype (IBT-S); and (3) "delayed-training" controls (DTs), who did not receive the training until all data collection was completed. The IBT participants received access to training over a period of 3 months. CBT skills were evaluated at pre-, mid- and posttraining/wait using assessor competence ratings of recorded therapy sessions. Hierarchical linear analysis revealed that the IBT-S participants had significantly greater CBT competence at posttraining than did IBT-CW and DT participants at both the mid- and posttraining/wait assessment points. There were no significant differences between IBT-CW and the delayed (no)-training DTs. IBT programs that include supervision may be a scalable and effective method of disseminating CBT into routine clinical practice, particularly for populations without ready access to more-traditional "live" methods of training. There was no evidence for a significant effect of IBT without supervision over a nontraining control, suggesting that merely providing access to IBT programs may not be an effective method of disseminating CBT to routine clinical practice.

  1. A modified implementation of tristate inverter based static master-slave flip-flop with improved power-delay-area product.

    PubMed

    Singh, Kunwar; Tiwari, Satish Chandra; Gupta, Maneesha

    2014-01-01

    The paper introduces novel architectures for the implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, the traditional C(2)MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C(2)MOS based flip-flop designs mC(2)MOSff1 and mC(2)MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC(2)MOSff1. Postlayout simulations indicate that the mC(2)MOSff1 flip-flop shows a 12.4% improvement in PDAP (power-delay-area product) at a 16X capacitive load when compared with the transmission gate flip-flop (TGFF), which is considered the best design alternative among conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter is designed down to the layout level. LVS and parasitic extraction were carried out in Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to demonstrate the scalability of both designs at modern process nodes.

  2. A Modified Implementation of Tristate Inverter Based Static Master-Slave Flip-Flop with Improved Power-Delay-Area Product

    PubMed Central

    Tiwari, Satish Chandra; Gupta, Maneesha

    2014-01-01

    The paper introduces novel architectures for the implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, the traditional C2MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C2MOS based flip-flop designs mC2MOSff1 and mC2MOSff2 are realized using only sixteen transistors each, while the number of clocked transistors is also reduced in the case of mC2MOSff1. Postlayout simulations indicate that the mC2MOSff1 flip-flop shows a 12.4% improvement in PDAP (power-delay-area product) at a 16X capacitive load when compared with the transmission gate flip-flop (TGFF), which is considered the best design alternative among conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight-bit asynchronous counter is designed down to the layout level. LVS and parasitic extraction were carried out in Calibre, whereas layouts were implemented using IC Station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to demonstrate the scalability of both designs at modern process nodes. PMID:24723808

  3. Improved work zone design guidelines and enhanced model of travel delays in work zones : Phase I, portability and scalability of interarrival and service time probability distribution functions for different locations in Ohio and the establishment of impr

    DOT National Transportation Integrated Search

    2006-01-01

    The project focuses on two major issues - the improvement of current work zone design practices and an analysis of vehicle interarrival time (IAT) and speed distributions for the development of a digital computer simulation model for queues and t...

  4. High-performance and scalable metal-chalcogenide semiconductors and devices via chalco-gel routes

    PubMed Central

    Jo, Jeong-Wan; Kim, Hee-Joong; Kwon, Hyuck-In; Kim, Jaekyun; Ahn, Sangdoo; Kim, Yong-Hoon; Lee, Hyung-ik

    2018-01-01

    We report a general strategy for obtaining high-quality, large-area metal-chalcogenide semiconductor films from precursors combining chelated metal salts with chalcoureas or chalcoamides. Using conventional organic solvents, such precursors enable the expeditious formation of chalco-gels, which are easily transformed into the corresponding high-performance metal-chalcogenide thin films with large, uniform areas. Diverse metal chalcogenides and their alloys (MQx: M = Zn, Cd, In, Sb, Pb; Q = S, Se, Te) are successfully synthesized at relatively low processing temperatures (<400°C). The versatility of this scalable route is demonstrated by the fabrication of large-area thin-film transistors (TFTs), optoelectronic devices, and integrated circuits on a 4-inch Si wafer and 2.5-inch borosilicate glass substrates in ambient air using CdS, CdSe, and In2Se3 active layers. The CdSe TFTs exhibit a maximum field-effect mobility greater than 300 cm2 V−1 s−1 with an on/off current ratio of >107 and good operational stability (threshold voltage shift < 0.5 V at a positive gate bias stress of 10 ks). In addition, metal chalcogenide–based phototransistors with a photodetectivity of >1013 Jones and seven-stage ring oscillators operating at a speed of ~2.6 MHz (propagation delay of < 27 ns per stage) are demonstrated. PMID:29662951

  5. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
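
    A minimal sketch of what such an abstaining decision policy can look like; the confidence band and cost values are illustrative assumptions, not the paper's derived thresholds:

      def decide(p_event, c_delay=1.0, c_false=4.0, conf_band=(0.2, 0.8)):
          """Abstain unless the estimated event probability is confidently low
          or high; otherwise compare expected costs of acting versus waiting."""
          lo, hi = conf_band
          if lo < p_event < hi:
              return "abstain"                      # confidence criteria not met
          exp_cost_alarm = (1 - p_event) * c_false  # risk of an incorrect assessment
          exp_cost_wait = p_event * c_delay         # risk of a delayed detection
          return "alarm" if exp_cost_alarm < exp_cost_wait else "no-alarm"

      print(decide(0.92), decide(0.50), decide(0.05))   # alarm abstain no-alarm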

  6. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  7. Moving image analysis to the cloud: A case study with a genome-scale tomographic study

    NASA Astrophysics Data System (ADS)

    Mader, Kevin; Stampanoni, Marco

    2016-01-01

    Over the last decade, the time required to measure a terabyte of microscopic imaging data has gone from years to minutes. This shift has moved many of the challenges away from experimental design and measurement to scalable storage, organization, and analysis. As many scientists and scientific institutions lack training and competencies in these areas, major bottlenecks have arisen and led to substantial delays and gaps between measurement, understanding, and dissemination. We present in this paper a framework for analyzing large 3D datasets using cloud-based computational and storage resources. We demonstrate its applicability by showing the setup and costs associated with the analysis of a genome-scale study of bone microstructure. We then evaluate the relative advantages and disadvantages associated with local versus cloud infrastructures.

  8. Space Network Time Distribution and Synchronization Protocol Development for Mars Proximity Link

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Gao, Jay L.; Mills, David

    2010-01-01

    Time distribution and synchronization in deep space networks are challenging due to long propagation delays, spacecraft movements, and relativistic effects. Further, the Network Time Protocol (NTP), designed for terrestrial networks, may not work properly in space. In this work, we consider a time distribution protocol based on time message exchanges similar to those of NTP. We present the Proximity-1 Space Link Interleaved Time Synchronization (PITS) algorithm, which works with the CCSDS Proximity-1 Space Data Link Protocol. The PITS algorithm provides faster time synchronization via two-way time transfer over proximity links, improves scalability as the number of spacecraft increases, lowers the storage space required for collecting time samples, and is robust against the packet loss and duplication that the underlying protocol mechanisms may introduce.
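
    The core of two-way time transfer is the NTP-style offset/delay computation; the following is a sketch of the standard formulas (not the PITS implementation itself), assuming a roughly symmetric link:

      def offset_and_delay(t1, t2, t3, t4):
          """Two-way time transfer: t1 = request sent (node A's clock),
          t2 = request received (B), t3 = reply sent (B), t4 = reply
          received (A). Returns B's clock offset relative to A and the
          round-trip propagation delay."""
          offset = ((t2 - t1) + (t3 - t4)) / 2.0   # B's clock minus A's
          rtt = (t4 - t1) - (t3 - t2)              # round-trip propagation time
          return offset, rtt

      # Example: B runs 5 s ahead of A; one-way light time is 3 s.
      print(offset_and_delay(t1=0.0, t2=8.0, t3=9.0, t4=7.0))  # (5.0, 6.0)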

  9. AsyncStageOut: Distributed user data management for CMS Analysis

    NASA Astrophysics Data System (ADS)

    Riahi, H.; Wildish, T.; Ciangottini, D.; Hernández, J. M.; Andreeva, J.; Balcas, J.; Karavakis, E.; Mascheroni, M.; Tanasijczuk, A. J.; Vaandering, E. W.

    2015-12-01

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.

  10. AsyncStageOut: Distributed User Data Management for CMS Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, H.; Wildish, T.; Ciangottini, D.

    2015-12-23

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.

  11. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the principal cost-effective and expandable platforms for high-end scientific computing. Therefore, scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for the simulation of geodynamo and planetary dynamos, and for the simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication; the latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. The model is also benchmarked against a published numerical dynamo model solution.
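
    A minimal master-slave sketch, written here in mpi4py for brevity (an assumption; the model itself targets the C/Fortran MPI libraries), illustrating the first of the two communication architectures:

      # run with: mpiexec -n 4 python script.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:                          # master: scatter work, gather results
          chunks = [list(range(i, i + 4)) for i in range(0, 4 * (size - 1), 4)]
          for worker, chunk in enumerate(chunks, start=1):
              comm.send(chunk, dest=worker, tag=1)
          results = [comm.recv(source=w, tag=2) for w in range(1, size)]
          print("partial sums:", results)
      else:                                  # slave: compute and report back
          chunk = comm.recv(source=0, tag=1)
          comm.send(sum(chunk), dest=0, tag=2)

    Because every message passes through rank 0, the master becomes a communication bottleneck as the node count grows, which is why the divide-and-conquer variant scales better.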

  12. Impact of scaling voltage and size on the performance of Side-contacted Field Effect Diode

    NASA Astrophysics Data System (ADS)

    Touchaei, Behnam Jafari; Manavizadeh, Negin

    2018-05-01

    The Side-contacted Field Effect Diode (S-FED), with low leakage current and a high Ion/Ioff ratio, has recently been introduced to suppress short-channel effects in the nanoscale regime. The voltage and size scalability of S-FEDs, and their effects on power consumption, propagation delay time, and power-delay product, are studied in this article. The most attractive property relates to the channel-length-to-channel-thickness ratio, which is significantly reduced in the S-FED compared with the MOSFET, while the gates' control over the channel improves and the off-state current drops dramatically. This promising advantage not only improves important S-FED characteristics such as the subthreshold slope but also eliminates latch-up and the floating-body effect.

  13. Preventing delayed diagnosis of cancer: clinicians’ views on main problems and solutions

    PubMed Central

    Car, Lorainne Tudor; Papachristou, Nikolaos; Urch, Catherine; Majeed, Azeem; El–Khatib, Mona; Aylin, Paul; Atun, Rifat; Car, Josip; Vincent, Charles

    2016-01-01

    Background Delayed diagnosis is a major contributing factor to the UK's lower cancer survival compared with many European countries. In the UK, there is significant national variation in early cancer diagnosis. Healthcare providers can offer insight into local priorities for timely cancer diagnosis. In this study, we aimed to identify the main problems and solutions relating to delayed diagnosis of cancer according to cancer care clinicians. Methods We developed and implemented a new priority-setting approach called PRIORITIZE and invited North West London cancer care clinicians to identify and prioritize the main causes of and solutions to delayed diagnosis of cancer. Results Clinicians identified a number of concrete problems and solutions relating to delayed diagnosis of cancer. Raising public awareness, patient education, and better access to specialist care and diagnostic testing were seen as the highest priorities. The identified suggestions focused mostly on delays during referrals from primary to secondary care. Conclusions Many identified priorities were feasible and affordable, and converged around common themes such as public awareness, care continuity and length of consultation. As a timely, proactive and scalable priority-setting approach, PRIORITIZE could be implemented as a routine preventative system for determining patient safety issues by frontline staff. PMID:28028437

  14. Scalable Lunar Surface Networks and Adaptive Orbit Access

    NASA Technical Reports Server (NTRS)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  15. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  16. A Randomized Trial to Evaluate the Efficacy of a Web-Based HIV Behavioral Intervention for High-Risk African American Women.

    PubMed

    Billings, Douglas W; Leaf, Samantha L; Spencer, Joy; Crenshaw, Terrlynn; Brockington, Sheila; Dalal, Reeshad S

    2015-07-01

    The aim of this study was to develop and test a cost-effective, scalable HIV behavioral intervention for African American women. Eighty-three African American women were recruited from a community health center and randomly assigned to either the web-based Safe Sistah program or to a delayed HIV education control condition. The primary outcome was self-reported condom use. Secondary measures assessed other aspects of the gender-focused training included in Safe Sistah. Participants completed self-report assessments prior to randomization and 1 and 4 months after their program experience. Across the entire study period, women in the experimental condition significantly increased their condom use relative to controls (F = 5.126, p = 0.027). Significant effects were also found for sexual communication, sex refusal, condom use after alcohol consumption, and HIV prevention knowledge. These findings indicate that this web-based program could be an important component in reducing the HIV disparities among African American women.

  17. Equalizer: a scalable parallel rendering framework.

    PubMed

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.

  18. Ambient-aware continuous care through semantic context dissemination.

    PubMed

    Ongenae, Femke; Famaey, Jeroen; Verstichel, Stijn; De Zutter, Saar; Latré, Steven; Ackaert, Ann; Verhoeve, Piet; De Turck, Filip

    2014-12-04

    The ultimate ambient-intelligent care room contains numerous sensors and devices to monitor the patient, sense and adjust the environment and support the staff. This sensor-based approach results in a large amount of data, which can be processed by current and future applications, e.g., task management and alerting systems. Today, nurses are responsible for coordinating all these applications and the supplied information, which reduces the added value and slows down the adoption rate. The aim of the presented research is the design of a pervasive and scalable framework that is able to optimize continuous care processes by intelligently reasoning on the large amount of heterogeneous care data. The developed Ontology-based Care Platform (OCarePlatform) consists of modular components that each perform a specific reasoning task. Consequently, they can easily be replicated and distributed. Complex reasoning is achieved by combining the results of different components. To ensure that the components only receive information that is of interest to them at that time, they are able to dynamically generate and register filter rules with a Semantic Communication Bus (SCB). This SCB semantically filters all the heterogeneous care data according to the registered rules by using a continuous care ontology. The SCB can be distributed, and a cache can be employed to ensure scalability. A prototype implementation is presented, consisting of a new-generation nurse call system supported by a localization and a home automation component. The amount of data that is filtered and the performance of the SCB are evaluated by testing the prototype in a living lab. The delay introduced by processing the filter rules is negligible when 10 or fewer rules are registered. The OCarePlatform allows disseminating relevant care data to the different applications and additionally supports composing complex applications from a set of smaller independent components. This way, the platform significantly reduces the amount of information that needs to be processed by the nurses. The delay resulting from processing the filter rules is linear in the number of rules. Distributed deployment of the SCB and use of a cache allow further improvement of these performance results.
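
    A toy sketch of the filter-rule idea: components register predicates with a bus that forwards only matching events. The rule structure here is an illustrative stand-in, far simpler than the ontology-based reasoning of the actual SCB:

      class SemanticBus:
          """Forwards each event only to components whose registered filter
          rules match it (a stand-in for semantic/ontology reasoning)."""
          def __init__(self):
              self.rules = []                      # list of (component, predicate)

          def register(self, component, predicate):
              self.rules.append((component, predicate))

          def publish(self, event):
              for component, predicate in self.rules:
                  if predicate(event):
                      component(event)

      bus = SemanticBus()
      bus.register(lambda e: print("nurse call:", e),
                   lambda e: e.get("type") == "call" and e.get("priority") == "high")
      bus.publish({"type": "call", "priority": "high", "room": 12})  # delivered
      bus.publish({"type": "temperature", "value": 21.5})            # filtered out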

  19. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate-distortion characteristics, which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to channel conditions, coding methodologies, layer rates, and number of layers.
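
    A sketch of the kind of discrete allocation search such a framework implies: exhaustive enumeration over small sets of source and channel rates, with the distortion lookup standing in for the experimentally obtained rate-distortion characteristics (all names and values illustrative):

      from itertools import product

      def allocate(layers, src_rates, chan_rates, budget, distortion):
          """Pick a (source rate, channel code rate) pair per layer that
          minimizes total distortion under a transmitted-bit budget.
          distortion(layer, rs, rc) is an assumed experimental lookup."""
          best, best_d = None, float("inf")
          for combo in product(product(src_rates, chan_rates), repeat=layers):
              total = sum(rs / rc for rs, rc in combo)   # code rate rc in (0, 1]
              if total > budget:
                  continue                               # over the bit budget
              d = sum(distortion(i, rs, rc) for i, (rs, rc) in enumerate(combo))
              if d < best_d:
                  best, best_d = combo, d
          return best, best_d

      # toy demo: distortion falls with source rate, rises with weaker protection
      d = lambda i, rs, rc: 1.0 / rs + 0.5 * rc
      print(allocate(2, [64, 128], [0.5, 0.75], budget=400, distortion=d))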

  20. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity at reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
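
    A compact sketch of shortest-path routing combined with first-fit wavelength assignment under a hottest-request-first ordering; the sorting key (demand intensity, then hop distance) is an assumption based on the abstract's description, not the paper's exact policy:

      import networkx as nx

      def rwa(graph, requests, n_wavelengths):
          """requests: list of (src, dst, intensity). Hottest first: higher
          demand intensity and longer paths are processed earlier."""
          used = {tuple(sorted(e)): set() for e in graph.edges}  # busy wavelengths per link
          hops = dict(nx.all_pairs_shortest_path_length(graph))
          order = sorted(requests, key=lambda r: (r[2], hops[r[0]][r[1]]), reverse=True)
          assignment = {}
          for src, dst, _ in order:
              path = nx.shortest_path(graph, src, dst)
              links = [tuple(sorted(l)) for l in zip(path, path[1:])]
              for w in range(n_wavelengths):                     # first-fit wavelength
                  if all(w not in used[l] for l in links):
                      for l in links:
                          used[l].add(w)
                      assignment[(src, dst)] = (path, w)
                      break
          return assignment

      ring = nx.cycle_graph(6)
      print(rwa(ring, [(0, 3, 5.0), (1, 4, 2.0), (2, 5, 9.0)], n_wavelengths=4))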

  1. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks

    PubMed Central

    Abba, Sani; Lee, Jeong-A

    2015-01-01

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieve this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failures, and for efficient adaptation to instantaneous network topology changes. Simulations comparing ASAART with the SHR and SSR protocols in five different scenarios with transient and permanent node failures exhibit greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC-layer packets, packet error rate, and efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236

  2. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    PubMed

    Abba, Sani; Lee, Jeong-A

    2015-08-18

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieve this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failures, and for efficient adaptation to instantaneous network topology changes. Simulations comparing ASAART with the SHR and SSR protocols in five different scenarios with transient and permanent node failures exhibit greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC-layer packets, packet error rate, and efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  3. Performance analysis of data intensive cloud systems based on data management and replication: a survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, Saif Ur Rehman; Khan, Samee U.; Ewen, Sam J.

    2015-03-14

    As we delve deeper into the ‘Digital Age’, we witness an explosive growth in the volume, velocity, and variety of the data available on the Internet. For example, in 2012 about 2.5 quintillion bytes of data was created daily, originating from a myriad of sources and applications including mobile devices, sensors, individual archives, social networks, the Internet of Things, enterprises, cameras, software logs, etc. Such ‘data explosions’ have led to one of the most challenging research issues of the current Information and Communication Technology era: how to optimally manage (e.g., store, replicate, and filter) such large amounts of data, and how to identify new ways to analyze large amounts of data for unlocking information. It is clear that such large data streams cannot be managed by setting up on-premises enterprise database systems, as that leads to a large up-front cost in buying and administering the hardware and software systems. Therefore, next-generation data management systems must be deployed on the cloud. The cloud computing paradigm provides scalable and elastic resources, such as data and services, accessible over the Internet. Every Cloud Service Provider must assure that data is efficiently processed and distributed in a way that does not compromise end-users' Quality of Service (QoS) in terms of data availability, data search delay, data analysis delay, and the like. In this perspective, data replication is used in the cloud to improve the performance (e.g., read and write delay) of applications that access data. Through replication, a data-intensive application or system can achieve high availability, better fault tolerance, and data recovery. In this paper, we survey data management and replication approaches (from 2007 to 2011) developed by both industrial and research communities. The focus of the survey is to discuss and characterize the existing approaches to data replication and management that tackle resource usage and QoS provisioning with different levels of efficiency. Moreover, the breakdown of both influential expressions (data replication and management) into the different QoS attributes they provide is deliberated. Furthermore, the performance advantages and disadvantages of data replication and management approaches in cloud computing environments are analyzed. Open issues and future challenges related to data consistency, scalability, load balancing, processing and placement are also reported.

  4. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.

    PubMed

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/

  5. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of hybrid scalable/multiple-description coding for wireless transmission.

  6. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have proven efficient for H.264 multiview coding. Though these prediction structures, along with QP cascading schemes, provide superior compression efficiency compared with the traditional IBBP coding scheme, they cannot fully meet the temporal scalability requirements of the bit stream. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but falls short in compression efficiency compared with the former scheme. In this paper we propose to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency compared with state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base layer packets. Simulation results also show that this enables the enhancement views to be extracted at a lower frame rate (one half or one quarter of the base view) with an average extraction time per view component of only 0.38 ms.
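
    For reference, a small sketch of how temporal identifiers fall out of a standard dyadic hierarchical GOP; this is the conventional assignment, assumed here rather than taken from the paper:

      def temporal_id(frame_idx, gop_size=8):
          """Dyadic hierarchy: frame 0 of each GOP is the base layer (tid 0);
          deeper levels are decoded only at higher frame rates."""
          pos = frame_idx % gop_size
          if pos == 0:
              return 0
          levels = gop_size.bit_length() - 1            # log2(gop_size)
          return levels - (pos & -pos).bit_length() + 1  # levels minus trailing zeros

      print([temporal_id(i) for i in range(8)])   # [0, 3, 2, 3, 1, 3, 2, 3]
      # Dropping tid 3 halves the frame rate; dropping tids 2-3 quarters it.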

  7. A High-Speed Design of Montgomery Multiplier

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi

    With the increase of the key lengths used in public-key cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication has become a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. First, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Second, a high-radix clock-saving dataflow is proposed to support high-radix operation and a one-clock-cycle delay in the dataflow. Finally, a hardware-reuse architecture is proposed to reduce the hardware cost, and a parallel radix-16 design of the data path is proposed to increase speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 KGates, the clock frequency is 180 MHz and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for use in high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption of any key length up to the size of the on-chip memory.
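
    For reference, the arithmetic that such hardware accelerates: a plain-Python textbook Montgomery multiplication (REDC), not the paper's modified high-radix pipelined design:

      def montgomery_setup(n, width):
          r = 1 << width                          # R = 2^width > n, gcd(R, n) = 1
          n_prime = (-pow(n, -1, r)) % r          # n' = -n^{-1} mod R (Python 3.8+)
          return r, n_prime

      def mont_mul(a, b, n, r, n_prime):
          """Returns a*b*R^{-1} mod n, replacing division by n with shifts."""
          t = a * b
          m = ((t & (r - 1)) * n_prime) & (r - 1)   # m = t * n' mod R
          u = (t + m * n) >> (r.bit_length() - 1)   # (t + m*n) / R, exact by construction
          return u - n if u >= n else u

      # 7 * 11 mod 13, moved into and out of the Montgomery domain:
      n = 13
      r, n_prime = montgomery_setup(n, 8)
      a_bar, b_bar = (7 * r) % n, (11 * r) % n
      prod_bar = mont_mul(a_bar, b_bar, n, r, n_prime)
      print(mont_mul(prod_bar, 1, n, r, n_prime))    # 12 == (7 * 11) % 13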

  8. Scalable service architecture for providing strong service guarantees

    NASA Astrophysics Data System (ADS)

    Christin, Nicolas; Liebeherr, Joerg

    2002-07-01

    For the past decade, a great deal of Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, interest has shifted to service architectures with low overhead. However, these newer service architectures provide only weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires maintaining only little state information. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network; instead, we plan to exploit the feedback mechanisms of TCP congestion control algorithms to regulate the traffic entering the network.

  9. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, the rapid development of the planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system. In response to the current status of and problems in urban planning, the scalability and extensibility of PM2006 are introduced, which can be seen in its business-oriented workflow extensibility, the scalability of its DLL-based architecture, its flexibility with respect to GIS and database platforms, the scalability of its data updating and maintenance, and so on. It is verified that the PM2006 system has good extensibility and scalability, meets the requirements of all levels of administrative divisions, and can adapt to ever-growing changes in the urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  10. Adaptive format conversion for scalable video coding

    NASA Astrophysics Data System (ADS)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  11. Load Balancing in Distributed Web Caching: A Novel Clustering Approach

    NASA Astrophysics Data System (ADS)

    Tiwari, R.; Kumar, K.; Khan, G.

    2010-11-01

    The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to a half of requests; the remaining requests are still sent to the remote origin servers. In this paper we develop an algorithm for distributed Web caching that incorporates cooperation among the proxy servers of a cluster. The algorithm combines distributed Web-cache concepts with a static hierarchy of geographically based clusters of level-one proxy servers, together with a dynamic mechanism for enlisting additional proxy servers when a cluster becomes congested. Clustering addresses the congestion and scalability problems, resulting in a higher cache hit ratio and lower latency for requested pages. The algorithm also guarantees data consistency between the origin server's objects and the proxy cache objects.
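
    A toy sketch of one way intra-cluster cooperation can work: each URL hashes to a home proxy inside the cluster, so the cluster shares one logical cache rather than each proxy fetching from the origin independently. The hash assignment is an illustrative stand-in, not the paper's algorithm:

      import hashlib

      class Cluster:
          def __init__(self, proxies):
              self.caches = {p: {} for p in sorted(proxies)}   # proxy -> local cache

          def home(self, url):
              """Deterministically map a URL to one proxy in the cluster."""
              h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
              return sorted(self.caches)[h % len(self.caches)]

          def get(self, url, fetch_origin):
              proxy = self.home(url)
              cache = self.caches[proxy]
              if url not in cache:                 # cluster-wide miss: one origin fetch
                  cache[url] = fetch_origin(url)
              return proxy, cache[url]             # later requests hit, whoever asks

      cluster = Cluster(["proxy-a", "proxy-b", "proxy-c"])
      print(cluster.get("http://example.org/page", lambda u: "<html>...</html>"))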

  12. A remote instruction system empowered by tightly shared haptic sensation

    NASA Astrophysics Data System (ADS)

    Nishino, Hiroaki; Yamaguchi, Akira; Kagawa, Tsuneo; Utsumiya, Kouichi

    2007-09-01

    We present a system that realizes an on-line instruction environment among physically separated participants based on a multi-modal communication strategy. In addition to visual and acoustic information, the communication modalities commonly used in network environments, our system provides a haptic channel to intuitively convey a partner's sense of touch. The human touch sensation, however, is very sensitive to delays and jitter in networked virtual reality (NVR) systems, so a method to compensate for such negative factors must be provided. We show an NVR architecture that implements a basic framework that can be shared by various applications and effectively deals with these problems. We take a hybrid approach, achieving data consistency with a client-server model and scalability with a peer-to-peer model. As an application built on the proposed architecture, a remote instruction system for teaching handwritten characters and line patterns over a Korea-Japan high-speed research network is also described.

  13. Image analysis supported moss cell disruption in photo-bioreactors.

    PubMed

    Lucumi, A; Posten, C; Pons, M-N

    2005-05-01

    Diverse methods for the disruption of cell entanglements and pellets of the moss Physcomitrella patens were tested in order to improve the homogeneity of suspension cultures. The morphological characterization of the moss was carried out by means of image analysis. Selected morphological parameters were defined and compared to the reduction of the carbon dioxide fixation, and the released pigments after cell disruption. The size control of the moss entanglements based on the rotor stator principle allowed a focused shear stress, avoiding a severe reduction in the photosynthesis. Batch cultures of P. patens in a 30.0-l pilot tubular photo-bioreactor with cell disruption showed no significant variation in growth rate and a delayed cell differentiation, when compared to undisrupted cultures. A highly controlled photoautotrophic culture of P. patens in a scalable photo-bioreactor was established, contributing to the development required for the future use of mosses as producers of relevant heterologous proteins.

  14. Innovative Method for Developing a Helium Pressurant Tank Suitable for the Upper Stage Flight Experiment

    NASA Technical Reports Server (NTRS)

    DeLay, Tom K.; Munafo, Paul (Technical Monitor)

    2001-01-01

    The AFRL USFE project is an experimental test bed for new propulsion technologies. It will utilize ambient-temperature fuel and oxidizer (kerosene and hydrogen peroxide). The system is pressure-fed, not pump-fed, and will utilize a helium pressurant tank to drive the system. Mr. DeLay has developed a method for cost-effectively producing a unique, large pressurant tank that is not commercially available. The pressure vessel is a layered composite structure with an electroformed metallic permeation barrier. The design/process is scalable and easily adaptable to different configurations with minimal cost in tooling development. One-third-scale tanks have already been fabricated and are scheduled for testing. The full-scale pressure vessel (50" diameter) design will be refined based on the performance of the sub-scale tank. The pressure vessels have been designed to operate at 6,000 psi; a PV/W of 1.92 million is anticipated.

  15. Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks

    PubMed Central

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly, so it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for the self-organization of WMNs based on the Optimized Link State Routing (OLSR) and ad hoc on-demand distance vector (AODV) routing protocols, as well as on software-agent technology. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584

  16. Self-configuration and self-optimization process in heterogeneous wireless networks.

    PubMed

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly, so it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for the self-organization of WMNs based on the Optimized Link State Routing (OLSR) and ad hoc on-demand distance vector (AODV) routing protocols, as well as on software-agent technology. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed.

  17. LHCb experience with LFC replication

    NASA Astrophysics Data System (ADS)

    Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.

    2008-07-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.

  18. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop

    PubMed Central

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054

  19. Scuba: scalable kernel-based gene prioritization.

    PubMed

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple-kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of its scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful for prioritizing candidate genes, particularly when their number is large or when the input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
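
    A bare-bones sketch of the multiple-kernel idea at the core of such methods: candidates are scored against known disease genes through a weighted combination of per-source kernels. The weights are fixed here for illustration, whereas Scuba learns them by optimizing the margin distribution:

      import numpy as np

      def combined_kernel(kernels, weights):
          """Convex combination of per-source kernel matrices."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                              # keep weights on the simplex
          return sum(wi * K for wi, K in zip(w, kernels))

      def score_candidates(K, seed_idx, candidate_idx):
          """Rank candidates by mean kernel similarity to known disease genes."""
          return {c: K[c, seed_idx].mean() for c in candidate_idx}

      rng = np.random.default_rng(0)
      X1, X2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 8))   # two data sources
      K = combined_kernel([X1 @ X1.T, X2 @ X2.T], weights=[0.7, 0.3])
      print(score_candidates(K, seed_idx=[0, 1], candidate_idx=[3, 4, 5]))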

  20. QCA Gray Code Converter Circuits Using LTEx Methodology

    NASA Astrophysics Data System (ADS)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-07-01

    Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm for continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, little attention has been paid to QCA instantiations of the Gray code converters that are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters which can be exclusively used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.
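
    For reference, the logic the converters implement, shown in software: binary-to-Gray needs one XOR per bit, while Gray-to-binary needs a prefix XOR over the higher-order bits, which is the source of the high fan-in mentioned above:

      def b2g(b):
          """Binary to Gray: g_i = b_i XOR b_{i+1}."""
          return b ^ (b >> 1)

      def g2b(g):
          """Gray to binary: each bit is the XOR of all higher-order Gray bits."""
          b = 0
          while g:
              b ^= g
              g >>= 1
          return b

      assert all(g2b(b2g(n)) == n for n in range(256))   # round-trips exactly
      print([b2g(n) for n in range(8)])                  # [0, 1, 3, 2, 6, 7, 5, 4]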

  1. QCA Gray Code Converter Circuits Using LTEx Methodology

    NASA Astrophysics Data System (ADS)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-04-01

    Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm for continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, little attention has been paid to QCA instantiations of the Gray code converters that are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters which can be exclusively used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.

  2. A microfluidic timer for timed valving and pumping in centrifugal microfluidics.

    PubMed

    Schwemmer, F; Zehnle, S; Mark, D; von Stetten, F; Zengerle, R; Paust, N

    2015-03-21

    Accurate timing of microfluidic operations is essential for the automation of complex laboratory workflows, in particular for the supply of sample and reagents. Here we present a new unit operation for timed valving and pumping in centrifugal microfluidics. It is based on the temporary storage of pneumatic energy and the time-delayed sudden release of that energy. The timer is loaded at a relatively higher spinning frequency. The countdown is started by reducing to a relatively lower release frequency, at which the timer is released after a pre-defined delay time. We demonstrate timing for 1) the sequential release of 4 liquids at times of 2.7 s ± 0.2 s, 14.0 s ± 0.5 s, 43.4 s ± 1.0 s and 133.8 s ± 2.3 s, 2) timed valving of typical assay reagents (contact angles 36-78°, viscosities 0.9-5.6 mPa s) and 3) on-demand valving of liquids from 4 inlet chambers in any user-defined sequence controlled by the spinning protocol. The microfluidic timer is compatible with all wetting properties and viscosities of common assay reagents and requires neither assistive equipment nor coatings. It can be monolithically integrated into a microfluidic test carrier and is compatible with scalable fabrication technologies such as thermoforming or injection molding.

  3. Predictive functional control for active queue management in congested TCP/IP networks.

    PubMed

    Bigdeli, N; Haeri, M

    2009-01-01

    Predictive functional control (PFC) is proposed as a new active queue management (AQM) method for dynamic TCP networks supporting explicit congestion notification (ECN). The controller's ability to handle system delay, along with its simplicity and low computational load, makes PFC a privileged AQM method for high-speed networks. Moreover, considering the disturbance term (which represents model/process mismatches, external disturbances, and existing noise) in the control formulation adds a level of robustness to the PFC-AQM controller. This is an important and desired property in the control of dynamically varying computer networks. In this paper, the controller is designed based on a small-signal linearized fluid-flow model of TCP/AQM networks. A closed-loop transfer-function representation of the system is then derived to analyze robustness with respect to the network and controller parameters. Both the analytical results and packet-level ns-2 simulations show that the developed controller outperforms other well-known AQM methods, such as RED, PI, and REM (which are also simulated for comparison), in both queue regulation and resource utilization. Fast response, low queue fluctuations (and consequently low delay jitter), high link utilization, good disturbance rejection, scalability, and a low packet-marking probability are further features of the developed method.
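
    To make the control idea concrete, here is a heavily simplified one-step predictive controller acting on a first-order queue model. This is a generic PFC-style sketch under invented parameters (a, b, lam and the toy plant are assumptions chosen for illustration), not the linearized TCP/AQM fluid-flow model or the controller derived in the paper.

```python
def pfc_step(q, q_ref, a, b, lam):
    """One-step PFC on a toy first-order queue model  q[k+1] = a*q[k] + b*p[k].

    The controller asks the queue to approach q_ref along a geometric
    reference trajectory with decay factor lam, then solves for the
    marking probability p[k] that lands on that trajectory.
    """
    q_next_desired = q_ref + lam * (q - q_ref)
    p = (q_next_desired - a * q) / b
    return min(max(p, 0.0), 1.0)        # marking probability stays in [0, 1]

# Toy simulation: drive a 300-packet backlog toward 100 packets.
# b < 0 because a higher marking probability shrinks the queue.
q, trace = 300.0, []
for _ in range(15):
    p = pfc_step(q, q_ref=100.0, a=0.95, b=-80.0, lam=0.6)
    q = 0.95 * q - 80.0 * p             # same toy plant
    trace.append(round(q, 1))
print(trace)
```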

  4. Improved inter-layer prediction for light field content coding with display scalability

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Ducla Soares, Luís; Nunes, Paulo

    2016-09-01

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field adjustment. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display-scalable coding solution is essential. In this context, this paper proposes an improved display-scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.

  5. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest-generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve scalability and parallel performance. In this paper, we propose a new framework based on the intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
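
    A minimal sketch of the run-time idea: lossy-compress one pixel's spectral vector with a wavelet transform before it is exchanged. It uses the PyWavelets package and a simple keep-the-largest-coefficients rule; the wavelet choice, the keep fraction and the toy spectrum are illustrative assumptions, whereas the paper's framework selects the loss adaptively at run time.

```python
import numpy as np
import pywt  # PyWavelets

def compress_pixel_vector(spectrum, wavelet="db2", level=3, keep=0.2):
    """Wavelet-transform one hyperspectral pixel vector and zero out all but
    the largest fraction `keep` of coefficients before exchanging it."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

# Toy 224-band pixel spectrum; reconstruct and check the distortion.
spectrum = np.sin(np.linspace(0, 8 * np.pi, 224))
recon = pywt.waverec(compress_pixel_vector(spectrum), "db2")[:224]
print(float(np.abs(spectrum - recon).max()))
```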

  6. Introducing keytagging, a novel technique for the protection of medical image-based tests.

    PubMed

    Rubio, Óscar J; Alesanco, Álvaro; García, José

    2015-08-01

    This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) with stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information with the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition, keytags can be protected with standard Cryptographic Message Syntax, and the keytagging process can easily be combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 ms and ≃90 ms, respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization; it is able to adapt in real time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based, MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base layer is computed, and the resulting FGS residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristic of SNR residual signals decays rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we conclude that for an optimal complexity versus coding-efficiency trade-off, only a limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
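
    The layering described above can be sketched in a few lines: compute the FGS residual against the decoded base layer, then truncate it to its most significant bit planes, mimicking a cut through an embedded bit stream. This is a schematic illustration only (no entropy coding, invented toy block), not the MPEG-4 FGS coder itself.

```python
import numpy as np

def fgs_residual(original, decoded_base):
    """FGS-style residual: difference between source and decoded base layer."""
    return original.astype(np.int16) - decoded_base.astype(np.int16)

def embedded_bitplanes(residual, n_planes):
    """Keep only the n most significant magnitude bit planes of the residual
    (sign handled separately), mimicking an embedded bit-stream truncation."""
    sign = np.sign(residual)
    mag = np.abs(residual)
    keep = max(int(mag.max()).bit_length() - n_planes, 0)
    return sign * ((mag >> keep) << keep)

# Toy 4x4 block: the "base layer" quantizes the source to steps of 8.
orig = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
base = (orig // 8) * 8
print(embedded_bitplanes(fgs_residual(orig, base), n_planes=2))
```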

  8. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.

  9. Video Conferences through the Internet: How to Survive in a Hostile Environment

    PubMed Central

    Fernández, Carlos; Fernández-Navajas, Julián; Sequeira, Luis; Casadesus, Luis

    2014-01-01

    This paper analyzes and compares two different video conference solutions, widely used in corporate and home environments, with a special focus on the mechanisms used for adapting the traffic to the network status. The results show how these mechanisms are able to provide good quality in the hostile environment of the public Internet, a best-effort network without delay or delivery guarantees. Both solutions are evaluated in a laboratory, where different network impairments (bandwidth limit, delay, and packet loss) are introduced, in both the uplink and the downlink, and the reaction of the applications is measured. The tests show how these solutions modify their packet size and interpacket time in order to increase or reduce the amount of data sent. One of the solutions also uses a scalable video codec, able to adapt the traffic to the network status and to the end devices. PMID:24605066

  10. Delay-bandwidth product of electromagnetically induced transparency media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tidstroem, Jonas; Jaenes, Peter; Andersson, L. Mauritz

    2007-05-15

    The limitations on the delay-bandwidth product (DBP) in an electromagnetically induced transparency medium are investigated analytically by studying the susceptibility of the system, derived through Lindblad's master equation, including dephasing. The effect of inhomogeneous broadening is treated. It is shown that the DBP for a given material is fundamentally limited by the frequency-dependent absorption, while the residual absorption limits the penetration length of a pulse. Simple expressions for the optimal choice of parameters to maximize the DBP are derived. Also, the length of a device is presented as a function of DBP and control-field Rabi frequency. Supporting these results, numerical calculations are carried out through the Maxwell-Bloch equations in the slowly varying envelope approximation. The results are scalable, hence they apply to the case of atoms or molecules in a gas as well as quantum dots and wells.

  11. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    NASA Astrophysics Data System (ADS)

    Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad

    2013-12-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. When a peer with additional upload bandwidth departs, the overlay network becomes very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. Because there is always a peer or a live video streaming server available to upload the stream, the overlay network is stable and robust to churn. Peers are placed according to their upload and download bandwidth, which improves load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure for distributing the peers' additional upload bandwidth allocates it to heterogeneous-strength peers with a fair-treatment approach and to homogeneous-strength peers with a uniform approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  12. On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets

    DOE PAGES

    Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...

    2015-01-01

    We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
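
    For readers unfamiliar with the preconditioners being compared, the sketch below sets up an ILU-preconditioned GMRES solve with SciPy on a stand-in 2-D Poisson system (the actual solver operates on the discretized first-order Stokes equations). The matrix, drop tolerance and fill factor are illustrative assumptions; an AMG preconditioner could be swapped in via an external package such as PyAMG.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix as a stand-in for a discretized momentum balance.
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# ILU-preconditioned GMRES, option (1) in the study above.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      np.linalg.norm(A @ x - b))       # residual norm of the solve
```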

  13. Towards a Scalable, Biomimetic, Antibacterial Coating

    NASA Astrophysics Data System (ADS)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic leeching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates, however none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces 2) assessed the potential these for poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria, without the use of any chemical antibiotic agents. Such nanotopographic coatings can be applied to implantable polymer medical devices with scalable, commercializable processes, and may deter or delay biofilm formation, potentially improving patient outcomes. This thesis also opens the door for adaptation of antibacterial, nanopillared surfaces for other applications including other medical devices, marine applications and environmental surfaces.

  14. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve SNR-scalable bitstream up to lossless reconstruction based upon the subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000 thanks to their inter-frame coding nature. Also they are shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.

  15. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  17. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as to exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and to restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.

  18. Towards Rapid Re-Certification Using Formal Analysis

    DTIC Science & Technology

    2015-07-22

    profiles will help ensure that information assurance requirements are commensurate with risk and scalable based on an application's changing external... agility in certification processes. Software re-certification processes require significant expenditure in order to provide evidence of information

  19. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Working Towards a Scalable Model of Problem-Based Learning Instruction in Undergraduate Engineering Education

    ERIC Educational Resources Information Center

    Mantri, Archana

    2014-01-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instructions for three courses in Electronics and…

  1. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
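
    The complete subtree method mentioned above assigns keys to subtrees of a binary tree of users, and revocation amounts to covering the non-revoked leaves with as few clean subtrees as possible. Below is a minimal sketch of that cover computation over a heap-numbered tree; it shows the key-assignment combinatorics only, and the encryption itself (per Lewko-Waters) is omitted. The function name and toy parameters are invented for the example.

```python
def cs_cover(n_leaves, revoked):
    """Complete subtree cover: maximal subtrees containing no revoked user.
    Users sit at leaves n_leaves .. 2*n_leaves - 1 of a heap-numbered tree
    (node 1 is the root; node i has children 2i and 2i+1)."""
    revoked_nodes = set()
    for leaf in revoked:
        node = n_leaves + leaf
        while node:                     # mark the whole root-to-leaf path
            revoked_nodes.add(node)
            node //= 2
    cover, stack = [], [1]
    while stack:
        node = stack.pop()
        if node not in revoked_nodes:
            cover.append(node)          # clean subtree: one key covers it all
        elif node < n_leaves:           # internal node on a revoked path
            stack += [2 * node, 2 * node + 1]
    return cover

# 8 users, users 2 and 5 revoked: the rest are covered by four subtree keys.
print(cs_cover(8, revoked=[2, 5]))
```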

  2. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine

    PubMed Central

    Peterson, Kevin J.; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459

  3. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.

  4. Progress toward scalable tomography of quantum maps using twirling-based methods and information hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, Cecilia C.; Theoretische Physik, Universitaet des Saarlandes, D-66041 Saarbruecken; Departament de Fisica, Universitat Autonoma de Barcelona, E-08193 Bellaterra

    2010-06-15

    We present in a unified manner the existing methods for scalable partial quantum process tomography. We focus on two main approaches: the one presented in Bendersky et al. [Phys. Rev. Lett. 100, 190403 (2008)] and the ones described, respectively, in Emerson et al. [Science 317, 1893 (2007)] and Lopez et al. [Phys. Rev. A 79, 042328 (2009)], which can be combined together. The methods share an essential feature: they are based on the idea that the tomography of a quantum map can be efficiently performed by studying certain properties of a twirling of such a map. From this perspective, in this paper we present extensions, improvements, and comparative analyses of the scalable methods for partial quantum process tomography. We also clarify the significance of the extracted information, and we introduce interesting and useful properties of the χ-matrix representation of quantum maps that can be used to establish a clearer path toward achieving full tomography of quantum processes in a scalable way.

  5. Nanocomposites from Solution-Synthesized PbTe-BiSbTe Nanoheterostructure with Unity Figure of Merit at Low-Medium Temperatures (500-600 K).

    PubMed

    Xu, Biao; Agne, Matthias T; Feng, Tianli; Chasapis, Thomas C; Ruan, Xiulin; Zhou, Yilong; Zheng, Haimei; Bahk, Je-Hyeong; Kanatzidis, Mercouri G; Snyder, Gerald Jeffrey; Wu, Yue

    2017-03-01

    A scalable, low-temperature solution process is used to synthesize precursor material for Pb-doped Bi0.7Sb1.3Te3 thermoelectric nanocomposites. The controllable Pb doping leads to an increase in the optical bandgap, thus delaying the onset of bipolar conduction. Furthermore, the solution synthesis enables nanostructuring, which greatly reduces thermal conductivity. As a result, this material exhibits zT = 1 over the 513-613 K range. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Chayoung; Ahn, Jinho

    In sensor networks, there are many overlapped multicast groups, because numerous subscribers, each with potentially varying specific interests, query every event from sensors/publishers. Gossip-based communication protocols are promising as potential solutions providing scalability in the publish/subscribe (P/S) paradigm in sensor networks. Moreover, despite the importance of both guaranteeing message delivery order and supporting overlapped multicast groups in sensor or P2P networks, little research has been done on gossip-based protocols that satisfy all these requirements. In this paper, we present two causally ordered delivery-guaranteeing protocols for overlapped multicast groups. One is based on sensor-brokers as delegates; the other is based on local views and delegates representing subscriber subgroups. In the sensor-broker-based protocol, sensor-brokers organize the overlapped multicast networks according to subscribers' interests. The message delivery order is guaranteed consistently, and all multicast messages are delivered to overlapped subscribers by the sensor-broker using gossip-based protocols. These features make the sensor-broker-based protocol significantly more scalable than protocols based on hierarchical membership lists of dedicated groups, such as traditional committee protocols. The subscriber-delegate-based protocol is stronger than fully decentralized protocols that guarantee causally ordered delivery based only on local views, because the message delivery order is guaranteed consistently by all corresponding members of the groups, including delegates. This feature makes the subscriber-delegate protocol a hybrid approach that improves the inherent scalability of multicast via gossip-based techniques in all communications.

  7. Study on Dissemination Patterns in Location-Aware Gossiping Networks

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Baba, Teruyuki; Yoshikawa, Takashi; Morikawa, Hiroyuki

    We study the properties of information dissemination over location-aware gossiping networks supporting location-based real-time communication applications. Gossiping is a promising method for quickly disseminating messages in a large-scale system, but when applying it to information dissemination for location-aware applications, it is important to consider the network topology and the patterns of spatial dissemination over the network in order to deliver messages effectively to potentially interested users. To this end, we propose a continuous-space network model, extended from Kleinberg's small-world model, that is applicable to actual location-based applications. Analytical and simulation-based studies show that the proposed network achieves high dissemination efficiency, resulting from geographically neutral dissemination patterns as well as selective dissemination to proximate users. We have designed a highly scalable location management method capable of promptly updating the network topology in response to node movement, and have implemented a distributed simulator to perform dynamic target-pursuit experiments as one example of applications that are most sensitive to message forwarding delay. The experimental results show that the proposed network surpasses other types of networks in pursuit efficiency and achieves the desirable dissemination patterns.
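
    A concrete way to picture the Kleinberg-style rule underlying such location-aware topologies: each node picks long-range gossip partners with probability proportional to d^(-alpha). The sketch below is a generic continuous-space illustration with alpha = 2 (the exponent Kleinberg showed to be optimal for 2-D lattices); the point set, seed and function names are assumptions, not the paper's exact model.

```python
import math, random

def long_range_contact(src, nodes, alpha=2.0, rng=random):
    """Pick a gossip partner with probability proportional to
    d(src, v) ** -alpha, favoring nearby nodes but allowing long jumps."""
    others = [v for v in nodes if v != src]
    weights = [math.dist(src, v) ** -alpha for v in others]
    return rng.choices(others, weights=weights, k=1)[0]

# Toy usage: 100 uniform points in the unit square; contacts for the first
# node cluster near it, with occasional long jumps keeping dissemination global.
random.seed(7)
nodes = [(random.random(), random.random()) for _ in range(100)]
print([long_range_contact(nodes[0], nodes) for _ in range(5)])
```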

  8. High-flux ionic diodes, ionic transistors and ionic amplifiers based on external ion concentration polarization by an ion exchange membrane: a new scalable ionic circuit platform.

    PubMed

    Sun, Gongchen; Senapati, Satyajyoti; Chang, Hsueh-Chia

    2016-04-07

    A microfluidic ion exchange membrane hybrid chip is fabricated using polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (>100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems.

  9. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits the adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses (single-site and/or inter-site), and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on different platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
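
    The inversion loop described above, quasi-Newton optimization driven by an adjoint-computed gradient of a regularized misfit, can be miniaturized as follows. Here the forward operator is a random linear map, so the "adjoint" gradient is analytic; in the real solver the forward model is the 3-D extrEMe simulation. All names and parameters in the sketch are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy regularized inversion: recover model m from data d = G(m) + noise.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20))
m_true = rng.normal(size=20)
d = G @ m_true + 0.01 * rng.normal(size=40)
lam = 0.1                                    # regularization weight

def misfit_and_grad(m):
    r = G @ m - d
    f = 0.5 * r @ r + 0.5 * lam * m @ m      # data misfit + Tikhonov term
    g = G.T @ r + lam * m                    # gradient via the adjoint G.T
    return f, g

res = minimize(misfit_and_grad, np.zeros(20), jac=True, method="L-BFGS-B")
print(res.fun, np.linalg.norm(res.x - m_true))
```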

  10. The Political Economy of E-Learning Educational Development Strategies, Standardisation and Scalability

    ERIC Educational Resources Information Center

    Kenney, Jacqueline; Hermens, Antoine; Clarke, Thomas

    2004-01-01

    The development of e-learning by government through policy, funding allocations, research-based collaborative projects and alliances has increased recently in both developed and under-developed nations. The paper notes that government, industry and corporate users are increasingly focusing on standardisation issues and the scalability of…

  11. Slices: A Scalable Partitioner for Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, H. Q.; Ferraro, R. D.

    1995-01-01

    A parallel partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The element based partitioner can handle mixtures of different element types. All algorithms adopted in the partitioner are scalable, including a communication template for unpredictable incoming messages, as shown in actual timing measurements.

  12. Volume server: A scalable high speed and high capacity magnetic tape archive architecture with concurrent multi-host access

    NASA Technical Reports Server (NTRS)

    Rybczynski, Fred

    1993-01-01

    A major challenge facing data processing centers today is data management. This includes the storage of large volumes of data and access to it. Current media storage for large data volumes is typically off line and frequently off site in warehouses. Access to data archived in this fashion can be subject to long delays, errors in media selection and retrieval, and even loss of data through misplacement or damage to the media. Similarly, designers responsible for architecting systems capable of continuous high-speed recording of large volumes of digital data are faced with the challenge of identifying technologies and configurations that meet their requirements. Past approaches have tended to evaluate the combination of the fastest tape recorders with the highest capacity tape media and then to compromise technology selection as a consequence of cost. This paper discusses an architecture that addresses both of these challenges and proposes a cost effective solution based on robots, high speed helical scan tape drives, and large-capacity media.

  13. Performances of the PIPER scalable child human body model in accident reconstruction

    PubMed Central

    Giordano, Chiara; Kleiven, Svein

    2017-01-01

    Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to child accident reconstruction using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM, at different ages and in different positions obtained with the PIPER tool. Overall, the PIPER scalable child HBM predicted reasonably well the injury severity and location for the children involved in the real-life crash scenarios documented in the medical records. The developed methodology and workflow are essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to have implications for improved safety designs for better protection of children in traffic accidents. PMID:29135997

  14. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic change in the topology of an ad hoc network makes it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research in ad hoc networks focuses on routing and medium access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node in case of link failure is proposed, as sketched below. The proposed protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modifications. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. The simulation results make clear that the local link repairing method improves the scalability of the network. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
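
    The local-repair idea, searching for a short detour to the next-to-next node after a link failure instead of rebuilding the whole route, can be sketched with a plain adjacency-list graph. This illustrates the routing idea only (the topology and function names are invented), not the protocol's actual control packets.

```python
def local_repair(graph, broken_from, broken_to, next_to_next):
    """On failure of link (broken_from -> broken_to), look for a two-hop
    detour through a neighbor that still reaches the next-to-next node."""
    for nbr in graph.get(broken_from, ()):
        if nbr == broken_to:
            continue
        if next_to_next in graph.get(nbr, ()):
            return [broken_from, nbr, next_to_next]   # patched sub-route
    return None                                        # fall back to route error

# Adjacency lists for a small ad hoc topology; link B -> C has just failed.
graph = {
    "A": ["B"],
    "B": ["A", "C", "E"],
    "C": ["B", "D"],
    "D": ["C", "E"],
    "E": ["B", "D"],
}
print(local_repair(graph, "B", "C", "D"))   # -> ['B', 'E', 'D']
```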

  16. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications is...networking, we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task...distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst

  17. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, Lee M; Sheldon, Frederick T

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  18. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, but they can also be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability of a program to scale with workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless, the surge in volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
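
    As an example of the multiple-executions idea, the sketch below applies a metamorphic relation (permuting the input reads must leave the output unchanged) to a stand-in k-mer-counting "pipeline". The pipeline and test names are invented for illustration; in practice the program under test is an arbitrary black box whose exact expected output is unknown.

```python
import random

def pipeline(reads):
    """Stand-in for the program under test: counts 3-mers across reads."""
    counts = {}
    for read in reads:
        for i in range(len(read) - 2):
            kmer = read[i:i + 3]
            counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def metamorphic_permutation_test(reads, trials=10, seed=42):
    """Metamorphic relation: shuffling the input reads must not change the
    output. Each trial is one extra execution of the program."""
    expected = pipeline(reads)
    rng = random.Random(seed)
    for _ in range(trials):
        shuffled = reads[:]
        rng.shuffle(shuffled)
        assert pipeline(shuffled) == expected
    return True

print(metamorphic_permutation_test(["ACGTAC", "GGTACC", "TTACGA"]))
```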

  19. Fluorides and Other Preventive Strategies for Tooth Decay.

    PubMed

    Horst, Jeremy A; Tanzer, Jason M; Milgrom, Peter M

    2018-04-01

    We focus on scalable public health interventions that prevent and delay the development of caries and enhance resistance to dental caries lesions. These interventions should occur throughout the life cycle, and need to be age appropriate. Mitigating disease transmission and enhancing resistance are achieved through use of various fluorides, sugar substitutes, mechanical barriers such as pit-and-fissure sealants, and antimicrobials. A key aspect is counseling and other behavioral interventions that are designed to promote use of disease transmission-inhibiting and tooth resistance-enhancing agents. Advocacy for public water fluoridation and sugar taxes is an appropriate dental public health activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.

  1. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  2. Secure transport and adaptation of MC-EZBC video utilizing H.264-based transport protocols

    PubMed Central

    Hellwagner, Hermann; Hofbauer, Heinz; Kuschnig, Robert; Stütz, Thomas; Uhl, Andreas

    2012-01-01

    Universal Multimedia Access (UMA) calls for solutions where content is created once and subsequently adapted to given requirements. With regard to UMA and scalability, which is often required due to the wide variety of end clients, the best-suited codecs are wavelet-based (like the MC-EZBC) due to their inherently high number of scaling options. However, most transport technologies for delivering video to end clients are targeted toward the H.264/AVC standard or, if scalability is required, H.264/SVC. In this paper we introduce a mapping of the MC-EZBC bitstream to existing H.264/SVC-based streaming and scaling protocols. This enables the use of highly scalable wavelet-based codecs on the one hand and the utilization of already existing network technologies without accruing high implementation costs on the other. Furthermore, we evaluate different scaling options in order to choose the best option for given requirements. Additionally, we evaluate different encryption options based on transport and bitstream encryption for use cases where digital rights management is required. PMID:26869746

  3. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for VR-based medical applications. It treats deformation as a localized force-transmittal process governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters using reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environments.
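
    The BFS-based force transmittal can be pictured as rings of vertices around the contact point, with the applied displacement attenuated ring by ring and cut off at the penetration depth. Below is a toy sketch; the attenuation rule, chain mesh and names are invented for illustration, whereas the actual model tunes its parameters against FEM reference data.

```python
from collections import deque

def bfs_deform(adj, touched, displacement, max_depth, falloff=0.5):
    """Propagate a displacement outward from the touched vertex with BFS,
    attenuating it at each ring; max_depth plays the role of the adjustable
    penetration depth that trades accuracy for speed."""
    offsets = {touched: displacement}
    queue, seen = deque([(touched, 0)]), {touched}
    while queue:
        v, depth = queue.popleft()
        if depth == max_depth:
            continue
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                offsets[u] = offsets[v] * falloff
                queue.append((u, depth + 1))
    return offsets

# Chain of 6 vertices; poke vertex 0 with a unit displacement, depth 3.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(bfs_deform(adj, touched=0, displacement=1.0, max_depth=3))
```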

  4. CASTOR: Widely Distributed Scalable Infospaces

    DTIC Science & Technology

    2008-11-01

    Progress against planned objectives: Enable nimble apps that react fast as...generation of scalable, reliable, ultra-fast event notification in Linux data centers. • Maelstrom, a spin-off from Ricochet, offers a powerful new option...out potential enhancements to WS-EVENTING and WS-NOTIFICATION based on our work. Potential impact for the warfighter: QSM achieves extremely fast

  5. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    DTIC Science & Technology

    2011-01-01

    present performance statistics to explain the scalability behavior. Keywords-atmospheric models, time integrators, MPI, scalability, performance; I...across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α

  6. Scalable L-infinite coding of meshes.

    PubMed

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
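
    The distinction between the two distortion senses can be made concrete in a few lines: the L-2 (mean-square) metric averages vertex errors, while the L-infinite metric bounds the single worst vertex, which is the guarantee the proposed coder advertises at every decoding point. This is a minimal sketch on made-up vertex arrays, not the MESHGRID rate-allocation itself.

        import numpy as np

        def mesh_distortions(original, decoded):
            # per-vertex Euclidean error between two (N, 3) vertex arrays
            err = np.linalg.norm(original - decoded, axis=1)
            return np.sqrt((err ** 2).mean()), err.max()  # (RMS / L-2 sense, L-infinite sense)

        orig = np.random.rand(1000, 3)
        dec = orig + np.random.uniform(-1e-3, 1e-3, orig.shape)  # quantization-like noise
        l2, linf = mesh_distortions(orig, dec)
        assert linf <= np.sqrt(3) * 1e-3   # an L-infinite-scalable coder promises such a bound
        print(l2, linf)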

  7. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as experiments on real robots, validate the effectiveness of this work.

  8. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  9. Optically Driven Spin Based Quantum Dots for Quantum Computing - Research Area 6 Physics 6.3.2

    DTIC Science & Technology

    2015-12-15

    quantum dots (SAQD) in Schottky diodes. Based on spins in these dots, a scalable architecture has been proposed [Adv. in Physics, 59, 703 (2010)] by us...housed in two coupled quantum dots with tunneling between them, as described above, may not be scalable but can serve as a node in a quantum network. The...tunneling-coupled two-electron spin ground states in the vertically coupled quantum dots for "universal computation" two spin qubits within the universe of

  10. A Mobile IPv6 based Distributed Mobility Management Mechanism of Mobile Internet

    NASA Astrophysics Data System (ADS)

    Yan, Shi; Jiayin, Cheng; Shanzhi, Chen

    A flatter architecture is one of the trends of the mobile Internet. Traditional centralized mobility management mechanisms face challenges such as scalability and UE reachability. An MIPv6-based distributed mobility management mechanism is proposed in this paper. Some important network entities and signaling procedures are defined. UE reachability is also addressed through an extension to DNS servers. Simulation results show that the proposed approach can overcome the scalability problem of the centralized scheme.

  11. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they will be assigned to routes in descending order of reliability. The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network, ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
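
    The layer-to-route policy of the second and third tiers can be sketched compactly: scalable layers, base layer first, are assigned to mesh routes in descending order of measured reliability, so the base layer always travels the most reliable path. The names, reliability values, and clamping rule below are illustrative assumptions, not the paper's routing agent.

        def assign_layers_to_routes(layers, routes):
            # layers ordered base-first; routes ranked by descending reliability
            ranked = sorted(routes, key=lambda r: r["reliability"], reverse=True)
            return {layer: ranked[min(i, len(ranked) - 1)]["id"]
                    for i, layer in enumerate(layers)}

        layers = ["base", "enh1", "enh2"]              # H.264/SVC layer ids (illustrative)
        routes = [{"id": "A", "reliability": 0.99},
                  {"id": "B", "reliability": 0.92},
                  {"id": "C", "reliability": 0.85}]
        print(assign_layers_to_routes(layers, routes))  # base -> A, enh1 -> B, enh2 -> C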

  12. A C-Te-based binary OTS device exhibiting excellent performance and high thermal stability for selector application.

    PubMed

    Chekol, Solomon Amsalu; Yoo, Jongmyung; Park, Jaehyuk; Song, Jeonghwan; Sung, Changhyuck; Hwang, Hyunsang

    2018-08-24

    In this letter, we demonstrate a new binary ovonic threshold switching (OTS) selector device scalable down to ø30 nm based on C-Te. Our proposed selector device exhibits outstanding performance such as a high switching ratio (I_on/I_off > 10^5), an extremely low off-current (~1 nA), an extremely fast operating speed of <10 ns (transition time of <2 ns and delay time of <8 ns), high endurance (10^9 cycles), and high thermal stability (>450 °C). The observed high thermal stability is caused by the relatively small atomic size of C, compared to Te, which can effectively suppress the segregation and crystallization of Te in the OTS film. Furthermore, to confirm the functionality of the selector in a crossbar array, we evaluated a 1S-1R device by integrating our OTS device with a ReRAM (resistive random access memory) device. The 1S-1R integrated device exhibits successful suppression of leakage current at the half-selected cell and shows an excellent read-out margin (>2^12 word lines) in a fast read operation.

  13. An empirical evaluation of lightweight random walk based routing protocol in duty cycle aware wireless sensor networks.

    PubMed

    Mian, Adnan Noor; Fatima, Mehwish; Khan, Raees; Prakash, Ravi

    2014-01-01

    Energy efficiency is an important design paradigm in Wireless Sensor Networks (WSNs), and energy consumption in dynamic environments is even more critical. Duty cycling of sensor nodes is used to address the energy consumption problem. However, along with its advantages, duty-cycle-aware networking introduces complexities such as synchronization and latency. Due to their inherent characteristics, many traditional routing protocols perform poorly in densely deployed, duty-cycle-aware WSNs when sensor nodes have high mobility. In this paper we first present a three-message-exchange Lightweight Random Walk Routing (LRWR) protocol and then evaluate its performance in WSNs for routing low-data-rate packets. Through NS-2 based simulations, we examine the LRWR protocol by comparing it with DYMO, a widely used WSN protocol, in both static and dynamic environments with varying duty cycles, assuming the standard IEEE 802.15.4 in the lower layers. Results for the three metrics, that is, reliability, end-to-end delay, and energy consumption, show that the LRWR protocol outperforms DYMO in scalability, mobility, and robustness, making it a suitable choice in low-duty-cycle and dense WSNs.
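
    A duty-cycle-aware random walk forwarding step is simple to express. The sketch below is our illustration of the general idea, not the authors' three-message handshake: each hop forwards the packet to a uniformly chosen awake neighbor, retrying when all neighbors are asleep.

        import random

        def next_hop(neighbors, awake):
            # choose uniformly among currently awake neighbors; None means "wait and retry"
            candidates = [n for n in neighbors if awake.get(n, False)]
            return random.choice(candidates) if candidates else None

        def random_walk_route(graph, awake, source, sink, max_steps=1000):
            node, path = source, [source]
            for _ in range(max_steps):
                if node == sink:
                    return path
                hop = next_hop(graph[node], awake)
                if hop is not None:
                    node = hop
                    path.append(hop)
            return None  # no route found within the step budget

        graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # toy topology
        awake = {0: True, 1: True, 2: False, 3: True}           # duty-cycle snapshot
        print(random_walk_route(graph, awake, 0, 3))

    Because each node needs only its neighbors' wake states, such a walk requires no global routing tables, which is what makes the approach lightweight in dense deployments.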

  14. NASA Tech Briefs, October 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics include: Relative-Motion Sensors and Actuators for Two Optical Tables; Improved Position Sensor for Feedback Control of Levitation; Compact Tactile Sensors for Robot Fingers; Improved Ion-Channel Biosensors; Suspended-Patch Antenna With Inverted, EM-Coupled Feed; System Would Predictively Preempt Traffic Lights for Emergency Vehicles; Optical Position Encoders for High or Low Temperatures; Inter-Valence-Subband/Conduction-Band-Transport IR Detectors; Additional Drive Circuitry for Piezoelectric Screw Motors; Software for Use with Optoelectronic Measuring Tool; Coordinating Shared Activities; Software Reduces Radio-Interference Effects in Radar Data; Using Iron to Treat Chlorohydrocarbon-Contaminated Soil; Thermally Insulating, Kinematic Tensioned-Fiber Suspension; Back Actuators for Segmented Mirrors and Other Applications; Mechanism for Self-Reacted Friction Stir Welding; Lightweight Exoskeletons with Controllable Actuators; Miniature Robotic Submarine for Exploring Harsh Environments; Electron-Spin Filters Based on the Rashba Effect; Diffusion-Cooled Tantalum Hot-Electron Bolometer Mixers; Tunable Optical True-Time Delay Devices Would Exploit EIT; Fast Query-Optimized Kernel-Machine Classification; Indentured Parts List Maintenance and Part Assembly Capture Tool - IMPACT; An Architecture for Controlling Multiple Robots; Progress in Fabrication of Rocket Combustion Chambers by VPS; CHEM-Based Self-Deploying Spacecraft Radar Antennas; Scalable Multiprocessor for High-Speed Computing in Space; and Simple Systems for Detecting Spacecraft Meteoroid Punctures.

  15. Scalable architecture for a room temperature solid-state quantum information processor.

    PubMed

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  16. Scalable quantum computing based on stationary spin qubits in coupled quantum dots inside double-sided optical microcavities

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Deng, Fu-Guo

    2014-12-01

    Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.

  17. Scalable quantum computing based on stationary spin qubits in coupled quantum dots inside double-sided optical microcavities.

    PubMed

    Wei, Hai-Rui; Deng, Fu-Guo

    2014-12-18

    Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.

  18. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared-memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
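
    The accuracy gap that motivates FFT-based solvers is easy to reproduce in one dimension: on the same coarse grid, a second-order finite difference sits at roughly 1e-2 error while the spectral derivative is already at machine precision. This toy comparison is our illustration of the general principle, not the paper's Maxwell solver.

        import numpy as np

        n = 128
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        f, exact = np.sin(3 * x), 3 * np.cos(3 * x)
        dx = x[1] - x[0]

        fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)   # 2nd-order centered difference
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # angular wavenumbers
        spec = np.fft.ifft(1j * k * np.fft.fft(f)).real    # spectral (FFT) derivative

        print("finite-difference error:", np.abs(fd - exact).max())    # ~1e-2
        print("spectral error:        ", np.abs(spec - exact).max())   # ~1e-13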

  19. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  20. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
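
    A serial sketch of the recursive inertial bisection step is given below: element centroids are projected onto their principal inertia axis (computed here via SVD) and split at the median, recursively. This is our compact illustration under simplifying assumptions; the paper's partitioner runs this concurrently and follows it with element/node migration.

        import numpy as np

        def inertial_bisection(centroids, ids, depth):
            # split element centroids along the principal axis, recursively,
            # yielding 2**depth balanced parts
            if depth == 0:
                return [ids]
            pts = centroids - centroids.mean(axis=0)
            axis = np.linalg.svd(pts, full_matrices=False)[2][0]  # leading direction
            order = np.argsort(pts @ axis)
            half = len(order) // 2
            lo, hi = order[:half], order[half:]
            return (inertial_bisection(centroids[lo], ids[lo], depth - 1) +
                    inertial_bisection(centroids[hi], ids[hi], depth - 1))

        centroids = np.random.rand(1000, 3)                        # toy element centroids
        parts = inertial_bisection(centroids, np.arange(1000), 3)  # 8 parts of 125
        print([len(p) for p in parts])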

  1. Mechanical characterization of scalable cellulose nano-fiber based composites made using liquid composite molding process

    Treesearch

    Bamdad Barari; Thomas K. Ellingham; Issam I. Ghamhia; Krishna M. Pillai; Rani El-Hajjar; Lih-Sheng Turng; Ronald Sabo

    2016-01-01

    Plant derived cellulose nano-fibers (CNF) are a material with remarkable mechanical properties compared to other natural fibers. However, efforts to produce nano-composites on a large scale using CNF have yet to be investigated. In this study, scalable CNF nano-composites were made from isotropically porous CNF preforms using a freeze drying process. An improvised...

  2. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, beside its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  3. Stigmergy based behavioural coordination for satellite clusters

    NASA Astrophysics Data System (ADS)

    Tripp, Howard; Palmer, Phil

    2010-04-01

    Multi-platform swarm/cluster missions are an attractive prospect for improved science return as they provide a natural capability for temporal, spatial and signal separation with further engineering and economic advantages. As spacecraft numbers increase and/or the round-trip communications delay from Earth lengthens, the traditional "remote-control" approach begins to break down. It is therefore essential to push control into space; to make spacecraft more autonomous. An autonomous group of spacecraft requires coordination, but standard terrestrial paradigms, such as negotiation, require high levels of inter-spacecraft communication, which is nontrivial in space. This article therefore introduces the principles of stigmergy as a novel method for coordinating a cluster. Stigmergy is an agent-based, behavioural approach that allows for infrequent communication, with decisions based on local information. Behaviours are selected dynamically using a genetic algorithm onboard. Supervisors/ground stations occasionally adjust parameters and disseminate a "common environment" that is used for local decisions. After outlining the system, an analysis of some crucial parameters such as communications overhead and number of spacecraft is presented to demonstrate scalability. Further scenarios are considered to demonstrate the natural ability to deal with dynamic situations such as the failure of spacecraft, changing mission objectives and responding to sudden bursts of high-priority tasks.

  4. Wearable Sensors Integrated with Internet of Things for Advancing eHealth Care.

    PubMed

    Bayo-Monton, Jose-Luis; Martinez-Millana, Antonio; Han, Weisi; Fernandez-Llatas, Carlos; Sun, Yan; Traver, Vicente

    2018-06-06

    Health and sociological indicators alert that life expectancy is increasing, and hence so are the years that patients have to live with chronic diseases and co-morbidities. With the advancement of ICT, new tools and paradigms are being explored to provide effective and efficient health care. Telemedicine and health sensors stand as indispensable tools for promoting patient engagement and self-management of diseases, and for assisting doctors in remotely following up with patients. In this paper, we evaluate a rapid prototyping solution for information merging based on five health sensors and two low-cost ubiquitous computing components: Arduino and Raspberry Pi. Our study, which is entirely described with the purpose of reproducibility, aimed to evaluate the extent to which portable technologies are capable of integrating wearable sensors by comparing two deployment scenarios: Raspberry Pi 3 and Personal Computer. The integration is implemented using a choreography engine to transmit data from sensors to a display unit using web services and a simple communication protocol with two modes of data retrieval. Performance of the two set-ups is compared by means of the latency in the wearable data transmission and data loss. The PC has a delay of 0.051 ± 0.0035 s (max = 0.2504 s), whereas the Raspberry Pi yields a delay of 0.0175 ± 0.149 s (max = 0.294 s) for N = 300. Our analysis confirms that portable devices (p << 0.01) are suitable to support the transmission and analysis of biometric signals in scalable telemedicine systems.

  5. On implementing clinical decision support: achieving scalability and maintainability by combining business rules and ontologies.

    PubMed

    Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya

    2006-01-01

    We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule-base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified and an approach to implement these inferences using an OWL based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integration of clinical decision support functionality with the invoking application and the underlying clinical data repository; and their associated trade-offs are discussed and presented.

  6. From fuzzy recurrence plots to scalable recurrence networks of time series

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.

    2017-04-01

    Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
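
    In the crisp (defuzzified) limit, the construction reduces to a few lines: delay-embed the series, threshold pairwise state distances, and zero the diagonal to obtain an undirected, unweighted adjacency matrix. The sketch below covers only this simplified case with illustrative parameters; the paper's fuzzy membership and texture-feature steps are omitted.

        import numpy as np

        def recurrence_network(series, dim=3, delay=1, eps=0.2):
            # delay embedding -> pairwise distances -> thresholded, loop-free adjacency
            n = len(series) - (dim - 1) * delay
            states = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
            dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
            adj = (dists < eps).astype(int)
            np.fill_diagonal(adj, 0)        # remove self-loops (the identity line)
            return adj

        x = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.05 * np.random.randn(400)
        A = recurrence_network(x)
        print("mean degree:", A.sum(axis=1).mean())  # standard network measures follow from A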

  7. Tip-Based Nanofabrication for Scalable Manufacturing

    DOE PAGES

    Hu, Huan; Kim, Hoe; Somnath, Suhas

    2017-03-16

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  8. Tip-Based Nanofabrication for Scalable Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Huan; Kim, Hoe; Somnath, Suhas

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  9. Volume-scalable high-brightness three-dimensional visible light source

    DOEpatents

    Subramania, Ganapathi; Fischer, Arthur J; Wang, George T; Li, Qiming

    2014-02-18

    A volume-scalable, high-brightness, electrically driven visible light source comprises a three-dimensional photonic crystal (3DPC) comprising one or more direct bandgap semiconductors. The improved light emission performance of the invention is achieved based on the enhancement of radiative emission of light emitters placed inside a 3DPC due to the strong modification of the photonic density-of-states engendered by the 3DPC.

  10. Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II

    DTIC Science & Technology

    2011-09-01

    for the Soldier, to ensure mission success while maximizing the survivability and lethality through the synergistic interaction of equipment...based touch interface for gloved finger interactions. This interface had to have larger-than-normal touch-screen buttons for commanding the robot...C.; Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR

  11. On noise and the performance benefit of nonblocking collectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widener, Patrick M.; Levy, Scott; Ferreira, Kurt B.

    Relaxed synchronization offers the potential of maintaining application scalability by allowing many processes to make independent progress when some processes suffer delays. Yet, the benefits of this approach in important parallel workloads have not been investigated in detail. In this paper, we use a validated simulation approach to explore the noise mitigation effects of idealized nonblocking collectives in workloads where these collectives are a major contributor to total execution time. In conclusion, although nonblocking collectives are unlikely to provide significant noise mitigation to applications in the low-OS-noise environments expected in next-generation HPC systems, we show that they can potentially improve application runtime with respect to other noise types.

  12. A Low-Power Instruction Issue Queue for Microprocessors

    NASA Astrophysics Data System (ADS)

    Watanabe, Shingo; Chiyonobu, Akihiro; Sato, Toshinori

    The instruction issue queue is a key component that extracts instruction-level parallelism (ILP) in modern out-of-order microprocessors. In order to exploit ILP for improving processor performance, the instruction queue size should be increased. However, it is difficult to increase the size, since the instruction queue is implemented with a content-addressable memory (CAM), whose power consumption and delay are large. This paper introduces a low-power and scalable instruction queue that replaces the CAM with a RAM. In this queue, instructions are explicitly woken up. Evaluation results show that the proposed instruction queue decreases processor performance by only 1.9% on average. Furthermore, the total energy consumption is reduced by 54% on average.

  13. On noise and the performance benefit of nonblocking collectives

    DOE PAGES

    Widener, Patrick M.; Levy, Scott; Ferreira, Kurt B.; ...

    2015-11-02

    Relaxed synchronization offers the potential of maintaining application scalability by allowing many processes to make independent progress when some processes suffer delays. Yet, the benefits of this approach in important parallel workloads have not been investigated in detail. In this paper, we use a validated simulation approach to explore the noise mitigation effects of idealized nonblocking collectives in workloads where these collectives are a major contributor to total execution time. In conclusion, although nonblocking collectives are unlikely to provide significant noise mitigation to applications in the low-OS-noise environments expected in next-generation HPC systems, we show that they can potentially improve application runtime with respect to other noise types.

  14. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleary, A J; Smith, S G; Vassilevska, T K

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  15. Albany/FELIX: A parallel, scalable and robust, finite element, first-order Stokes approximation ice sheet solver built for advanced analysis

    DOE PAGES

    Tezaur, I. K.; Perego, M.; Salinger, A. G.; ...

    2015-04-27

    This paper describes a new parallel, scalable and robust finite element based solver for the first-order Stokes momentum balance equations for ice flow. The solver, known as Albany/FELIX, is constructed using the component-based approach to building application codes, in which mature, modular libraries developed as a part of the Trilinos project are combined using abstract interfaces and template-based generic programming, resulting in a final code with access to dozens of algorithmic and advanced analysis capabilities. Following an overview of the relevant partial differential equations and boundary conditions, the numerical methods chosen to discretize the ice flow equations are described, along with their implementation. The results of several verification studies of the model accuracy are presented using (1) new test cases for simplified two-dimensional (2-D) versions of the governing equations derived using the method of manufactured solutions, and (2) canonical ice sheet modeling benchmarks. Model accuracy and convergence with respect to mesh resolution are then studied on problems involving a realistic Greenland ice sheet geometry discretized using hexahedral and tetrahedral meshes. Also explored as a part of this study is the effect of vertical mesh resolution on the solution accuracy and solver performance. The robustness and scalability of our solver on these problems is demonstrated. Lastly, we show that good scalability can be achieved by preconditioning the iterative linear solver using a new algebraic multilevel preconditioner, constructed based on the idea of semi-coarsening.

  16. Variable-delay Polarization Modulators for the CLASS Telescope

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Marriage, T.; Mehrle, N.; Miller, A. D.; Miller, N.; Mirel, P.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    The challenges of measuring faint polarized signals at microwave wavelengths have motivated the development of rapid polarization modulators. One scalable technique, called a Variable-delay Polarization Modulator (VPM), consists of a stationary wire array in front of a movable mirror. The mirror motion creates a changing phase difference between the polarization modes parallel and orthogonal to the wire array. The Cosmology Large Angular Scale Surveyor (CLASS) will use a VPM as the first optical element in a telescope array that will search for the signature of inflation through the “B-mode” pattern in the polarization of the cosmic microwave background. In the CLASS VPMs, parallel transport of the mirror is maintained by a voice-coil actuated flexure system which will translate the mirror in a repeatable manner while holding tight parallelism constraints with respect to the wire array. The wire array will use 51 μm diameter copper-plated tungsten wire with 160 μm pitch over a 60 cm clear aperture. We present the status of the construction and testing of the mirror transport mechanism and wire arrays for the CLASS VPMs.

  17. The Quantum Socket: Wiring for Superconducting Qubits - Part 3

    NASA Astrophysics Data System (ADS)

    Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    The implementation of a quantum computer requires quantum error correction codes, which allow errors occurring on physical quantum bits (qubits) to be corrected. Ensembles of physical qubits will be grouped to form logical qubits with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits. Thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics is another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware to superconducting qubit hardware, where both are operated at millikelvin temperatures.

  18. NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity.

    PubMed

    Al-Awami, Ali K; Beyer, Johanna; Strobelt, Hendrik; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus

    2014-12-01

    We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.

  19. Self-Organizing Distributed Architecture Supporting Dynamic Space Expanding and Reducing in Indoor LBS Environment

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2015-01-01

    Indoor location-based services (iLBS) are extremely dynamic and changeable, and include numerous resources and mobile devices. In particular, the network infrastructure requires support for high scalability in the indoor environment, and various resource lookups are requested concurrently and frequently from several locations based on the dynamic network environment. A traditional map-based centralized approach for iLBSs has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can also cause low scalability and traffic congestion; and it is hard to adapt to a change of service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBSs. The proposed self-organizing distributed platform provides a dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. In order to verify the suggested platform, scalability performance according to the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation similar to the real environment. PMID:26016908

  20. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
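
    The core trick, drawing a correlated Gaussian field by solving a reaction-diffusion PDE with white-noise forcing, can be sketched compactly. The version below uses a periodic spectral solve on a structured grid purely for brevity; the paper instead uses a mixed finite element discretization with projection onto the unstructured application mesh. Grid size and kappa below are illustrative.

        import numpy as np

        def sample_gaussian_field(n=64, kappa=5.0, seed=0):
            # solve (kappa^2 - Laplacian) u = white noise on an n x n periodic grid;
            # larger kappa gives a shorter correlation length (~ 1/kappa)
            rng = np.random.default_rng(seed)
            w_hat = np.fft.fft2(rng.standard_normal((n, n)))
            freq = 2 * np.pi * np.fft.fftfreq(n)
            k2 = freq[:, None] ** 2 + freq[None, :] ** 2
            return np.fft.ifft2(w_hat / (kappa ** 2 + k2)).real

        field = sample_gaussian_field()
        print(field.shape, round(field.std(), 4))   # one realization per MLMC sample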

  1. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.

  2. LoRa Scalability: A Simulation Model Based on Interference Measurements

    PubMed Central

    Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen

    2017-01-01

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data. PMID:28545239
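
    The pure-Aloha baseline the authors compare against is straightforward to estimate by Monte Carlo: a packet is lost whenever another node's transmission overlaps its airtime. The airtime, reporting period, and node counts below are illustrative stand-ins, not the paper's measured LoRa parameters (which also model capture effects that let one of two colliding packets survive).

        import random

        def aloha_loss(num_nodes, airtime=0.4, period=600.0, trials=5000):
            # fraction of reference packets overlapped by at least one other transmission
            lost = 0
            for _ in range(trials):
                starts = [random.uniform(0, period) for _ in range(num_nodes)]
                if any(abs(s - starts[0]) < airtime for s in starts[1:]):
                    lost += 1
            return lost / trials

        for n in (100, 500, 1000):
            print(n, "nodes -> loss ~", round(aloha_loss(n), 2))  # losses grow quickly with n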

  3. LoRa Scalability: A Simulation Model Based on Interference Measurements.

    PubMed

    Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen

    2017-05-23

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.

  4. Dynamic full-scalability conversion in scalable video coding

    NASA Astrophysics Data System (ADS)

    Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man

    2007-02-01

    For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. However, current SVC support for dynamic video conversion with scalability is insufficient, so the adaptation of bitrate to a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding and decoding schemes. At the encoder, we insert IDR NAL units periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction; real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verify dynamic full-scalability conversion and show that it is necessary under time-varying network conditions.

  5. Energy efficient mechanisms for high-performance Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Alsaify, Baha'adnan

    2009-12-01

    Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices became possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered in the area to monitor. Researchers focus on several aspects of WSNs. Such aspects include the quality of service the WSNs provide (data delivery delay, accuracy of data, etc.), the scalability of the network to contain thousands of sensor nodes (the terms node and sensor node are being used interchangeably), the robustness of the network (allowing the network to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices that are scattered in an area for sensor networks. This work uses the well-known approach of wake-up scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay the reported data will experience, while at the same time keeping the advantages offered by wake-up scheduling -- the reduction in energy consumption, which leads to an increase in the network's lifetime. The wake-up schedule is based on the location of the node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are being used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We test the performance of this approach against three other approaches -- the Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction of the consumed energy.

  6. Scalability of transport parameters with pore sizes in isodense disordered media

    NASA Astrophysics Data System (ADS)

    Reginald, S. William; Schmitt, V.; Vallée, R. A. L.

    2014-09-01

    We study light multiple scattering in complex disordered porous materials. High-internal-phase-emulsion-based isodense polystyrene foams are designed. Two types of samples, exhibiting different pore size distributions, are investigated for different slab thicknesses varying from L = 1 mm to 10 mm. Optical measurements combining steady-state and time-resolved detection are used to characterize the photon transport parameters. Very interestingly, a clear scalability of the transport mean free path ℓ_t with the average size of the pores S is observed, featuring a constant velocity of the transport energy in these isodense structures. This study strongly motivates further investigations into the limits of validity of this scalability as the scattering strength of the system increases.

  7. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    NASA Astrophysics Data System (ADS)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.

  8. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g., MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g., ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
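
    To make the de Bruijn graph construction concrete, a minimal single-machine sketch is given below (illustrative only, not GiGA's distributed code; in a MapReduce setting the inner loop would be the map step and the grouping by prefix the shuffle):

      from collections import defaultdict

      def debruijn_edges(reads, k=5):
          """Each k-mer contributes an edge from its (k-1)-mer prefix
          to its (k-1)-mer suffix."""
          graph = defaultdict(list)
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])
          return graph

      print(dict(debruijn_edges(["ACGTACGTGACG"], k=5)))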

  9. : A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  10. Autocorrel I: A Neural Network Based Network Event Correlation Approach

    DTIC Science & Technology

    2005-05-01

    which concern any component of the network. 2.1.1 Existing Intrusion Detection Systems EMERALD [8] is a distributed, scalable, hierarchical, customizable...writing this paper, the updaters of this system had not released their correlation unit to the public. EMERALD explicitly divides statistical analysis... EMERALD, NetSTAT is scalable and composable. QuidSCOR [12] is an open-source IDS, though it requires a subscription from its publisher, Qualys Inc

  11. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    NASA Astrophysics Data System (ADS)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  12. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
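
    The SPDE-based sampling alluded to above typically rests on the Whittle-Matérn link (a standard formulation; the generic exponent \alpha below is not a parameter choice taken from this paper):

      (\kappa^2 - \Delta)^{\alpha/2} \, u(x) = \mathcal{W}(x),

    where \mathcal{W} is spatial white noise: each solve against an independent realization of \mathcal{W} yields an independent sample of a Gaussian field u with Matérn-type covariance, so the cost of sampling scales with that of the underlying elliptic solver.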

  14. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid-engine-powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical-condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible beyond 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are avoided.

  15. Processing Diabetes Mellitus Composite Events in MAGPIE.

    PubMed

    Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael

    2016-02-01

    The focus of this research is on the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions happening among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS, doctors can personalize monitoring rules for each patient, and the rules can be defined graphically. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed on a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a defined set of monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system, and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
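
    As an illustration of the kind of temporal-pattern rule such a system evaluates on the patient's own device (a hypothetical rule and thresholds, not MAGPIE's actual rule language):

      from datetime import timedelta

      WINDOW = timedelta(hours=2)      # assumed observation window
      THRESHOLD_MG_DL = 180            # assumed per-patient threshold

      def sustained_hyperglycemia(readings):
          """readings: time-ordered list of (timestamp, glucose mg/dL).
          Flag window starts where at least 3 readings within WINDOW
          all exceed the threshold."""
          alerts = []
          for t_start, _ in readings:
              window = [g for t, g in readings
                        if t_start <= t <= t_start + WINDOW]
              if len(window) >= 3 and all(g > THRESHOLD_MG_DL for g in window):
                  alerts.append(t_start)
          return alerts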

  16. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden on data centers and mobile networks induced by the rapid growth of the Internet of Things (IoT). This work introduces a cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and the limited communication resources of wireless access points (APs), the computation offloading decisions of IoT sensors interact with one another. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem is formulated as a computation offloading game, and the condition for Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm scales well with an increasing number of IoT sensors.
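
    The finite improvement property mentioned above guarantees that best-response updates converge to a Nash equilibrium in a potential game; the toy loop below illustrates this with an assumed congestion-style cost model (not the paper's COD algorithm or cost function):

      import random

      N = 6  # assumed number of IoT sensors

      def cost(decision, others_offloading):
          """0 = compute locally (fixed cost); 1 = offload, with cost
          growing as more sensors congest the shared cloudlet/AP."""
          return 1.0 if decision == 0 else 0.4 + 0.15 * (others_offloading + 1)

      decisions = [random.randint(0, 1) for _ in range(N)]
      improved = True
      while improved:  # finite improvement property: this loop terminates
          improved = False
          for i in range(N):
              others = sum(decisions) - decisions[i]
              best = min((0, 1), key=lambda d: cost(d, others))
              if cost(best, others) < cost(decisions[i], others):
                  decisions[i], improved = best, True
      print("equilibrium decisions:", decisions)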

  17. Thermal stability increase in metallic nanoparticles-loaded cellulose nanocrystal nanocomposites.

    PubMed

    Goikuria, U; Larrañaga, A; Vilas, J L; Lizundia, E

    2017-09-01

    Due to the potential of CNC-based flexible materials for novel industrial applications, the aim of this work is to improve the thermal stability of cellulose nanocrystal (CNC) films through a straightforward and scalable method. Based on a nanocomposite approach, five different metallic nanoparticles (ZnO, SiO2, TiO2, Al2O3 and Fe2O3) have been co-assembled in water with CNCs to obtain free-standing nanocomposite films. Thermogravimetric analysis (TGA) reveals an increased thermal stability upon nanoparticle incorporation. This increase in thermal stability reaches a maximum of 75°C for the nanocomposites containing 10 wt% of Fe2O3 and ZnO. The activation energies of the thermodegradation process (Ea), determined according to the Kissinger and Ozawa-Flynn-Wall methods, further confirm the delayed degradation of CNC nanocomposites upon heating. Finally, the changes induced in the crystalline structure during thermodegradation were followed by wide-angle X-ray diffraction (WAXD). It is also observed that thermal degradation proceeds at higher temperatures for nanocomposites containing metallic nanoparticles. Overall, the experimental findings presented here make the nanocomposite approach a simple, low-cost, environmentally friendly strategy to overcome the relatively poor thermal stability of CNCs extracted via sulfuric-acid-assisted hydrolysis of cellulose. Copyright © 2017 Elsevier Ltd. All rights reserved.
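
    For reference, the Kissinger method extracts the activation energy from the shift of the peak degradation temperature T_p with heating rate \beta (the standard form of the method, not data from this study):

      \ln\!\left(\frac{\beta}{T_p^{2}}\right) = \ln\!\left(\frac{A R}{E_a}\right) - \frac{E_a}{R \, T_p},

    so plotting \ln(\beta/T_p^2) against 1/T_p for several heating rates yields a straight line of slope -E_a/R.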

  18. Improving P2P live-content delivery using SVC

    NASA Astrophysics Data System (ADS)

    Schierl, T.; Sánchez, Y.; Hellge, C.; Wiegand, T.

    2010-07-01

    P2P content delivery techniques for video transmission have attracted great interest in recent years. By involving clients in the delivery process, P2P approaches can significantly reduce the load and cost on servers, especially for popular services. However, previous studies have already pointed out the unreliability of P2P-based live streaming approaches due to peer churn, where peers may ungracefully leave the P2P infrastructure, typically an overlay network. Peers ungracefully leaving the system cause connection losses in the overlay, which require repair operations. During such repair operations, which typically take a few round-trip times, no data is received from the lost connection. Since low delay for fast channel tune-in is a key feature of broadcast-like streaming applications, a P2P live streaming approach can only rely on a limited media pre-buffer during such repair operations. In this paper, multi-tree-based Application Layer Multicast is considered as a P2P overlay technique for live streaming. The use of Flow Forwarding (FF), a.k.a. retransmission, or Forward Error Correction (FEC) in combination with Scalable Video Coding (SVC) for concealment during overlay repair operations is shown. Furthermore, the benefits of using SVC over AVC single-layer transmission are presented.

  19. Sink-oriented Dynamic Location Service Protocol for Mobile Sinks with an Energy Efficient Grid-Based Approach.

    PubMed

    Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung

    2009-01-01

    Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacity, and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are therefore of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and to disseminate information from multi-source nodes to multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing across all the sensor nodes. We then propose a Location-based Shortest Relay (LSR) scheme that efficiently forwards (or relays) data from a source node to a sink along a minimal-delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.

  20. "Is supervision necessary? Examining the effects of Internet-based CBT training with and without supervision": Correction to Rakovshik et al. (2016).

    PubMed

    2016-12-01

    Reports an error in "Is supervision necessary? Examining the effects of internet-based CBT training with and without supervision" by Sarah G. Rakovshik, Freda McManus, Maria Vazquez-Montes, Kate Muse and Dennis Ougrin (Journal of Consulting and Clinical Psychology, 2016[Mar], Vol 84[3], 191-199). In the article, the department and affiliation were misspelled for author Kate Muse. The department and affiliation should have read Psychology Department, University of Worcester. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-03513-001.) Objective: To investigate the effect of Internet-based training (IBT), with and without supervision, on therapists' (N = 61) cognitive-behavioral therapy (CBT) skills in routine clinical practice. Participants were randomized into 3 conditions: (1) Internet-based training with use of a consultation worksheet (IBT-CW); (2) Internet-based training with CBT supervision via Skype (IBT-S); and (3) "delayed-training" controls (DTs), who did not receive the training until all data collection was completed. The IBT participants received access to training over a period of 3 months. CBT skills were evaluated at pre-, mid- and posttraining/wait using assessor competence ratings of recorded therapy sessions. Hierarchical linear analysis revealed that the IBT-S participants had significantly greater CBT competence at posttraining than did IBT-CW and DT participants at both the mid- and posttraining/wait assessment points. There were no significant differences between IBT-CW and the delayed (no)-training DTs. IBT programs that include supervision may be a scalable and effective method of disseminating CBT into routine clinical practice, particularly for populations without ready access to more traditional "live" methods of training. There was no evidence for a significant effect of IBT without supervision over a nontraining control, suggesting that merely providing access to IBT programs may not be an effective method of disseminating CBT to routine clinical practice. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tools of the Process Informatics Model (PrIMe) provide a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. The approach handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.

  2. Scalable Self-Propagating High-Temperature Synthesis of Graphene for Supercapacitors with Superior Power Density and Cyclic Stability.

    PubMed

    Li, Chen; Zhang, Xiong; Wang, Kai; Sun, Xianzhong; Liu, Guanghua; Li, Jiangtao; Tian, Huanfang; Li, Jianqi; Ma, Yanwei

    2017-02-01

    An ultrafast self-propagating high-temperature synthesis technique offers scalable routes for the fabrication of mesoporous graphene directly from CO2. Due to the excellent electrical conductivity and high ion-accessible surface area, supercapacitor electrodes based on the obtained graphene exhibit superior energy and power performance. The capacitance retention is higher than 90% after one million charge/discharge cycles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Scalable fabrication of strongly textured organic semiconductor micropatterns by capillary force lithography.

    PubMed

    Jo, Pil Sung; Vailionis, Arturas; Park, Young Min; Salleo, Alberto

    2012-06-26

    Strongly textured organic semiconductor micropatterns made of the small molecule dioctylbenzothienobenzothiophene (C(8)-BTBT) are fabricated by using a method based on capillary force lithography (CFL). This technique provides the C(8)-BTBT solution with nucleation sites for directional growth, and can be used as a scalable way to produce high quality crystalline arrays in desired regions of a substrate for OFET applications. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
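
    A minimal dense sketch of the selected-inversion idea described above (illustrative only -- the production algorithm operates on sparse matrices with nearest-neighbor communication, and the index windows below merely stand in for localization regions):

      import numpy as np

      def selected_inverse_entries(S, centers, radius):
          """Approximate (i, i) entries of inv(S) by inverting, for each
          center i, the principal submatrix of S restricted to indices
          within `radius` of i."""
          n = S.shape[0]
          approx = {}
          for i in centers:
              idx = [j for j in range(n) if abs(j - i) <= radius]
              local = np.linalg.inv(S[np.ix_(idx, idx)])
              approx[i] = local[idx.index(i), idx.index(i)]
          return approx

      # Example: a well-conditioned banded overlap-like matrix.
      n = 50
      S = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
      print(selected_inverse_entries(S, centers=[10, 25, 40], radius=6))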

  5. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain (see the sketch below).
    • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
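
    A toy sketch of the global particle find step under a uniform decomposition (the uniform 1D grid is an assumption for illustration; the actual algorithm resolves ownership for general constructive solid geometry domains):

      # Map each particle coordinate to the rank owning that subdomain;
      # off-rank particles are then bucketed for an MPI-style exchange.
      DOMAIN_MIN, DOMAIN_MAX, NUM_RANKS = 0.0, 100.0, 8

      def owning_rank(x: float) -> int:
          width = (DOMAIN_MAX - DOMAIN_MIN) / NUM_RANKS
          return min(int((x - DOMAIN_MIN) / width), NUM_RANKS - 1)

      for x in [3.2, 47.9, 99.99, 62.5]:
          print(f"particle at {x} belongs to rank {owning_rank(x)}")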

  6. Gossip-Based Broadcast

    NASA Astrophysics Data System (ADS)

    Leitão, João; Pereira, José; Rodrigues, Luís

    Gossip, or epidemic, protocols have emerged as a powerful strategy to implement highly scalable and resilient reliable broadcast primitives on large scale peer-to-peer networks. Epidemic protocols are scalable because they distribute the load among all nodes in the system and resilient because they have an intrinsic level of redundancy that masks node and network failures. This chapter provides an introduction to gossip-based broadcast on large-scale unstructured peer-to-peer overlay networks: it surveys the main results in the field, discusses techniques to build and maintain the overlays that support efficient dissemination strategies, and provides an in-depth discussion and experimental evaluation of two concrete protocols, named HyParView and Plumtree.
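
    A minimal push-gossip dissemination loop, to fix ideas (illustrative only; HyParView and Plumtree add partial views and tree optimization on top of this basic epidemic mechanism):

      import random

      FANOUT = 3  # assumed number of peers contacted per round

      def gossip_round(infected, all_nodes):
          """Every node holding the message pushes it to FANOUT random peers."""
          newly = set()
          for node in infected:
              peers = random.sample([n for n in all_nodes if n != node], FANOUT)
              newly.update(peers)
          return infected | newly

      nodes, have_msg, rounds = set(range(100)), {0}, 0
      while have_msg != nodes and rounds < 20:
          have_msg, rounds = gossip_round(have_msg, nodes), rounds + 1
      print(f"reached {len(have_msg)}/100 nodes in {rounds} rounds")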

  7. Blueprint for a microwave trapped ion quantum computer.

    PubMed

    Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G; Mølmer, Klaus; Devitt, Simon J; Wunderlich, Christof; Hensinger, Winfried K

    2017-02-01

    The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion-based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation-based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error-threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects.

  8. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure that supports high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. In order to verify the suggested platform, scalability performance with increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and the traffic load ratio in the proposed tracking architecture were also evaluated.

  9. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h^-1 m^-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.

  11. Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications

    PubMed Central

    Xu, Jingjiang; Wei, Wei; Song, Shaozhen; Qi, Xiaoli; Wang, Ruikang K.

    2016-01-01

    Recent advances in optical coherence tomography (OCT)-based angiography have demonstrated a variety of biomedical applications in the diagnosis and therapeutic monitoring of diseases with vascular involvement. While promising, its imaging field of view (FOV) is still limited (typically less than 9 mm^2), which slows its clinical acceptance. In this paper, we report a high-speed spectral-domain OCT system operating at 1310 nm that enables a wide FOV of up to 750 mm^2. Using the optical microangiography (OMAG) algorithm, we are able to map vascular networks within living biological tissues. Thanks to a 2,048-pixel line-scan InGaAs camera operating at a 147 kHz scan rate, the system delivers a ranging depth of ~7.5 mm and provides wide-field OCT-based angiography in a single data acquisition. We implement two imaging modes (i.e., wide-field mode and high-resolution mode) in the OCT system, which gives a highly scalable FOV with flexible lateral resolution. We demonstrate scalable wide-field vascular imaging of multiple fingernail beds in humans and of the whole brain in mice with the skull left intact in a single 3D scan, promising new opportunities for wide-field OCT-based angiography in many clinical applications. PMID:27231630

  12. Scalable Fabrication of Photochemically Reduced Graphene-Based Monolithic Micro-Supercapacitors with Superior Energy and Power Densities.

    PubMed

    Wang, Sen; Wu, Zhong-Shuai; Zheng, Shuanghao; Zhou, Feng; Sun, Chenglin; Cheng, Hui-Ming; Bao, Xinhe

    2017-04-25

    Micro-supercapacitors (MSCs) hold great promise as highly competitive miniaturized power sources satisfying the increased demand of smart integrated electronics. However, single-step scalable fabrication of MSCs with both high energy and power densities is still challenging. Here we demonstrate the scalable fabrication of graphene-based monolithic MSCs with diverse planar geometries and capable of superior integration by photochemical reduction of graphene oxide/TiO2 nanoparticle hybrid films. The resulting MSCs exhibit a high volumetric capacitance of 233.0 F cm^-3, exceptional flexibility, and a remarkable capacity for modular serial and parallel integration in aqueous gel electrolyte. Furthermore, by precisely engineering the interface of the electrode with the electrolyte, these monolithic MSCs can operate well in a hydrophobic ionic liquid electrolyte (3.0 V) at a high scan rate of 200 V s^-1, two orders of magnitude higher than those of conventional supercapacitors. More notably, the MSCs show a landmark volumetric power density of 312 W cm^-3 and energy density of 7.7 mWh cm^-3, both of which are among the highest values attained for carbon-based MSCs. Therefore, such monolithic MSC devices based on photochemically reduced, compact graphene films possess enormous potential for numerous miniaturized, flexible electronic applications.

  13. Enhancement of Beaconless Location-Based Routing with Signal Strength Assistance for Ad-Hoc Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guowei; Itoh, Kenichi; Sato, Takuro

    Routing in Ad-hoc networks is unreliable due to the mobility of the nodes. Location-based routing protocols, unlike other protocols which rely on flooding, excel in network scalability. Furthermore, newer location-based routing protocols such as BLR [1], IGF [2], and CBF [3] have been proposed which do not require beacons at the MAC layer, improving scalability further. Such beaconless routing protocols can work efficiently in dense network areas. However, their algorithms have no ability to avoid routing into sparse areas. In this article, historical signal strength is added as a factor to the BLR algorithm, which avoids routing into sparse areas and consequently improves global routing efficiency.
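
    A toy scoring function in the spirit of this enhancement (the weighting and the way signal strength enters are assumptions for illustration, not the paper's formula):

      import math

      W_PROGRESS, W_SIGNAL = 0.7, 0.3  # assumed weights

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def forwarding_score(me, cand, dest, signal_history):
          """Weight geographic progress toward the destination by the
          candidate's historical signal strength (in [0, 1]), penalizing
          next hops that lead into sparse or weakly connected regions."""
          progress = dist(me, dest) - dist(cand, dest)
          avg_signal = sum(signal_history) / len(signal_history)
          return progress * (W_PROGRESS + W_SIGNAL * avg_signal)

      # The contention winner is the neighbor with the highest score.
      print(forwarding_score((0, 0), (3, 1), (10, 0), [0.8, 0.6, 0.9]))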

  14. A Multibody Formulation for Three Dimensional Brick Finite Element Based Parallel and Scalable Rotor Dynamic Analysis

    DTIC Science & Technology

    2010-05-01

    connections near the hub end, and containing up to 0.48 million degrees of freedom. The models are analyzed for scalability and timing for hover and...will enable the modeling of critical couplings that occur in hingeless and bearingless hubs with advanced flex structures. Second, it will enable the

  15. Fully programmable and scalable optical switching fabric for petabyte data center.

    PubMed

    Zhu, Zhonghua; Zhong, Shan; Chen, Li; Chen, Kai

    2015-02-09

    We present a converged EPS and OCS switching fabric for data center networks (DCNs), based on a distributed optical switching architecture leveraging both WDM and SDM technologies. The architecture is topology adaptive and well suited to dynamic and diverse *-cast traffic patterns. Compared to a typical folded-Clos network, the new architecture is more readily scalable to future multi-Petabyte data centers with 1000+ racks while providing higher link bandwidth, reducing transceiver count by 50%, and improving cabling efficiency by more than 90%.

  16. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discoveries in Earth and ecological sciences as well as for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
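
    As a sketch of the tile-based indexing idea (the tile size and key scheme are assumptions for illustration; in the framework the tile key drives data placement and the parallel shuffle so that nearby points land in the same partition):

      TILE_SIZE = 500.0  # metres, assumed

      def tile_key(x, y):
          """Map a LiDAR point's planar coordinates to its tile's integer key."""
          return (int(x // TILE_SIZE), int(y // TILE_SIZE))

      for x, y, z in [(1023.4, 250.1, 87.2), (1490.0, 610.3, 92.8)]:
          print(f"point ({x}, {y}, {z}) -> tile {tile_key(x, y)}")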

  17. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
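
    For reference, the inexact Newton method mentioned above solves each linearized system only approximately, to a forcing tolerance (the standard condition, not a detail specific to this simulator):

      \| F(x_k) + J(x_k) \, s_k \| \le \eta_k \, \| F(x_k) \|, \qquad x_{k+1} = x_k + s_k,

    where \eta_k \in [0, 1) controls how accurately the Jacobian system is solved at each Newton step, trading inner-solver work against outer-iteration count.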

  18. Towards Scalable Entangled Photon Sources with Self-Assembled InAs/GaAs Quantum Dots

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Gong, Ming; Guo, G.-C.; He, Lixin

    2015-08-01

    The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for realizing deterministic entangled photon-pair sources, which are essential to quantum information science. The entangled photon pairs have recently been generated in experiments after eliminating the fine-structure splitting (FSS) of excitons using a number of different methods. Thus far, however, QD-based sources of entangled photons have not been scalable because the wavelengths of QDs differ from dot to dot. Here, we propose a wavelength-tunable entangled photon emitter mounted on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, thereby enabling photon entanglement between dissimilar QDs. We confirm these results via atomistic pseudopotential calculations. This provides a first step towards future realization of scalable entangled photon generators for quantum information applications.

  19. The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters

    NASA Technical Reports Server (NTRS)

    Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)

    1998-01-01

    We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks remains expensive.

  20. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system.

    PubMed

    Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola

    2014-02-10

    We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in port-count-independent optical switch reconfiguration time. An RF tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, or network resources, which further improves latency within a simple structure. Dynamic switching including multicast operation is validated for a 4 × 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system can handle an input load of up to 0.5, providing a packet loss lower than 10^-5 and an average latency of less than 500 ns when a buffer size of 16 packets is employed. An investigation of scalability also indicates that the proposed system could potentially scale to a large port count with limited power penalty.

  1. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
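
    For background, one step of the conventional serial KMC loop is sketched below; SPKMC's contribution is to run such steps concurrently in spatial subdomains synchronized to a common time window (the rates in the example are arbitrary):

      import math, random

      def kmc_step(rates):
          """Select an event with probability proportional to its rate and
          advance time. Note dt shrinks as the total rate (and hence the
          system size) grows -- the scalability bottleneck of serial KMC."""
          total = sum(rates)
          r, acc = random.uniform(0.0, total), 0.0
          for event, rate in enumerate(rates):
              acc += rate
              if r <= acc:
                  break
          dt = -math.log(1.0 - random.random()) / total
          return event, dt

      event, dt = kmc_step([0.5, 1.2, 0.3, 2.0])
      print(f"fired event {event}, advanced time by {dt:.4f}")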

  2. Statistical exchange-coupling errors and the practicality of scalable silicon donor qubits

    NASA Astrophysics Data System (ADS)

    Song, Yang; Das Sarma, S.

    2016-12-01

    Recent experimental efforts have led to considerable interest in donor-based localized electron spins in Si as viable qubits for a scalable silicon quantum computer. With the use of isotopically purified 28Si and the realization of extremely long spin coherence times in single-donor electrons, the recent experimental focus is on two coupled donors, with the eventual goal of a scaled-up quantum circuit. Motivated by this development, we simulate the statistical distribution of the exchange coupling J between a pair of donors under realistic donor placement straggles, and quantify the errors relative to the intended J value. With J values in a broad range of donor-pair separations (5 < |R| < 60 nm), we work out various cases systematically, for a target donor separation R0 along the [001], [110] and [111] Si crystallographic directions, with |R0| = 10, 20, or 30 nm and standard deviation σ_R = 1, 2, 5, or 10 nm. Our extensive theoretical results demonstrate the great challenge for a prescribed J gate even with just a donor pair, a first step for any scalable Si-donor-based quantum computer.

  3. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  4. Facial Expression Presentation for Real-Time Internet Communication

    NASA Astrophysics Data System (ADS)

    Dugarry, Alexandre; Berrada, Aida; Fu, Shan

    2003-01-01

    Text, voice and video images are the most common forms of media content for instant communication on the Internet. Studies have shown that facial expressions convey much richer information than text and voice during a face-to-face conversation. The currently available real-time means of communication (instant text messages, chat programs and videoconferencing), however, have major drawbacks in terms of exchanging facial expressions. The first two do not involve image transmission, whilst videoconferencing requires a large bandwidth that is not always available, and the transmitted image sequence is neither smooth nor free of delay. The objective of the work presented here is to develop a technique that overcomes these limitations by extracting the facial expression of speakers, enabling real-time communication. In order to capture the facial expressions, the main characteristics of the image are emphasized. Interpolation is performed on previously detected edge points to create geometric shapes such as arcs, lines, etc. The regional dominant colours of the pictures are also extracted, and the combined results are subsequently converted into Scalable Vector Graphics (SVG) format. The application based on the proposed technique aims at being used simultaneously with chat programs and being able to run on any platform.

  5. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...

    2017-04-24

    Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures when compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed in a local manner, further scalability is enabled, not only due to the intrinsically lower cost of recovering locally, but also due to derived effects when using some application types. In this paper we model one such effect, namely multiple failure masking, that manifests when running Stencil parallel computations in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple failures recovered locally is modeled to enable several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  7. Fuzzy-Arden-Syntax-based, Vendor-agnostic, Scalable Clinical Decision Support and Monitoring Platform.

    PubMed

    Adlassnig, Klaus-Peter; Fehre, Karsten; Rappelsberger, Andrea

    2015-01-01

    This study's objective is to develop and use a scalable genuine technology platform for clinical decision support based on Arden Syntax, which was extended by fuzzy set theory and fuzzy logic. Arden Syntax is a widely recognized formal language for representing clinical and scientific knowledge in an executable format, and is maintained by Health Level Seven (HL7) International and approved by the American National Standards Institute (ANSI). Fuzzy set theory and logic permit the representation of knowledge and automated reasoning under linguistic and propositional uncertainty. These forms of uncertainty are a common feature of patients' medical data, the body of medical knowledge, and deductive clinical reasoning.
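
    To illustrate the fuzzy extension in spirit (a generic fuzzy-membership sketch, not Fuzzy Arden Syntax itself, whose constructs are declarative):

      def fever_degree(temp_c):
          """Fuzzy set "fever" over body temperature: membership ramps
          linearly from 0 at 37.0 C to 1 at 38.5 C (assumed breakpoints)."""
          if temp_c <= 37.0:
              return 0.0
          if temp_c >= 38.5:
              return 1.0
          return (temp_c - 37.0) / (38.5 - 37.0)

      for t in (36.8, 37.6, 38.2, 39.0):
          print(f"{t:.1f} C -> fever membership {fever_degree(t):.2f}")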

  8. Lactic acid as an invaluable green solvent for ultrasound-assisted scalable synthesis of pyrrole derivatives.

    PubMed

    Wang, Shi-Fan; Guo, Chao-Lun; Cui, Ke-Ke; Zhu, Yan-Ting; Ding, Jun-Xiong; Zou, Xin-Yue; Li, Yi-Hang

    2015-09-01

    Lactic acid has been used as a bio-based green solvent for ultrasound-assisted scale-up synthesis. We report here, for the first time, a novel and scalable process for the synthesis of pyrrole derivatives in lactic acid under ultrasonic irradiation. Eighteen pyrrole derivatives were synthesized in lactic acid under ultrasonic irradiation and characterized by (1)H NMR, IR, and ESI-MS. The results show that, under ultrasonic irradiation, lactic acid can overcome the scale-up challenges and exhibits many advantages, such as bio-based origin, shorter reaction times, lower volatility, higher yields, and ease of product isolation.

  9. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and the high frequency of job requests in the consumer market make supporting such workloads challenging. Current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  10. NYU3T: teaching, technology, teamwork: a model for interprofessional education scalability and sustainability.

    PubMed

    Djukic, Maja; Fulmer, Terry; Adams, Jennifer G; Lee, Sabrina; Triola, Marc M

    2012-09-01

    Interprofessional education is a critical precursor to effective teamwork and the collaboration of health care professionals in clinical settings. Numerous barriers have been identified that preclude scalable and sustainable interprofessional education (IPE) efforts. This article describes NYU3T: Teaching, Technology, Teamwork, a model that uses novel technologies such as Web-based learning, virtual patients, and high-fidelity simulation to overcome some of the common barriers and drive implementation of evidence-based teamwork curricula. It outlines the program's curricular components, implementation strategy, evaluation methods, and lessons learned from the first year of delivery and describes implications for future large-scale IPE initiatives.

  11. Novel Bioreactor Platform for Scalable Cardiomyogenic Differentiation from Pluripotent Stem Cell-Derived Embryoid Bodies.

    PubMed

    Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras

    2016-01-01

    Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large amounts of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large amounts of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct the EB formation and their subsequent cardiomyogenic lineage differentiation.

  12. A scalable correlator for multichannel diffuse correlation spectroscopy.

    PubMed

    Stapels, Christopher J; Kolodziejski, Noah J; McAdams, Daniel; Podolsky, Matthew J; Fernandez, Daniel E; Farkas, Dana; Christian, James F

    2016-02-01

    Diffuse correlation spectroscopy (DCS) is a technique which enables powerful and robust non-invasive optical studies of tissue micro-circulation and vascular blood flow. The technique amounts to autocorrelation analysis of coherent photons after their migration through moving scatterers and subsequent collection by single-mode optical fibers. A primary cost driver of DCS instruments is the commercial hardware-based correlator, limiting the proliferation of multi-channel instruments for validation of perfusion analysis as a clinical diagnostic metric. We present the development of a low-cost scalable correlator enabled by microchip-based time-tagging, and a software-based multi-tau data analysis method. We discuss the capabilities of the instrument as well as the implementation and validation of 2- and 8-channel systems built for live animal and pre-clinical settings.
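
    To sketch what a software correlator replaces in hardware, the following toy implementation (simplified and offline, with invented parameters; real streaming multi-tau correlators typically evaluate only lags m/2..m at higher levels) computes g2 at logarithmically spaced lags by repeatedly coarsening a photon-count series:

```python
import numpy as np

def multi_tau_g2(counts, m=8, levels=6, dt=1e-6):
    """Return (lag_times, g2) at log-spaced lags: m linear lags per level,
    coarsening the series by a factor of 2 between levels."""
    x = np.asarray(counts, dtype=float)
    lags, g2 = [], []
    for level in range(levels):
        bin_width = dt * 2 ** level
        for k in range(1, m + 1):
            if k >= x.size:
                break
            a, b = x[:-k], x[k:]
            denom = a.mean() * b.mean()
            if denom > 0:
                lags.append(k * bin_width)
                g2.append((a * b).mean() / denom)
        x = x[:x.size // 2 * 2].reshape(-1, 2).mean(axis=1)  # coarsen 2:1
    return np.array(lags), np.array(g2)

rng = np.random.default_rng(0)
lag, g = multi_tau_g2(rng.poisson(5.0, 200_000))
print(g[:4].round(3))   # ~1.0 for uncorrelated (shot-noise-only) counts
```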

  13. Phonon-based scalable platform for chip-scale quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinke, Charles M.; El-Kady, Ihab

    Here, we present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  14. Phonon-based scalable platform for chip-scale quantum computing

    DOE PAGES

    Reinke, Charles M.; El-Kady, Ihab

    2016-12-19

    Here, we present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  15. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, available since the establishment of the H.264/AVC video coding standard, for compensating temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cost extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior art has suggested further improving WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms (enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter) to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
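
    As a rough illustration of what explicit WP estimation involves (a generic least-squares gain/offset fit with HEVC-style fixed-point quantization; not the algorithms proposed in the paper, nor the reference encoder's method):

```python
import numpy as np

def estimate_wp(ref, cur, log2_denom=6):
    """Least-squares gain/offset fit, quantized HEVC-style: the weight is an
    integer scaled by 2**log2_denom, the offset an integer."""
    r = ref.astype(np.float64).ravel()
    c = cur.astype(np.float64).ravel()
    var = r.var()
    w = ((r - r.mean()) * (c - c.mean())).mean() / var if var > 0 else 1.0
    o = c.mean() - w * r.mean()
    return int(round(w * (1 << log2_denom))), int(round(o))

def apply_wp(ref, iw, io, log2_denom=6):
    """Form the weighted-prediction signal from a reference frame."""
    pred = (ref.astype(np.int64) * iw >> log2_denom) + io
    return np.clip(pred, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
ref = rng.integers(0, 200, (64, 64), dtype=np.uint8)
cur = apply_wp(ref, 80, -5)            # simulate a fade: w = 80/64, o = -5
print(estimate_wp(ref, cur))           # recovers approximately (80, -5)
```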

  16. A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.

    PubMed

    Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication, and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
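
    The smooth-scalability idea can be sketched numerically: distribute compressed-sensing measurements of the secret among the shadows, so reconstruction quality grows with the number of shadows presented. The toy below (invented sizes, no cryptographic hardening, and plain least squares standing in for a sparse-recovery solver) is illustrative only and is not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
secret = rng.integers(0, 256, 64).astype(float)   # stand-in "image" vector
n, k = 8, 4                                       # shadows, nominal threshold
d = secret.size
rows = d // k                                     # measurements per shadow
Phi = rng.standard_normal((n * rows, d))          # dealer's measurement matrix
y = Phi @ secret

shadows = [(Phi[i * rows:(i + 1) * rows], y[i * rows:(i + 1) * rows])
           for i in range(n)]

def reconstruct(subset):
    """Least-squares recovery; minimum-norm estimate when underdetermined."""
    A = np.vstack([a for a, _ in subset])
    b = np.concatenate([m for _, m in subset])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

for j in (2, 4, 6):
    err = np.linalg.norm(reconstruct(shadows[:j]) - secret) / np.linalg.norm(secret)
    print(f"{j} shadows -> relative error {err:.3f}")   # exact once j >= k
```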

  17. Scalable Directed Assembly of Highly Crystalline 2,7-Dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) Films.

    PubMed

    Chai, Zhimin; Abbasi, Salman A; Busnaina, Ahmed A

    2018-05-30

    Assembly of organic semiconductors with ordered crystal structure has been actively pursued for electronics applications such as organic field-effect transistors (OFETs). Among various film deposition methods, solution-based film growth from small molecule semiconductors is preferable because of its low material and energy consumption, low cost, and scalability. Here, we show scalable and controllable directed assembly of highly crystalline 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) films via a dip-coating process. Self-aligned stripe patterns with tunable thickness and morphology over a centimeter scale are obtained by adjusting two governing parameters: the pulling speed of a substrate and the solution concentration. OFETs are fabricated using the C8-BTBT films assembled at various conditions. A field-effect hole mobility up to 3.99 cm² V⁻¹ s⁻¹ is obtained. Owing to the highly scalable crystalline film formation, the dip-coating directed assembly process could be a great candidate for manufacturing next-generation electronics. Meanwhile, the film formation mechanism discussed in this paper could provide a general guideline to prepare other organic semiconducting films from small molecule solutions.

  18. A repeatable and scalable fabrication method for sharp, hollow silicon microneedles

    NASA Astrophysics Data System (ADS)

    Kim, H.; Theogarajan, L. S.; Pennathur, S.

    2018-03-01

    Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled silicon hollow microneedles. We theoretically verified our parametric results with diffusion reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles with any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is non-labor intensive, well defined by theory, and straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.

  19. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and the surrounding entities. However, none of the existing building damage simulation systems realizes the level of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, along with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles, and pedestrians in clusters of sequential and parallel damage events.

  20. Overcoming the drawback of lower sense margin in tunnel FET based dynamic memory along with enhanced charge retention and scalability

    NASA Astrophysics Data System (ADS)

    Navlakha, Nupur; Kranti, Abhinav

    2017-11-01

    The work reports on the use of a planar tri-gate tunnel field effect transistor (TFET) to operate as dynamic memory at 85 °C with an enhanced sense margin (SM). Two symmetric gates (G1), aligned to the source over a partial region of the intrinsic film, result in better electrostatic control that regulates the read mechanism based on band-to-band tunneling, while the other gate (G2), positioned adjacent to the first front gate, is responsible for charge storage and sustenance. The proposed architecture results in an enhanced SM of ˜1.2 μA μm⁻¹ along with a longer retention time (RT) of ˜1.8 s at 85 °C, for a total length of 600 nm. The double-gate architecture towards the source increases the tunneling current and also reduces short channel effects, enhancing SM and scalability, thereby overcoming the critical bottleneck faced by TFET-based dynamic memories. The work also discusses the impact of overlap/underlap and interface charges on the performance of TFET-based dynamic memory. Insights into device operation demonstrate that the choice of appropriate architecture and biases not only limits the trade-off between SM and RT, but also results in improved scalability, with drain voltage and total length scaled down to 0.8 V and 115 nm, respectively.

  1. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure that supports high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture were also evaluated. PMID:24662407

  2. Blueprint for a microwave trapped ion quantum computer

    PubMed Central

    Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G.; Mølmer, Klaus; Devitt, Simon J.; Wunderlich, Christof; Hensinger, Winfried K.

    2017-01-01

    The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion–based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation–based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error–threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects. PMID:28164154

  3. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  4. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).

  5. A generic, cost-effective, and scalable cell lineage analysis platform

    PubMed Central

    Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud

    2016-01-01

    Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250

  6. UTM Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal

    2017-01-01

    Conduct research, development, and testing to identify airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy from remote areas to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  7. UTM Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2016-01-01

    Conduct research, development, and testing to identify airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy from remote areas to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  8. Scalable UWB photonic generator based on the combination of doublet pulses.

    PubMed

    Moreno, Vanessa; Rius, Manuel; Mora, José; Muriel, Miguel A; Capmany, José

    2014-06-30

    We propose and experimentally demonstrate a scalable and reconfigurable optical scheme to generate high-order UWB pulses. First, various ultra-wideband doublets are created through a process of phase-to-intensity conversion by means of phase modulation and a dispersive medium. In a second stage, the doublets are combined in an optical processing unit that allows the reconfiguration of high-order UWB pulses. Experimental results in both the time and frequency domains are presented, showing good performance in terms of the fractional bandwidth and spectral efficiency parameters.
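
    The pulse-combination step can be imitated numerically. The sketch below (pure software with an assumed pulse width and assumed delays, standing in for the photonic phase-modulation/dispersion hardware) combines delayed, polarity-weighted Gaussian doublets into a higher-order pulse and reports its peak frequency and fractional bandwidth:

```python
import numpy as np

t = np.linspace(-1e-9, 1e-9, 4001)              # 2 ns observation window
SIGMA = 60e-12                                  # assumed doublet width

def doublet(t, t0=0.0):
    """Gaussian doublet (second derivative of a Gaussian), zero DC content."""
    u = (t - t0) / SIGMA
    return (u ** 2 - 1.0) * np.exp(-u ** 2 / 2.0)

# Delayed, polarity-weighted doublets combined into a higher-order pulse
weights = [1.0, -2.0, 1.0]
delays = [-120e-12, 0.0, 120e-12]
pulse = sum(w * doublet(t, d) for w, d in zip(weights, delays))

spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
band = freqs[spectrum >= spectrum.max() / np.sqrt(10)]   # -10 dB (power) band
f_lo, f_hi = band.min(), band.max()
print(f"peak at {freqs[spectrum.argmax()] / 1e9:.1f} GHz, "
      f"fractional bandwidth {(f_hi - f_lo) / ((f_hi + f_lo) / 2):.2f}")
```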

  9. A Massively Parallel Code for Polarization Calculations

    NASA Astrophysics Data System (ADS)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres on massively parallel computers, utilizing both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version based on the shared memory model alone. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability of about 40%.

  10. Scalable Unix tools on parallel processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
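
    In the same spirit, a minimal modern sketch of such a scalable tool (hypothetical node names; fan-out over ssh with a thread pool, not the paper's implementation):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"node{i:03d}" for i in range(64)]       # hypothetical node names

def remote(host, cmd="ps -ef | wc -l"):
    """Run one command on one host via ssh; never hang the whole sweep."""
    try:
        done = subprocess.run(["ssh", "-o", "ConnectTimeout=2", host, cmd],
                              capture_output=True, text=True, timeout=10)
        return host, (done.stdout.strip() or done.stderr.strip())
    except subprocess.TimeoutExpired:
        return host, "timeout"

# Fan out in parallel instead of looping serially over nodes
with ThreadPoolExecutor(max_workers=32) as pool:
    for host, result in pool.map(remote, HOSTS):
        print(f"{host}: {result}")
```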

  11. Scalable and reusable emulator for evaluating the performance of SS7 networks

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston

    1994-04-01

    A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS, the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.

  12. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup, and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  13. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore-CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage, and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are the traveltime calculations and the migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and its feeding mechanism to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for multicore-CPU-based parallel systems had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O, and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm with high scalability and efficiency on a multicore CPU cluster.
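
    The computational core being parallelized is a traveltime calculation feeding a migration summation. Below is a schematic, constant-velocity, 2-D poststack toy version of that kernel (illustrating the structure only, not the article's optimized implementation):

```python
import numpy as np

def kirchhoff_2d(data, dx, dt, v):
    """Toy poststack Kirchhoff time migration: for every image point, sum
    input amplitudes along the diffraction traveltime curve."""
    n_x, n_t = data.shape
    image = np.zeros_like(data)
    x = np.arange(n_x) * dx
    for ix in range(n_x):                      # image trace position
        for it in range(n_t):                  # image two-way time sample
            z = v * it * dt / 2.0              # pseudo-depth of image point
            t_diff = 2.0 * np.sqrt(z ** 2 + (x - x[ix]) ** 2) / v
            idx = np.round(t_diff / dt).astype(int)
            ok = idx < n_t                     # discard out-of-record times
            image[ix, it] = data[np.where(ok)[0], idx[ok]].sum()
    return image

spike = np.zeros((64, 256))
spike[32, 100] = 1.0                           # single input sample
img = kirchhoff_2d(spike, dx=10.0, dt=0.004, v=2000.0)
print(img.shape, float(img.max()))             # impulse response of migration
```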

  14. A real time sorting algorithm to time sort any deterministic time disordered data stream

    NASA Astrophysics Data System (ADS)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

    In new-generation, high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as no trigger information is available for the readout. Therefore, these readouts are prone to collecting large amounts of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort for real-time sorting before analysis. The present work reports a new high-speed, scalable data stream sorting algorithm and its architectural design, verified through Field Programmable Gate Array (FPGA)-based hardware simulation. Realistic time-based simulated data, likely to be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
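
    The underlying idea can be prototyped in software (the paper's version is an FPGA design): with a known bound W on the time disorder, any buffered item older than the newest timestamp minus W can be emitted safely. A minimal sketch, assuming integer timestamps:

```python
import heapq
import random

def time_sort(stream, max_disorder):
    """Yield items of `stream` in timestamp order, assuming every item is at
    most `max_disorder` ticks older than any item arriving after it."""
    heap = []
    for item in stream:
        heapq.heappush(heap, item)
        # Anything at least max_disorder older than the newest arrival can
        # no longer be preceded by an unseen item, so it is safe to emit.
        while heap and heap[0] <= item - max_disorder:
            yield heapq.heappop(heap)
    while heap:                      # flush the tail
        yield heapq.heappop(heap)

random.seed(0)
stamps = [t + random.randint(-4, 4) for t in range(0, 200, 2)]  # jittered
assert list(time_sort(stamps, max_disorder=8)) == sorted(stamps)
print("sorted", len(stamps), "time-disordered stamps on the fly")
```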

  15. NASA Tech Briefs, October 2011

    NASA Technical Reports Server (NTRS)

    2011-01-01

    Topics covered include: Laser Truss Sensor for Segmented Telescope Phasing; Qualifications of Bonding Process of Temperature Sensors to Deep-Space Missions; Optical Sensors for Monitoring Gamma and Neutron Radiation; Compliant Tactile Sensors; Cytometer on a Chip; Measuring Input Thresholds on an Existing Board; Scanning and Defocusing Properties of Microstrip Reflectarray Antennas; Cable Tester Box; Programmable Oscillator; Fault-Tolerant, Radiation-Hard DSP; Sub-Shot Noise Power Source for Microelectronics; Asynchronous Message Service Reference Implementation; Zero-Copy Objects System; Delay and Disruption Tolerant Networking MACHETE Model; Contact Graph Routing; Parallel Eclipse Project Checkout; Technique for Configuring an Actively Cooled Thermal Shield in a Flight System; Use of Additives to Improve Performance of Methyl Butyrate-Based Lithium-Ion Electrolytes; Li-Ion Cells Employing Electrolytes with Methyl Propionate and Ethyl Butyrate Co-Solvents; Improved Devices for Collecting Sweat for Chemical Analysis; Tissue Photolithography; Method for Impeding Degradation of Porous Silicon Structures; External Cooling Coupled to Reduced Extremity Pressure Device; A Zero-Gravity Cup for Drinking Beverages in Microgravity; Co-Flow Hollow Cathode Technology; Programmable Aperture with MEMS Microshutter Arrays; Polished Panel Optical Receiver for Simultaneous RF/Optical Telemetry with Large DSN Antennas; Adaptive System Modeling for Spacecraft Simulation; Lidar-Based Navigation Algorithm for Safe Lunar Landing; Tracking Object Existence From an Autonomous Patrol Vehicle; Rad-Hard, Miniaturized, Scalable, High-Voltage Switching Module for Power Applications; and Architecture for a 1-GHz Digital RADAR.

  16. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, the robotics can mechanically exchange a cassette tape between two adjacent element archivers. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs while the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. These parameters are normalized according to the number of drives in each element archiver, making it possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. Besides the foreground migration, the background migration makes it possible to track transitions in spatial access locality quickly.
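
    The background-migration policy can be caricatured in a few lines. The toy below (invented access frequencies and drive counts; it ignores the adjacency constraint of the real robotics) moves the hottest cassette from the most loaded to the least loaded element archiver, using per-drive normalized load:

```python
import random

random.seed(1)
DRIVES = [4, 4, 2, 2]                 # tape drives per element archiver
# cassettes[i] maps cassette name -> access frequency on archiver i
cassettes = [{f"tape{i}-{j}": random.expovariate(1.0) for j in range(50)}
             for i in range(len(DRIVES))]

def load(i):
    """Access-frequency load normalized by the archiver's drive count."""
    return sum(cassettes[i].values()) / DRIVES[i]

def background_migration_step():
    """Move the hottest cassette from the most to the least loaded archiver
    (the real system migrates only between adjacent archivers, when idle)."""
    src = max(range(len(DRIVES)), key=load)
    dst = min(range(len(DRIVES)), key=load)
    if src != dst and cassettes[src]:
        tape = max(cassettes[src], key=cassettes[src].get)
        cassettes[dst][tape] = cassettes[src].pop(tape)

before = [round(load(i), 1) for i in range(len(DRIVES))]
for _ in range(60):
    background_migration_step()
print(before, "->", [round(load(i), 1) for i in range(len(DRIVES))])
```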

  17. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    PubMed

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype has been successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation was performed to demonstrate the usefulness of the system, including the performance of the NCBO Annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and standard query services.
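
    As a toy illustration of the XML2RDF transformation module's role (the element names and namespace here are invented, and the NCBO-based normalization step is omitted), a record could be lifted into RDF triples with rdflib:

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF

ADE = Namespace("http://example.org/adepedia/")    # invented namespace

xml_record = """<event id="e1">
  <drug>warfarin</drug>
  <reaction>gastrointestinal haemorrhage</reaction>
</event>"""

root = ET.fromstring(xml_record)
g = Graph()
g.bind("ade", ADE)

event = ADE[root.attrib["id"]]
g.add((event, RDF.type, ADE.AdverseDrugEvent))
g.add((event, ADE.drug, Literal(root.findtext("drug"))))
g.add((event, ADE.reaction, Literal(root.findtext("reaction"))))

print(g.serialize(format="turtle"))   # triples ready for the RDF store
```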

  18. High-fidelity cluster state generation for ultracold atoms in an optical lattice.

    PubMed

    Inaba, Kensuke; Tokunaga, Yuuki; Tamaki, Kiyoshi; Igeta, Kazuhiro; Yamashita, Makoto

    2014-03-21

    We propose a method for generating high-fidelity multipartite spin entanglement of ultracold atoms in an optical lattice in a short operation time and in a scalable manner, which is suitable for measurement-based quantum computation. To perform the desired operations based on the perturbative spin-spin interactions, we propose to actively utilize the extra degrees of freedom (DOFs) usually neglected in the perturbative treatment but included in the Hubbard Hamiltonian of atoms, such as (pseudo-)charge and orbital DOFs. Our method simultaneously achieves high fidelity, short operation time, and scalability by overcoming the following fundamental problem: enhancing the interaction strength to shorten the operation time breaks the perturbative condition of the interaction and inevitably induces unwanted correlations among the spin and extra DOFs.

  19. OXC management and control system architecture with scalability, maintenance, and distributed managing environment

    NASA Astrophysics Data System (ADS)

    Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun

    2002-07-01

    In this paper, we present the Optical Cross-Connect (OXC) Management Control System Architecture, which provides scalability, robust maintenance, and a distributed managing environment in the optical transport network. The OXC system we are developing, divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system; the signaling control protocol subsystem performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control; the Operation, Administration, Maintenance & Provisioning (OAM&P) subsystem; and the network management subsystem. The OXC management control system can support the flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, be added or deleted without interrupting OAM&P services, be remotely operated, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA)-based open system architecture to which intelligent service networking functions can easily be added, or from which they can be deleted, in the future. To meet these considerations, we adopt an object-oriented development method in all development steps of system analysis, design, and implementation to build an OXC management control system with scalability, maintainability, and a distributed managing environment. Consequently, the componentification of the OXC operation management functions of each subsystem enables robust maintenance and increases code reusability. The component-based OXC management control system architecture will thus have flexibility and scalability in nature.

  20. Scalable imprinting of shape-specific polymeric nanocarriers using a release layer of switchable water solubility.

    PubMed

    Agarwal, Rachit; Singh, Vikramjit; Jurney, Patrick; Shi, Li; Sreenivasan, S V; Roy, Krishnendu

    2012-03-27

    There is increasing interest in fabricating shape-specific polymeric nano- and microparticles for efficient delivery of drugs and imaging agents. The size and shape of these particles could significantly influence their transport properties and play an important role in in vivo biodistribution, targeting, and cellular uptake. Nanoimprint lithography methods, such as jet-and-flash imprint lithography (J-FIL), provide versatile top-down processes to fabricate shape-specific, biocompatible nanoscale hydrogels that can deliver therapeutic and diagnostic molecules in response to disease-specific cues. However, the key challenges in top-down fabrication of such nanocarriers are scalable imprinting with biological and biocompatible materials, ease of particle-surface modification using both aqueous and organic chemistry as well as simple yet biocompatible harvesting. Here we report that a biopolymer-based sacrificial release layer in combination with improved nanocarrier-material formulation can address these challenges. The sacrificial layer improves scalability and ease of imprint-surface modification due to its switchable solubility through simple ion exchange between monovalent and divalent cations. This process enables large-scale bionanoimprinting and efficient, one-step harvesting of hydrogel nanoparticles in both water- and organic-based imprint solutions.

  1. Space-Filling Supercapacitor Carpets: Highly scalable fractal architecture for energy storage

    NASA Astrophysics Data System (ADS)

    Tiliakos, Athanasios; Trefilov, Alexandra M. I.; Tanasǎ, Eugenia; Balan, Adriana; Stamatin, Ioan

    2018-04-01

    Revamping ground-breaking ideas from fractal geometry, we propose an alternative micro-supercapacitor configuration realized by laser-induced graphene (LIG) foams produced via laser pyrolysis of inexpensive commercial polymers. The Space-Filling Supercapacitor Carpet (SFSC) architecture introduces the concept of nested electrodes based on the pre-fractal Peano space-filling curve, arranged in a symmetrical equilateral setup that incorporates multiple parallel capacitor cells sharing common electrodes for maximum efficiency and optimal length-to-area distribution. We elucidate the theoretical foundations of the SFSC architecture, and we introduce innovations (high-resolution vector-mode printing) in the LIG method that allow for the realization of flexible and scalable devices based on low iterations of the Peano algorithm. SFSCs exhibit distributed capacitance properties, leading to capacitance, energy, and power ratings proportional to the number of nested electrodes (up to 4.3 mF, 0.4 μWh, and 0.2 mW for the largest tested model of low iteration using aqueous electrolytes), with competitively high energy and power densities. This paves the way toward full scalability in energy storage, reaching beyond the scale of micro-supercapacitors to incorporation into larger and more demanding applications.
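
    The electrode layout rests on the pre-fractal Peano curve, which is straightforward to generate: recursively tile a 3^n x 3^n grid with reflected copies of the order-(n-1) curve, serpentining through the 3x3 blocks. A small sketch (not from the paper) that builds the point sequence and checks that it forms a continuous unit-step path:

```python
def peano(order):
    """Points of the order-n Peano curve on a 3**n x 3**n grid."""
    if order == 0:
        return [(0, 0)]
    prev = peano(order - 1)
    m = 3 ** (order - 1)
    pts = []
    for c in range(3):                                 # serpentine in columns
        rows = range(3) if c % 2 == 0 else range(2, -1, -1)
        for r in rows:
            for x, y in prev:                          # reflected sub-copies
                px = m - 1 - x if r % 2 else x
                py = m - 1 - y if c % 2 else y
                pts.append((c * m + px, r * m + py))
    return pts

curve = peano(2)                                       # 81 points on a 9x9 grid
# Every consecutive pair differs by one unit step: a continuous electrode path
assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
           for a, b in zip(curve, curve[1:]))
print(len(curve), curve[:5])
```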

  2. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    PubMed

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability, and correctness of component cooperation.

  3. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    PubMed Central

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability, and correctness of component cooperation. PMID:27801829

  4. Using Swarming Agents for Scalable Security in Large Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouse, Michael; White, Jacob L.; Fulp, Errin W.

    2011-09-23

    The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDS and firewalls cannot scale because of the exponentially increasing computational costs inherent in detecting the rapidly growing number of threat signatures. Host-based solutions like virus scanners and IDS suffer similar issues, and these are compounded when enterprises try to monitor them in a centralized manner. Swarm-based autonomous agent systems like digital ants and artificial immune systems can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design where each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling the atomic evidence from different ant types, the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however, there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller managed enclaves allows the digital ant framework to operate effectively in larger environments. Experimental results show that using smaller enclaves allows for more consistent distribution of agents and results in faster response times.

  5. Perovskite Technology is Scalable, But Questions Remain about the Best Methods

    Science.gov Websites

    News release: Perovskite technology is scalable, but questions remain about the best methods. The NREL researchers examined potential scalable deposition methods that could be used on a larger surface.

  6. Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.; Scott, John M.

    2004-01-01

    A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the 'Southwest Effect'). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.

  7. Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.; Scott, John

    2004-01-01

    A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the Southwest Effect). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.

  8. Intrinsically Stretchable and Conductive Textile by a Scalable Process for Elastic Wearable Electronics.

    PubMed

    Wang, Chunya; Zhang, Mingchao; Xia, Kailun; Gong, Xueqin; Wang, Huimin; Yin, Zhe; Guan, Baolu; Zhang, Yingying

    2017-04-19

    The prosperous development of stretchable electronics poses a great demand for stretchable conductive materials that can maintain their electrical conductivity under tensile strain. Previously reported strategies for obtaining stretchable conductors usually involve complex structure-fabrication processes or the use of high-cost nanomaterials. It remains a great challenge to produce stretchable and conductive materials via a scalable and cost-effective process. Herein, a large-scale pyrolysis strategy is developed for the fabrication of intrinsically stretchable and conductive textiles, utilizing low-cost and mass-produced weft-knitted textiles as raw materials. Due to the intrinsic stretchability of the weft-knitted structure and the excellent mechanical and electrical properties of the as-obtained carbonized fibers, the resulting flexible and durable textile can sustain tensile strains up to 125% while keeping a stable electrical conductivity (as shown by a Modal-based textile), thus ensuring its applications in elastic electronics. For demonstration purposes, stretchable supercapacitors and wearable thermal-therapy devices that showed stable performance under loaded tensile strains have been fabricated. Considering the simplicity and large scalability of the process, the low cost and mass production of the raw materials, and the superior performance of the as-obtained elastic and conductive textile, this strategy should contribute to the development and industrial production of wearable electronics.

  9. A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes

    PubMed Central

    Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more participants have the ability to reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on compressed sensing, a technique which has been widely used in many fields such as image processing, wireless communication, and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme. PMID:28072851

  10. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    PubMed Central

    Colli Franzone, Piero; Pavarino, Luca F.; Scacchi, Simone

    2018-01-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks. PMID:29674971

  11. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    PubMed Central

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  12. An EKV-based high voltage MOSFET model with improved mobility and drift model

    NASA Astrophysics Data System (ADS)

    Chauhan, Yogesh Singh; Gillon, Renaud; Bakeroot, Benoit; Krummenacher, Francois; Declercq, Michel; Ionescu, Adrian Mihai

    2007-11-01

    An EKV-based high voltage MOSFET model is presented. The intrinsic channel model is derived based on the charge based EKV-formalism. An improved mobility model is used for the modeling of the intrinsic channel to improve the DC characteristics. The model uses second order dependence on the gate bias and an extra parameter for the smoothening of the saturation voltage of the intrinsic drain. An improved drift model [Chauhan YS, Anghel C, Krummenacher F, Ionescu AM, Declercq M, Gillon R, et al. A highly scalable high voltage MOSFET model. In: IEEE European solid-state device research conference (ESSDERC), September 2006. p. 270-3; Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13] is used for the modeling of the drift region, which gives smoother transition on output characteristics and also models well the quasi-saturation region of high voltage MOSFETs. First, the model is validated on the numerical device simulation of the VDMOS transistor and then, on the measured characteristics of the SOI-LDMOS transistor. The accuracy of the model is better than our previous model [Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13] especially in the quasi-saturation region of output characteristics.

  13. Towards Scalable Strain Gauge-Based Joint Torque Sensors.

    PubMed

    Khan, Hamza; D'Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G; Cuschieri, Alfred; Semini, Claudio

    2017-08-18

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR).

  14. Ionospheric Delay Compensation Using a Scale Factor Based on an Altitude of a Receiver

    NASA Technical Reports Server (NTRS)

    Zhao, Hui (Inventor); Savoy, John (Inventor)

    2014-01-01

    In one embodiment, a method for ionospheric delay compensation is provided. The method includes determining an ionospheric delay based on a signal having propagated from the navigation satellite to a location below the ionosphere. A scale factor can be applied to the ionospheric delay, where the scale factor corresponds to the fraction of the vertical ionospheric delay applicable at the altitude of the satellite navigation system receiver. Compensation can then be applied based on the scaled ionospheric delay.
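
    The essence of the claim can be illustrated with a short sketch. The linear shell model, the constant, and the function names below are illustrative assumptions rather than the patented formulation; the point is only that a vertical delay estimate is rescaled by a factor derived from the receiver's altitude.

```python
H_IONO = 350e3  # assumed ionospheric shell height in metres (illustrative value)

def altitude_scale_factor(receiver_alt_m: float) -> float:
    """Fraction of the vertical ionospheric delay remaining above a receiver
    at the given altitude. A simple linear model is assumed here purely for
    illustration; the patent does not prescribe this form."""
    if receiver_alt_m >= H_IONO:
        return 0.0
    return 1.0 - receiver_alt_m / H_IONO

def compensated_delay(vertical_delay_m: float, receiver_alt_m: float) -> float:
    """Scale a vertical ionospheric delay estimate by the altitude-based factor."""
    return vertical_delay_m * altitude_scale_factor(receiver_alt_m)

# Example: a 5 m vertical delay seen by a receiver at 50 km altitude
print(compensated_delay(5.0, 50e3))  # ~4.29 m
```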

  15. Scalable web services for the PSIPRED Protein Analysis Workbench.

    PubMed

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
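
    As a rough illustration of programmatic access to such services, the sketch below uses Python's standard xmlrpc.client. The endpoint URL path and the method names (submit, status, result) are hypothetical placeholders, not the documented PSIPRED API; consult the group's website for the real interface.

```python
import time
import xmlrpc.client

# Hypothetical endpoint and method names, for illustration only.
server = xmlrpc.client.ServerProxy("http://bioinf.cs.ucl.ac.uk/psipred_api")

job_id = server.submit(">query\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
while server.status(job_id) != "done":  # poll until the job completes
    time.sleep(30)
print(server.result(job_id))            # fetch the secondary-structure prediction
```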

  16. Unmanned Aircraft Systems Traffic Management (UTM) Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2017-01-01

    Conduct research, development, and testing to identify the airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy that progresses from remote to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  17. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
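
    The Set-Covering connection can be made concrete with a toy sketch. The greedy heuristic below is the textbook approximation algorithm for Set-Covering, not the paper's stencil-selection procedure; the sample indices and candidate 'stencils' are invented for illustration.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy approximation to Set-Covering: repeatedly pick the
    subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

# Toy example: "stencils" (subsets of sample indices) covering all samples
samples = range(6)
stencils = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(greedy_set_cover(samples, stencils))  # e.g. [{0, 1, 2}, {3, 4, 5}]
```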

  18. Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits

    NASA Astrophysics Data System (ADS)

    Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng

    2017-11-01

    We propose a novel theoretical scheme for quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfaces from controlling and observing complex quantum systems down to a simple quantum actuator. It also provides a novel way to handle complex quantum systems.

  19. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-Means clustering-based recommendation algorithms have been proposed to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
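
    The abstract does not spell the algorithm out, so the sketch below is only a generic illustration of single-pass clustering (a leader-style clusterer, not necessarily the authors' One-Pass): each arriving point joins the nearest cluster within a fixed radius or founds a new one, so new data are absorbed incrementally without offline retraining.

```python
import numpy as np

class OnePassClusterer:
    """Leader-style single-pass clustering: assign each arriving point to the
    nearest centroid within `radius`, else start a new cluster. Centroids are
    updated incrementally, so new data never force offline retraining."""

    def __init__(self, radius: float):
        self.radius = radius
        self.centroids = []   # list of np.ndarray
        self.counts = []      # points per cluster

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(dists))
            if dists[k] <= self.radius:
                self.counts[k] += 1
                # incremental running-mean update of the centroid
                self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                return k
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

clus = OnePassClusterer(radius=1.0)
for point in [(0, 0), (0.5, 0.2), (5, 5), (5.3, 4.9)]:
    clus.add(point)
print(len(clus.centroids))  # 2
```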

  20. Execution of a parallel edge-based Navier-Stokes solver on commodity graphics processor units

    NASA Astrophysics Data System (ADS)

    Corral, Roque; Gisbert, Fernando; Pueblas, Jesus

    2017-02-01

    The implementation of an edge-based three-dimensional Reynolds-Averaged Navier-Stokes solver for unstructured grids able to run on multiple graphics processing units (GPUs) is presented. Loops over edges, which are the most time-consuming part of the solver, have been written to exploit the massively parallel capabilities of GPUs. Non-blocking communications between parallel processes and between the GPU and the central processing unit (CPU) have been used to enhance code scalability. The code is written in a mixture of C++ and OpenCL, to allow the execution of the source code on GPUs. The Message Passing Interface (MPI) library is used to allow the parallel execution of the solver on multiple GPUs. A comparative study of the solver's parallel performance is carried out using a cluster of CPUs and another of GPUs. It is shown that a single GPU is up to 64 times faster than a single CPU core. The parallel scalability of the solver is mainly degraded by the loss of computing efficiency of the GPU when the size of the case decreases. However, for large enough grid sizes, the scalability is strongly improved. A cluster featuring commodity GPUs and a high-bandwidth network is ten times less costly and consumes 33% less energy than a CPU-based cluster with equivalent computational power.
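
    The key scalability pattern here is overlapping non-blocking communication with local computation. A minimal sketch of that pattern, written with mpi4py rather than the paper's C++/OpenCL stack and with placeholder array sizes, might look as follows (run under mpirun).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

interior = np.random.rand(1_000_000)   # stands in for the GPU-resident edge-loop data
halo_send = np.ascontiguousarray(interior[:1000])
halo_recv = np.empty_like(halo_send)

# post non-blocking transfers, then overlap them with local work
reqs = [comm.Isend(halo_send, dest=right), comm.Irecv(halo_recv, source=left)]
local_result = interior[1000:].sum()    # "compute" while messages are in flight
MPI.Request.Waitall(reqs)
print(rank, local_result)
```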

  1. The TOTEM DAQ based on the Scalable Readout System (SRS)

    NASA Astrophysics Data System (ADS)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics programme approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.

  2. Tailoring the transverse mode of a high-finesse optical resonator with stepped mirrors

    NASA Astrophysics Data System (ADS)

    Högner, M.; Saule, T.; Lilienfein, N.; Pervak, V.; Pupeza, I.

    2018-02-01

    Enhancement cavities (ECs) seeded with femtosecond pulses have developed into the most powerful technique for high-order harmonic generation (HHG) at repetition rates in the tens of MHz. Here, we demonstrate the feasibility of controlling the phase front of the excited transverse eigenmode of a ring EC by using mirrors with stepped surface profiles, while maintaining the high finesse required to reach the peak intensities necessary for HHG. The two lobes of a TEM01 mode of a 3.93 m long EC, seeded with a single-frequency laser, are delayed by 15.6 fs with respect to each other before a tight focus, and the delay is reversed after the focus. The tailored transverse mode exhibits an on-axis intensity maximum in the focus. Furthermore, the geometry is designed to generate a rotating wavefront in the focus when few-cycle pulses circulate in the EC. This paves the way to gating isolated attosecond pulses (IAPs) in a transverse manner (similarly to the attosecond lighthouse), heralding IAPs at repetition rates well into the multi-10 MHz range. In addition, these results promise high-efficiency harmonic output coupling from ECs in general, with an unparalleled power scalability. These prospects are expected to tremendously benefit photoelectron spectroscopy and extreme-ultraviolet frequency comb spectroscopy.

  3. A Cluster-Based Framework for the Security of Medical Sensor Environments

    NASA Astrophysics Data System (ADS)

    Klaoudatou, Eleni; Konstantinou, Elisavet; Kambourakis, Georgios; Gritzalis, Stefanos

    The adoption of Wireless Sensor Networks (WSNs) in the healthcare sector poses many security issues, mainly because medical information is considered particularly sensitive. The security mechanisms employed are expected to be more efficient in terms of energy consumption and scalability in order to cope with the constrained capabilities of WSNs and patients’ mobility. Towards this goal, cluster-based medical WSNs can substantially improve efficiency and scalability. In this context, we have proposed a general framework for cluster-based medical environments on top of which security mechanisms can rely. This framework fully covers the varying needs of both in-hospital environments and environments formed ad hoc for medical emergencies. In this paper, we further elaborate on the security of our proposed solution. We specifically focus on key establishment mechanisms and investigate the group key agreement protocols that can best fit in our framework.

  4. Multi-service small-cell cloud wired/wireless access network based on tunable optical frequency comb

    NASA Astrophysics Data System (ADS)

    Xiang, Yu; Zhou, Kun; Yang, Liu; Pan, Lei; Liao, Zhen-wan; Zhang, Qiang

    2015-11-01

    In this paper, we demonstrate a novel multi-service wired/wireless integrated access architecture of cloud radio access network (C-RAN) based on radio-over-fiber passive optical network (RoF-PON) system, which utilizes scalable multiple-frequency millimeter-wave (MF-MMW) generation based on tunable optical frequency comb (TOFC). In the baseband unit (BBU) pool, the generated optical comb lines are modulated into wired, RoF and WiFi/WiMAX signals, respectively. The multi-frequency RoF signals are generated by beating the optical comb line pairs in the small cell. The WiFi/WiMAX signals are demodulated after passing through the band pass filter (BPF) and band stop filter (BSF), respectively, whereas the wired signal can be received directly. The feasibility and scalability of the proposed multi-service wired/wireless integrated C-RAN are confirmed by the simulations.

  5. Enabling Cross-Platform Clinical Decision Support through Web-Based Decision Support in Commercial Electronic Health Record Systems: Proposal and Evaluation of Initial Prototype Implementations

    PubMed Central

    Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku

    2013-01-01

    Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426

  6. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks.

    PubMed

    Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren

    2018-04-16

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 µs control plane latency for data-center and high performance computing platforms.

  7. A complexity-scalable software-based MPEG-2 video encoder.

    PubMed

    Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin

    2004-05-01

    With the development of general-purpose processors (GPPs) and video signal processing algorithms, it has become possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade path attract developers' interest in transferring video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
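
    As a small illustration of the lookup-table idea, a common encoder trick is to clamp reconstructed pixel values through a precomputed table instead of branching per pixel. The sketch below is a generic example of the technique, not code from the paper.

```python
# Precompute a clipping table once: the index is an offset-biased value that
# may fall outside [0, 255]; the table maps it back into range without branches.
# The table covers inputs in [-512, 511], typical for 8-bit prediction residuals.
BIAS = 512
CLIP = bytes(min(max(i - BIAS, 0), 255) for i in range(1024))

def clip_pixel(value: int) -> int:
    """Clamp a reconstructed pixel (prediction + residual) via the table."""
    return CLIP[value + BIAS]

print(clip_pixel(300), clip_pixel(-42), clip_pixel(128))  # 255 0 128
```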

  8. Scalable algorithms for three-field mixed finite element coupled poromechanics

    NASA Astrophysics Data System (ADS)

    Castelletto, Nicola; White, Joshua A.; Ferronato, Massimiliano

    2016-12-01

    We introduce a class of block preconditioners for accelerating the iterative solution of coupled poromechanics equations based on a three-field formulation. The use of a displacement/velocity/pressure mixed finite-element method, combined with a first-order backward difference formula for the approximation of time derivatives, produces a sequence of linear systems with a 3 × 3 unsymmetric and indefinite block matrix. The preconditioners are obtained by approximating the two-level Schur complement with the aid of physically-based arguments that can also be generalized in a purely algebraic approach. A theoretical and experimental analysis is presented that provides evidence of the robustness, efficiency and scalability of the proposed algorithm. The performance is also assessed for a real-world challenging consolidation experiment of a shallow formation.
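
    The flavor of such block preconditioning can be conveyed on a much smaller 2 × 2 saddle-point analogue. The SciPy sketch below uses synthetic matrices and a simple diagonal Schur-complement surrogate chosen for illustration (not the paper's two-level construction), applying a block-diagonal preconditioner inside GMRES.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 2x2 block saddle-point system K = [[A, B^T], [B, 0]], preconditioned by
# diag(A, S) with S = B diag(A)^{-1} B^T as a cheap Schur-complement surrogate.
n, m = 200, 50
A = sp.random(n, n, density=0.02, random_state=0)
A = (A + A.T) * 0.5 + 4.0 * sp.eye(n)          # symmetrize and shift so A is safely invertible
B = sp.random(m, n, density=0.2, random_state=1)
K = sp.bmat([[A, B.T], [B, None]]).tocsc()
rhs = np.ones(n + m)

S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
A_lu, S_lu = spla.splu(A.tocsc()), spla.splu(S)

def apply_prec(v):
    # block-diagonal application: solve with A on the first block, S on the second
    return np.concatenate([A_lu.solve(v[:n]), S_lu.solve(v[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.gmres(K, rhs, M=M)
print("info =", info, "residual =", np.linalg.norm(K @ x - rhs))
```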

  9. Sustainability and scalability of a volunteer-based primary care intervention (Health TAPESTRY): a mixed-methods analysis.

    PubMed

    Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa

    2017-08-01

    Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services that optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information, using e-health technology, to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should involve an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team who developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation, and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY. Twenty Health TAPESTRY team members (53% response rate) completed the NHS sustainability survey. The overall mean sustainability score was 64.6 (range 22.8-96.8). Important opportunities for improving sustainability were better staff involvement and training, clinical leadership engagement, and infrastructure for sustainability. Interviews with 25 participants (response rate 60%) showed that factors influencing the sustainability and scalability of Health TAPESTRY emerged across two dimensions: (I) Health TAPESTRY operations (development and implementation activities undertaken by the central team); and (II) the Health TAPESTRY intervention (factors specific to the intervention and its elements). Resource capacity appears to be an important factor for Health TAPESTRY operations, as it was identified across both sustainability and scalability factors; the perceived lack of interprofessional team and volunteer resource capacity and the need for stakeholder buy-in are important considerations for the Health TAPESTRY intervention. We used these findings to create actionable recommendations to initiate dialogue among Health TAPESTRY team members to improve the intervention. Our study identified sustainability and scalability determinants of the Health TAPESTRY intervention that can be used to optimize its potential for impact. Next steps will involve using these findings to inform a guide to facilitate the sustainability and scalability of Health TAPESTRY in other jurisdictions considering its adoption. Our findings build on the limited current knowledge of sustainability and advance KT science related to the sustainability and scalability of KT interventions.

  10. Efficient and Scalable Graph Similarity Joins in MapReduce

    PubMed Central

    Chen, Yifan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out non-promising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED (graph edit distance) calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
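
    The filtering role of the Bloom filter is easy to demonstrate. The sketch below is a plain Bloom filter; the spectral variant used by MGSJoin additionally maintains approximate counts per key. The signature strings are invented for illustration.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    Membership tests may give false positives, never false negatives."""

    def __init__(self, m: int = 1 << 16, k: int = 4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _probes(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))

bf = BloomFilter()
bf.add("graph-signature-42")
print("graph-signature-42" in bf, "graph-signature-99" in bf)  # True False (w.h.p.)
```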

  11. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out non-promising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED (graph edit distance) calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.

  12. Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel

    2011-01-01

    The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a problem size of 49,152 processes. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a problem size of 24,576 processes. Cheetah's Barrier performs 10% better than the native MPI implementation at a problem size of 12,288 processes.

  13. The ever-expanding role of asymmetric covalent organocatalysis in scalable, natural product synthesis.

    PubMed

    Abbasov, Mikail E; Romo, Daniel

    2014-10-01

    Since the turn of the millennium, asymmetric covalent organocatalysis has developed into a scalable synthetic paradigm, galvanizing the synthetic community toward the use of these methods in more practical, metal-free syntheses of natural products. A myriad of reports on asymmetric organocatalytic modes of substrate activation relying on small, exclusively organic molecules are delineating what has now become the multifaceted field of organocatalysis. In covalent catalysis, the catalyst and substrate combine to first form a covalent, activated intermediate that enters the catalytic cycle. Following asymmetric bond formation, the chiral catalyst is recycled through hydrolysis or displacement by a pendant group on the newly formed product. Amine- and phosphine-based organocatalysts are the most common examples and have led to a vast array of reaction types. This Highlight provides a brief overview of covalent modes of organocatalysis and applications of scalable versions of these methods to the total synthesis of natural products, including examples from our own laboratory.

  14. SFT: Scalable Fault Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continued execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs; and it supports incremental and full checkpoints with minimal overhead, less than 6% with full checkpointing to disk performed as frequently as once per minute.

  15. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background: For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909

  16. Triboelectric Charging at the Nanostructured Solid/Liquid Interface for Area-Scalable Wave Energy Conversion and Its Use in Corrosion Protection.

    PubMed

    Zhao, Xue Jiao; Zhu, Guang; Fan, You Jun; Li, Hua Yang; Wang, Zhong Lin

    2015-07-28

    We report a flexible and area-scalable energy-harvesting technique for converting kinetic wave energy. Triboelectrification as a result of direct interaction between a dynamic wave and a large-area nanostructured solid surface produces an induced current among an array of electrodes. An integration method ensures that the induced current between any pair of electrodes can be constructively added up, which enables significant enhancement in output power and realizes area-scalable integration of electrode arrays. Internal and external factors that affect the electric output are comprehensively discussed. The produced electricity not only drives small electronics but also achieves effective impressed current cathodic protection. This type of thin-film-based device is a potentially practical solution of on-site sustained power supply at either coastal or off-shore sites wherever a dynamic wave is available. Potential applications include corrosion protection, pollution degradation, water desalination, and wireless sensing for marine surveillance.

  17. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567

  18. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.

  19. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and it severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory-copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy through both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
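
    Of the three techniques, communication aggregation is the simplest to sketch: buffer outgoing per-event messages by destination and send one combined message per neighbor per step. The mpi4py fragment below illustrates the pattern only; the event format and tag are hypothetical, and this is not code from the paper.

```python
from collections import defaultdict
from mpi4py import MPI

comm = MPI.COMM_WORLD

def exchange_boundary_events(events, neighbors):
    """Aggregate outgoing events per destination rank, so each neighbor gets
    one combined message per step instead of one message per event.
    `events` is a list of (dest_rank, payload) tuples; `neighbors` lists the
    ranks this domain shares a boundary with."""
    outbox = defaultdict(list)
    for dest, payload in events:
        outbox[dest].append(payload)

    # one non-blocking send per neighbor (empty list if nothing to report)
    reqs = [comm.isend(outbox.get(n, []), dest=n, tag=7) for n in neighbors]
    inbox = {n: comm.recv(source=n, tag=7) for n in neighbors}
    for r in reqs:
        r.wait()
    return inbox
```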

  20. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
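
    The distribution idea, keying spectra and candidate peptides by precursor-mass bin so that each reducer scores only plausible pairs, can be mimicked in plain Python. The binning scheme and record fields below are illustrative assumptions, and the score argument merely stands in for the K-score function (real schemes also probe adjacent bins to handle masses near a bin edge).

```python
from collections import defaultdict

def map_phase(spectra, peptides, bin_width=1.0):
    """Key spectra and candidate peptides by a precursor-mass bin, so that a
    reducer only ever scores pairs whose masses could plausibly match."""
    bins = defaultdict(lambda: {"spectra": [], "peptides": []})
    for s in spectra:
        bins[int(s["mass"] / bin_width)]["spectra"].append(s)
    for p in peptides:
        bins[int(p["mass"] / bin_width)]["peptides"].append(p)
    return bins

def reduce_phase(bins, score):
    """Within each bin, score all spectrum/peptide pairs and keep the best
    peptide per spectrum."""
    best = {}
    for group in bins.values():
        for s in group["spectra"]:
            for p in group["peptides"]:
                val = score(s, p)
                if s["id"] not in best or val > best[s["id"]][1]:
                    best[s["id"]] = (p["seq"], val)
    return best

# Toy run with a dummy scorer (closer mass = higher score)
spectra = [{"id": 1, "mass": 500.3}, {"id": 2, "mass": 742.1}]
peptides = [{"seq": "PEPTIDE", "mass": 500.4}, {"seq": "PROTEIN", "mass": 742.0}]
print(reduce_phase(map_phase(spectra, peptides), lambda s, p: -abs(s["mass"] - p["mass"])))
```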

  1. Parallel and Scalable Clustering and Classification for Big Data in Geosciences

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2015-12-01

    Machine learning, data mining, and statistical computing are common techniques used to perform analysis in the earth sciences. This contribution will focus on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open-source parallel and scalable DBSCAN implementation will be discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation will be discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
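
    DBSCAN's appeal for anomaly detection is that points belonging to no dense region are labeled as noise. The minimal scikit-learn example below uses synthetic two-regime data standing in for sensor readings; the eps and min_samples values are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# two dense regimes plus a couple of outliers, standing in for sensor readings
data = np.vstack([
    rng.normal(0.0, 0.1, size=(100, 2)),
    rng.normal(3.0, 0.1, size=(100, 2)),
    [[1.5, 1.5], [-2.0, 4.0]],
])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(data)  # -1 marks noise/outliers
print("clusters:", len(set(labels) - {-1}), "outliers:", (labels == -1).sum())
```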

  2. Molecular engineering with artificial atoms: designing a material platform for scalable quantum spintronics and photonics

    NASA Astrophysics Data System (ADS)

    Doty, Matthew F.; Ma, Xiangyu; Zide, Joshua M. O.; Bryant, Garnett W.

    2017-09-01

    Self-assembled InAs Quantum Dots (QDs) are often called "artificial atoms" and have long been of interest as components of quantum photonic and spintronic devices. Although there has been substantial progress in demonstrating optical control of both single spins confined to a single QD and entanglement between two separated QDs, the path toward scalable quantum photonic devices based on spins remains challenging. Quantum Dot Molecules, which consist of two closely-spaced InAs QDs, have unique properties that can be engineered with the solid state analog of molecular engineering in which the composition, size, and location of both the QDs and the intervening barrier are controlled during growth. Moreover, applied electric, magnetic, and optical fields can be used to modulate, in situ, both the spin and optical properties of the molecular states. We describe how the unique photonic properties of engineered Quantum Dot Molecules can be leveraged to overcome long-standing challenges to the creation of scalable quantum devices that manipulate single spins via photonics.

  3. Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Glasser, Alan H.

    2008-11-01

    Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for the solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.

  4. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tock, Yoav; Mandler, Benjamin; Moreira, Jose

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation's lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra-large scales. The resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux and, using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  5. Scalable continuous flow synthesis of ZnO nanorod arrays in 3-D ceramic honeycomb substrates for low-temperature desulfurization

    DOE PAGES

    Wang, Sibo; Wu, Yunchao; Miao, Ran; ...

    2017-07-26

    Scalable and cost-effective synthesis and assembly of technologically important nanostructures in three-dimensional (3D) substrates hold keys to bridging the demonstrated nanotechnologies in academia with industrially relevant scalable manufacturing. In this paper, using ZnO nanorod arrays as an example, a hydrothermal-based continuous flow synthesis (CFS) method is successfully used to integrate the nano-arrays in multi-channeled monolithic cordierite. Compared to the batch process, CFS enhances the average growth rate of nano-arrays by 125%, with the average length increasing from 2 μm to 4.5 μm within the same growth time of 4 hours. The precursor utilization efficiency of CFS is enhanced by 9 times compared to that of the batch process by preserving the majority of precursors in recyclable solution. Computational fluid dynamics simulation suggests a steady-state solution flow and mass transport inside the channels of honeycomb substrates, giving rise to steady and consecutive growth of ZnO nano-arrays with an average length of 10 μm in 12 h. The monolithic ZnO nano-array-integrated cordierite obtained through CFS shows enhanced low-temperature (200 °C) desulfurization capacity and recyclability in comparison to ZnO powder wash-coated cordierite. This can be attributed to exposed ZnO {101̄0} planes, better dispersion and stronger interactions between sorbent and reactant in the ZnO nanorod arrays, as well as the sintering resistance of nano-array configurations during sulfidation-regeneration cycles. Finally, with the demonstrated scalable synthesis and desulfurization performance of ZnO nano-arrays, a promising, industrially relevant integration strategy is provided to fabricate metal oxide nano-array-based monolithic devices for various environmental and energy applications.

  6. Scalable continuous flow synthesis of ZnO nanorod arrays in 3-D ceramic honeycomb substrates for low-temperature desulfurization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Sibo; Wu, Yunchao; Miao, Ran

    Scalable and cost-effective synthesis and assembly of technologically important nanostructures in three-dimensional (3D) substrates hold keys to bridging the demonstrated nanotechnologies in academia with industrially relevant scalable manufacturing. In this paper, using ZnO nanorod arrays as an example, a hydrothermal-based continuous flow synthesis (CFS) method is successfully used to integrate the nano-arrays in multi-channeled monolithic cordierite. Compared to the batch process, CFS enhances the average growth rate of nano-arrays by 125%, with the average length increasing from 2 μm to 4.5 μm within the same growth time of 4 hours. The precursor utilization efficiency of CFS is enhanced by 9 times compared to that of the batch process by preserving the majority of precursors in recyclable solution. Computational fluid dynamics simulation suggests a steady-state solution flow and mass transport inside the channels of honeycomb substrates, giving rise to steady and consecutive growth of ZnO nano-arrays with an average length of 10 μm in 12 h. The monolithic ZnO nano-array-integrated cordierite obtained through CFS shows enhanced low-temperature (200 °C) desulfurization capacity and recyclability in comparison to ZnO powder wash-coated cordierite. This can be attributed to exposed ZnO {101̄0} planes, better dispersion and stronger interactions between sorbent and reactant in the ZnO nanorod arrays, as well as the sintering resistance of nano-array configurations during sulfidation-regeneration cycles. Finally, with the demonstrated scalable synthesis and desulfurization performance of ZnO nano-arrays, a promising, industrially relevant integration strategy is provided to fabricate metal oxide nano-array-based monolithic devices for various environmental and energy applications.

  7. Developing and Trialling an independent, scalable and repeatable IT-benchmarking procedure for healthcare organisations.

    PubMed

    Liebe, J D; Hübner, U

    2013-01-01

    Continuous improvement of IT performance in healthcare organisations requires actionable performance indicators, regularly conducted independent measurements, and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the aforementioned requirements. We chose a well-established, regularly conducted (inter)national IT survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) the opportunity to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT systems and functions, global user satisfaction, and the resources of the IT department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results, a new benchmarking round which includes process indicators is currently being conducted.

  8. Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.

    PubMed

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2016-01-01

    The discrete periodic Radon transform (DPRT) has been used extensively in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions that avoid the floating-point arithmetic associated with the use of the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and by the large number of memory accesses required. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles and do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N² additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251×251 image, using approximately 25% fewer flip-flops than required for a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log₂ N⌉ + 1 and 2N + 3⌈log₂ N⌉ + B + 2 clock cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more 1-bit additions than the systolic implementation, providing a tradeoff between speed and additional 1-bit additions. All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL) and validated using a Field-Programmable Gate Array (FPGA) implementation.
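
    As a rough illustration of what the adder trees in this architecture evaluate, the following is a minimal NumPy sketch of the forward DPRT under one common convention (N wrapped diagonal projections plus one row-sum projection); it illustrates the transform only, not the paper's hardware design, and the function name is ours:

        import numpy as np

        def forward_dprt(f):
            """Forward DPRT of an N x N image (N prime): N "wrapped"
            diagonal projections plus one extra row-sum projection."""
            f = np.asarray(f)
            N = f.shape[0]
            R = np.zeros((N + 1, N), dtype=f.dtype)
            for m in range(N):          # projection direction
                for d in range(N):      # displacement within the projection
                    # Sum the image along the wrapped line i = (d + m*j) mod N.
                    R[m, d] = sum(f[(d + m * j) % N, j] for j in range(N))
            R[N, :] = f.sum(axis=1)     # the additional "horizontal" projection
            return R

    Each output sample is a sum of N pixels; the paper's contribution is evaluating many such sums per clock cycle with fixed-point adder trees instead of serial loops.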

  9. Sandboxes for Model-Based Inquiry

    ERIC Educational Resources Information Center

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-01-01

    In this article, we introduce a class of constructionist learning environments that we call "Emergent Systems Sandboxes" ("ESSs"), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual…

  10. Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.

    2017-12-01

    THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and automatic configuration management and deployment, this service usually suffers service downtimes and time-consuming configuration tasks, especially under the intensive use that is common within scientific communities (e.g., climate). Instead of the typical installation and configuration of a single or multiple independent, manually configured THREDDS servers, this work presents an automatically provisioned, deployed and orchestrated cluster of THREDDS servers. The solution is based on Ansible playbooks, used to automatically control the deployment and configuration of the infrastructure and to manage the datasets available in the THREDDS instances. The playbooks are based on modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of THREDDS server instances. This implementation allows different infrastructure and deployment scenarios to be configured, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any worker fails another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a single THREDDS worker. In conclusion, the proposed configuration represents a significant improvement over configurations based on non-collaborative THREDDS instances.

  11. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things

    PubMed Central

    Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet

    2017-01-01

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. Performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things. PMID:28696393

  12. Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things.

    PubMed

    Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen

    2017-07-11

    As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. Performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

  13. Can lipid nanoparticles improve intestinal absorption?

    PubMed

    Mendes, M; Soares, H T; Arnaut, L G; Sousa, J J; Pais, A A C C; Vitorino, C

    2016-12-30

    Lipid nanoparticles and their multiple designs have been considered appealing nanocarrier systems. Bringing the benefits of these nanosystems together with conventional coating technology clearly results in product differentiation. This work aimed at developing an innovative solid dosage form for oral administration based on tableting nanostructured lipid carriers (NLC) coated with conventional polymer agents. NLC dispersions co-encapsulating olanzapine and simvastatin (Combo-NLC) were produced by high-pressure homogenization, and evaluated in terms of scalability, drying procedure, tableting and performance from in vitro release, cytotoxicity and intestinal permeability standpoints. Factorial design indicated that scaling up the NLC production is clearly feasible. Spray-drying was the method selected to obtain dry particles, not only because it consists of a single-step procedure, but also because it facilitates the coating process of NLC with different polymers. Modifying the NLC formulations with the polymers allowed distinct release mechanisms to be obtained, comprising immediate, delayed and prolonged release. Sureteric:Combo-NLC provided a low cytotoxicity profile, along with ca. 12-fold (olanzapine) and 3-fold (simvastatin) higher intestinal permeability compared to commercial tablets. Such findings can be ascribed to the drug protection and control over release promoted by NLC, supporting them as a versatile platform able to be modified according to the intended needs. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Logic-Based Retrieval: Technology for Content-Oriented and Analytical Querying of Patent Data

    NASA Astrophysics Data System (ADS)

    Klampanos, Iraklis Angelos; Wu, Hengzhi; Roelleke, Thomas; Azzam, Hany

    Patent searching is a complex retrieval task. An initial document search is only the starting point of a chain of searches and decisions that need to be made by patent searchers. Keyword-based retrieval is adequate for document searching, but it is not suitable for modelling comprehensive retrieval strategies. DB-like and logical approaches are the state-of-the-art techniques for modelling strategies, reasoning and decision making. In this paper we present the application of logical retrieval to patent searching. The two grand challenges are expressiveness and scalability, where a high degree of expressiveness usually means a loss in scalability. In this paper we report how to maintain scalability while offering the expressiveness of logical retrieval required for solving patent search tasks. We present background on logical retrieval, and show how to model data-source selection and result fusion. Moreover, we demonstrate the modelling of a retrieval strategy, a technique by which patent professionals are able to express, store and exchange their strategies and rationales when searching patents or when making decisions. An overview of the architecture and technical details complement the paper, while the evaluation reports preliminary results on how query processing times can be guaranteed, and how quality is affected by trading off responsiveness.

  15. Segmentation of breast ultrasound images based on active contours using neutrosophic theory.

    PubMed

    Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A

    2018-04-01

    Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis have suggested that it can assist physicians in diagnosis. Definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper introduces a novel method to segment breast ultrasound images. It utilizes a combination of region-based active contours and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, due to the inherent speckle noise and low contrast of these images, we have utilized a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. The paper then presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. This method has been applied to 36 breast ultrasound images. It achieves a true-positive rate, false-positive rate, and similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional active contour segmentation methods, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.

  16. Large-scale generation of human iPSC-derived neural stem cells/early neural progenitor cells and their neuronal differentiation.

    PubMed

    D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonkar, Vishwajit L

    2014-01-01

    Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or not scalable. We developed a protocol for large-scale generation of neural stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1)-positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins, and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.

  17. Scalable microcarrier-based manufacturing of mesenchymal stem/stromal cells.

    PubMed

    de Soure, António M; Fernandes-Platzgummer, Ana; da Silva, Cláudia L; Cabral, Joaquim M S

    2016-10-20

    Due to their unique features, mesenchymal stem/stromal cells (MSC) have been exploited in clinical settings as therapeutic candidates for the treatment of a variety of diseases. However, the success in obtaining clinically-relevant MSC numbers for cell-based therapies is dependent on efficient isolation and ex vivo expansion protocols, able to comply with good manufacturing practices (GMP). In this context, the 2-dimensional static culture systems typically used for the expansion of these cells present several limitations that may lead to reduced cell numbers and compromise cell functions. Furthermore, many studies in the literature report the expansion of MSC using fetal bovine serum (FBS)-supplemented medium, which has been critically rated by regulatory agencies. Alternative platforms for the scalable manufacturing of MSC have been developed, namely using microcarriers in bioreactors, with also a considerable number of studies now reporting the production of MSC using xenogeneic/serum-free medium formulations. In this review we provide a comprehensive overview on the scalable manufacturing of human mesenchymal stem/stromal cells, depicting the various steps involved in the process from cell isolation to ex vivo expansion, using different cell tissue sources and culture medium formulations and exploiting bioprocess engineering tools namely microcarrier technology and bioreactors. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Low-Noise Transimpedance Amplifier for BLM-Based Ion Channel Recording.

    PubMed

    Crescentini, Marco; Bennati, Marco; Saha, Shimul Chandra; Ivica, Josip; de Planque, Maurits; Morgan, Hywel; Tartagni, Marco

    2016-05-19

    High-throughput screening (HTS) using ion channel recording is a powerful drug discovery technique in pharmacology. Ion channel recording with planar bilayer lipid membranes (BLM) is scalable and has very high sensitivity. A HTS system based on BLM ion channel recording faces three main challenges: (i) design of scalable microfluidic devices; (ii) design of compact ultra-low-noise transimpedance amplifiers able to detect currents in the pA range with bandwidth >10 kHz; (iii) design of compact, robust and scalable systems that integrate these two elements. This paper presents a low-noise transimpedance amplifier with integrated A/D conversion realized in CMOS 0.35 μm technology. The CMOS amplifier acquires currents in the range ±200 pA and ±20 nA, with 100 kHz bandwidth while dissipating 41 mW. An integrated digital offset compensation loop balances any voltage offsets from the Ag/AgCl electrodes. The measured open-input input-referred noise current is as low as 4 fA/√Hz at the ±200 pA range. The current amplifier is embedded in an integrated platform, together with a microfluidic device, for current recording from ion channels. Gramicidin-A, α-haemolysin and KcsA potassium channels have been used to validate both the platform and the current-to-digital converter.
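
    As a quick sanity check on these figures, the quoted noise density integrates over the full bandwidth roughly as follows (a brick-wall approximation assuming a flat 4 fA/√Hz spectrum; the real noise shape will differ):

        import math

        # RMS input-referred noise over a flat 100 kHz bandwidth
        density = 4e-15          # A/sqrt(Hz)
        bandwidth = 100e3        # Hz
        rms = density * math.sqrt(bandwidth)
        print(f"{rms * 1e12:.2f} pA rms")   # ~1.26 pA, well below the 200 pA range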

  19. A Low-Noise Transimpedance Amplifier for BLM-Based Ion Channel Recording

    PubMed Central

    Crescentini, Marco; Bennati, Marco; Saha, Shimul Chandra; Ivica, Josip; de Planque, Maurits; Morgan, Hywel; Tartagni, Marco

    2016-01-01

    High-throughput screening (HTS) using ion channel recording is a powerful drug discovery technique in pharmacology. Ion channel recording with planar bilayer lipid membranes (BLM) is scalable and has very high sensitivity. A HTS system based on BLM ion channel recording faces three main challenges: (i) design of scalable microfluidic devices; (ii) design of compact ultra-low-noise transimpedance amplifiers able to detect currents in the pA range with bandwidth >10 kHz; (iii) design of compact, robust and scalable systems that integrate these two elements. This paper presents a low-noise transimpedance amplifier with integrated A/D conversion realized in CMOS 0.35 μm technology. The CMOS amplifier acquires currents in the range ±200 pA and ±20 nA, with 100 kHz bandwidth while dissipating 41 mW. An integrated digital offset compensation loop balances any voltage offsets from the Ag/AgCl electrodes. The measured open-input input-referred noise current is as low as 4 fA/√Hz at the ±200 pA range. The current amplifier is embedded in an integrated platform, together with a microfluidic device, for current recording from ion channels. Gramicidin-A, α-haemolysin and KcsA potassium channels have been used to validate both the platform and the current-to-digital converter. PMID:27213382

  20. Gene Delivery into Plant Cells for Recombinant Protein Production

    PubMed Central

    Chen, Qiang

    2015-01-01

    Recombinant proteins are primarily produced from cultures of mammalian, insect, and bacteria cells. In recent years, the development of deconstructed virus-based vectors has allowed plants to become a viable platform for recombinant protein production, with advantages in versatility, speed, cost, scalability, and safety over the current production paradigms. In this paper, we review the recent progress in the methodology of agroinfiltration, a solution to overcome the challenge of transgene delivery into plant cells for large-scale manufacturing of recombinant proteins. General gene delivery methodologies in plants are first summarized, followed by extensive discussion on the application and scalability of each agroinfiltration method. New development of a spray-based agroinfiltration and its application on field-grown plants is highlighted. The discussion of agroinfiltration vectors focuses on their applications for producing complex and heteromultimeric proteins and is updated with the development of bridge vectors. Progress on agroinfiltration in Nicotiana and non-Nicotiana plant hosts is subsequently showcased in context of their applications for producing high-value human biologics and low-cost and high-volume industrial enzymes. These new advancements in agroinfiltration greatly enhance the robustness and scalability of transgene delivery in plants, facilitating the adoption of plant transient expression systems for manufacturing recombinant proteins with a broad range of applications. PMID:26075275

  1. Scalable and sustainable synthesis of carbon microspheres via a purification-free strategy for sodium-ion capacitors

    NASA Astrophysics Data System (ADS)

    Wang, Shijie; Wang, Rutao; Zhang, Yabin; Jin, Dongdong; Zhang, Li

    2018-03-01

    Sodium-based energy storage has received a great deal of interest due to the virtually inexhaustible sodium reserve, but scalable and sustainable strategies for synthesizing carbon-based materials with suitable interlayer spacing and large sodium storage capacity are yet to be fully investigated. Carbon microspheres, with their regular geometry, non-graphitic character, and stable nature, are promising candidates, yet the synthetic methods are usually complex and energy-consuming. In this regard, we report a scalable, purification-free strategy to synthesize carbon microspheres directly from five kinds of fresh juice. The as-synthesized carbon microspheres exhibit a dilated interlayer distance of 0.375 nm that facilitates Na+ uptake and release. For example, such carbon microsphere anodes have a specific capacity of 183.9 mAh g-1 at 50 mA g-1 and exhibit ultra-stability (99.0% capacity retention) after 10,000 cycles. Moreover, via facile activation, highly porous carbon microsphere cathodes are fabricated that show much higher energy density at high rate than commercial activated carbon. Coupling the compelling anodes and cathodes above, novel sodium-ion capacitors reach working potentials of up to 4.0 V, deliver a maximum energy density of 52.2 Wh kg-1, and exhibit an acceptable capacity retention of 85.7% after 2000 cycles.

  2. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290× faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and a recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
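
    The two ingredients named here, minimizer sampling and MinHash similarity estimation, can be sketched in a few lines of Python (a toy illustration of the general techniques, not the authors' implementation; the function names and bottom-k sketch size are ours):

        import hashlib

        def kmer_hashes(seq, k):
            """64-bit hashes of all k-mers in a sequence."""
            return [int.from_bytes(
                        hashlib.blake2b(seq[i:i + k].encode(),
                                        digest_size=8).digest(), "big")
                    for i in range(len(seq) - k + 1)]

        def minimizers(seq, k, w):
            """Keep the smallest k-mer hash in each window of w k-mers."""
            h = kmer_hashes(seq, k)
            return {min(h[i:i + w]) for i in range(len(h) - w + 1)}

        def minhash_jaccard(a, b, s=256):
            """Bottom-k MinHash estimate of the Jaccard similarity of two
            hash sets: the fraction of the s smallest union values that
            also lie in the intersection."""
            bottom = sorted(a | b)[:s]
            inter = a & b
            return sum(1 for x in bottom if x in inter) / len(bottom)

    The estimated Jaccard similarity between a read's minimizer set and a reference window's can then be converted into a k-mer identity estimate, which is how such a mapper reports identity without computing an alignment.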

  3. Quality Matters™: An Educational Input in an Ongoing Design-Based Research Project

    ERIC Educational Resources Information Center

    Adair, Deborah; Shattuck, Kay

    2015-01-01

    Quality Matters (QM) has been transforming established best practices and online education-based research into an applicable, scalable course level improvement process for the last decade. In this article, the authors describe QM as an ongoing design-based research project and an educational input for improving online education.

  4. An investigation into scalability and compliance for triple patterning with stitches for metal 1 at the 14nm node

    NASA Astrophysics Data System (ADS)

    Cork, Christopher; Miloslavsky, Alexander; Friedberg, Paul; Luk-Pat, Gerry

    2013-04-01

    Lithographers had hoped that single patterning would be enabled at the 20nm node by way of EUV lithography. However, due to delays in EUV readiness, double patterning with 193i lithography is currently relied upon for volume production of the 20nm node's metal 1 layer. At the 14nm, and likely at the 10nm node, LE-LE-LE triple patterning technology (TPT) is one of the favored options [1,2] for patterning local interconnect and metal 1 layers. While previous research has focused on TPT for the contact mask, metal layers offer new challenges and opportunities, in particular the ability to decompose design polygons across more than one mask. The extra flexibility offered by the third mask and the ability to leverage polygon stitching both serve to improve compliance. However, ensuring TPT compliance, the task of finding a 3-color mask decomposition for a design, is still difficult. Moreover, scalability concerns multiply the difficulty of triple patterning decomposition, which is an NP-complete problem. Indeed, previous work shows that network sizes above a few thousand nodes or polygons start to take significantly longer times to compute [3], making full-chip decomposition for arbitrary layouts impractical. In practice, metal 1 layouts can be considered as two separate problem domains, namely decomposition of standard cells and decomposition of IP blocks. Standard cells typically include only a few tens of polygons and should be amenable to fast decomposition. Successive design iterations should resolve compliance issues and improve packing density. Density improvements are multiplied repeatedly as standard cells are placed multiple times. IP blocks, on the other hand, may involve very large networks. This paper evaluates multiple approaches to triple patterning decomposition for the metal 1 layer. The benefits of polygon stitching, in particular the ability to resolve commonly encountered non-compliant layout configurations and improve packing density, are weighed against the increased difficulty of finding an optimized, legal decomposition and coping with the increased scalability challenges.
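
    At its core, checking TPT compliance of a stitch-free layout is 3-coloring of a polygon conflict graph; a minimal backtracking sketch follows (illustrative only; production decomposers use far more sophisticated pruning, and stitches would be modeled by splitting polygon nodes):

        def three_color(nodes, conflicts):
            """Assign one of three masks to each polygon so that no two
            conflicting polygons share a mask. `conflicts` is a set of
            frozenset({u, v}) pairs. Returns {node: mask} or None."""
            assign = {}

            def ok(n, c):
                return all(assign.get(m) != c for m in nodes
                           if frozenset((n, m)) in conflicts)

            def solve(i):
                if i == len(nodes):
                    return True
                for c in (0, 1, 2):              # the three masks
                    if ok(nodes[i], c):
                        assign[nodes[i]] = c
                        if solve(i + 1):
                            return True
                        del assign[nodes[i]]
                return False                     # non-compliant subgraph

            return assign if solve(0) else None

    Backtracking like this is exponential in the worst case, which is why the paper stresses the gap between small standard cells (a few tens of polygons) and large IP blocks.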

  5. Visualization for genomics: the Microbial Genome Viewer.

    PubMed

    Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J

    2004-07-22

    A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
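
    The chromosome-wheel drawing boils down to emitting SVG annular segments whose angles are linear in genome coordinates; a small sketch of that mapping (our own simplification, not the tool's code):

        import math

        def wheel_segment(a0, a1, r_in, r_out, color, cx=500, cy=500):
            """SVG <path> for one gene drawn as an annular segment
            between angles a0 and a1 (degrees) on a chromosome wheel."""
            def pt(r, deg):
                a = math.radians(deg)
                return f"{cx + r*math.cos(a):.1f},{cy + r*math.sin(a):.1f}"
            large = 1 if (a1 - a0) % 360 > 180 else 0
            return (f'<path d="M {pt(r_in, a0)} '
                    f'A {r_in} {r_in} 0 {large} 1 {pt(r_in, a1)} '
                    f'L {pt(r_out, a1)} '
                    f'A {r_out} {r_out} 0 {large} 0 {pt(r_out, a0)} Z" '
                    f'fill="{color}"/>')

        # A gene spanning 12% of a circular chromosome, starting at 30 degrees:
        print('<svg xmlns="http://www.w3.org/2000/svg" width="1000" height="1000">'
              + wheel_segment(30, 30 + 0.12 * 360, 380, 420, "#3366cc")
              + '</svg>')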

  6. Cavity-Mediated Coherent Coupling between Distant Quantum Dots

    NASA Astrophysics Data System (ADS)

    Nicolí, Giorgio; Ferguson, Michael Sven; Rössler, Clemens; Wolfertz, Alexander; Blatter, Gianni; Ihn, Thomas; Ensslin, Klaus; Reichl, Christian; Wegscheider, Werner; Zilberberg, Oded

    2018-06-01

    Scalable architectures for quantum information technologies require one to selectively couple long-distance qubits while suppressing environmental noise and cross talk. In semiconductor materials, the coherent coupling of a single spin on a quantum dot to a cavity hosting fermionic modes offers a new solution to this technological challenge. Here, we demonstrate coherent coupling between two spatially separated quantum dots using an electronic cavity design that takes advantage of whispering-gallery modes in a two-dimensional electron gas. The cavity-mediated, long-distance coupling effectively minimizes undesirable direct cross talk between the dots and defines a scalable architecture for all-electronic semiconductor-based quantum information processing.

  7. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-03-09

    In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the availability of Arduino input ports. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability for monitoring the voltage of a PEFC at cell level.
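
    A host-side reader for such a DAQ stream can be sketched with pyserial (the published system uses NI LabVIEW; the serial frame format, port name and 0.4 V warning threshold below are our illustrative assumptions):

        import serial  # pyserial

        def monitor_cells(port="/dev/ttyUSB0", baud=115200, n_cells=24):
            """Read comma-separated per-cell voltages streamed by the
            Arduino DAQ and flag cells below a warning threshold."""
            with serial.Serial(port, baud, timeout=1.0) as link:
                while True:
                    line = link.readline().decode("ascii", errors="ignore").strip()
                    if not line:
                        continue
                    volts = [float(v) for v in line.split(",")[:n_cells]]
                    low = [i for i, v in enumerate(volts) if v < 0.4]
                    print(f"stack {sum(volts):6.2f} V, low cells: {low or 'none'}")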

  8. The novel high-performance 3-D MT inverse solver

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey

    2016-04-01

    We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. The concepts of separation of concerns and single responsibility run through the implementation of the solver. As a forward modelling engine, a modern scalable solver, extrEMe, based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to efficiently calculate the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of the available computational resources for a given problem statement. To parameterize the inverse domain, a so-called mask parameterization is implemented, which means that one can merge any subset of forward-modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.

  9. OligoIS: Scalable Instance Selection for Class-Imbalanced Data Sets.

    PubMed

    García-Pedrajas, Nicolás; Perez-Rodríguez, Javier; de Haro-García, Aida

    2013-02-01

    In current research, an enormous amount of information is constantly being produced, which poses a challenge for data mining algorithms. Many of the problems in extremely active research areas, such as bioinformatics, security and intrusion detection, or text mining, share the following two features: large data sets and a class-imbalanced distribution of samples. Although many methods have been proposed for dealing with class-imbalanced data sets, most of these methods are not scalable to the very large data sets common to those research fields. In this paper, we propose a new approach to dealing with the class-imbalance problem that is scalable to data sets with many millions of instances and hundreds of features. This proposal is based on the divide-and-conquer principle combined with application of the selection process to balanced subsets of the whole data set. The divide-and-conquer principle allows the execution of the algorithm in linear time. Furthermore, the proposed method is easy to implement in a parallel environment and can work without loading the whole data set into memory. Using 40 class-imbalanced medium-sized data sets, we demonstrate our method's ability to improve the results of state-of-the-art instance selection methods for class-imbalanced data sets. Using three very large data sets, we show the scalability of our proposal to millions of instances and hundreds of features.
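
    The divide-and-conquer principle described here can be sketched independently of the specific base selector (a schematic of the idea, not the OligoIS algorithm itself; the voting scheme and parameter names are ours):

        import numpy as np

        def divide_and_conquer_is(X_maj, X_min, select, chunk=1000,
                                  rounds=3, votes=2):
            """Instance selection for imbalanced data: each round pairs a
            random chunk of the majority class with the whole minority
            class, applies the base method `select` (which returns a
            boolean keep-mask) to that balanced subset, and keeps the
            majority instances selected in at least `votes` rounds."""
            n = len(X_maj)
            tally = np.zeros(n, dtype=int)
            for _ in range(rounds):
                order = np.random.permutation(n)
                for start in range(0, n, chunk):
                    idx = order[start:start + chunk]
                    X = np.vstack([X_maj[idx], X_min])
                    y = np.concatenate([np.zeros(len(idx)), np.ones(len(X_min))])
                    kept = select(X, y)
                    tally[idx] += kept[:len(idx)].astype(int)
            return tally >= votes      # keep-mask over the majority class

    Because every instance is touched a bounded number of times per round, the whole procedure runs in time linear in the data set size, which is the property the paper exploits.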

  10. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    PubMed Central

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable to carry out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows a continuous tracking of cell voltage. Scalability, flexibility, easy-to-use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  11. [ANTHROPOMETRIC PROPORTIONALITY METHOD ELECTION IN A SPORT POPULATION; COMPARISON OF THREE METHODS].

    PubMed

    Almagià, Atilio; Araneda, Alberto; Sánchez, Javier; Sánchez, Patricio; Zúñiga, Maximiliano; Plaza, Paula

    2015-09-01

    The application of proportionality models based on ideal proportions could have a great impact on high-performance sport, since the best athletes tend to resemble each other anthropometrically. The objective of this study was to compare the following anthropometric proportionality methods: Phantom, Combined and Scalable, in male champion university Chilean soccer players in 2012 and 2013, using South American professional soccer players as the criterion, in order to find the proportionality method most appropriate for sports populations. Twenty-two kinanthropometric variables were measured, according to the ISAK protocol, in a sample of 13 members of the men's soccer team of the Pontificia Universidad Católica de Valparaíso. The Z-values of the anthropometric variables for each method were obtained using their respective equations. South American professional soccer players were used as the criterion population. A similar trend was observed across the three methods. Significant differences (p < 0.05) were found in some Z-values of the Scalable and Combined methods compared to the Phantom method. No significant differences were observed between the results obtained by the Combined and Scalable methods, except in the wrist, thigh and hip perimeters. It is more appropriate to use the Scalable method over the Combined and Phantom methods for the comparison of Z-values of kinanthropometric variables in athletes of the same discipline. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
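
    For orientation, the Phantom z-score that serves as the baseline in such comparisons is conventionally written (the classic Ross-Wilson formulation; the Combined and Scalable methods differ in the reference values they substitute):

        z = \frac{1}{s}\left[ v \left( \frac{170.18}{h} \right)^{d} - P \right]

    where v is the athlete's measured value, h the stature in cm, d the dimensional exponent (1 for lengths and girths, 2 for areas, 3 for masses), and P and s the Phantom reference mean and standard deviation for that variable.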

  12. Cloud-Based Virtual Laboratory for Network Security Education

    ERIC Educational Resources Information Center

    Xu, Le; Huang, Dijiang; Tsai, Wei-Tek

    2014-01-01

    Hands-on experiments are essential for computer network security education. Existing laboratory solutions usually require significant effort to build, configure, and maintain and often do not support reconfigurability, flexibility, and scalability. This paper presents a cloud-based virtual laboratory education platform called V-Lab that provides a…

  13. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices under various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve these problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method improves coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
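
    One plausible form of such a viewing-probability-weighted cost, written in our own notation rather than the paper's, is

        J_{\mathrm{joint}} = p_{\mathrm{BL}} D_{\mathrm{BL}} + p_{\mathrm{EL}} D_{\mathrm{EL}} + \lambda \, (R_{\mathrm{BL}} + R_{\mathrm{EL}})

    where D and R are the distortion and rate of the base (BL) and enhancement (EL) layers, λ is the Lagrange multiplier, and p_EL = 1 − p_BL is the probability, derived from the packet loss rate, that the enhancement layer is actually received and viewed. Minimizing this joint cost over the per-CTU QP choices shifts bits toward the layer more likely to be displayed.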

  14. SeqHBase: a big data toolset for family based sequencing data analysis.

    PubMed

    He, Min; Person, Thomas N; Hebbring, Scott J; Heinzen, Ethan; Ye, Zhan; Schrodi, Steven J; McPherson, Elizabeth W; Lin, Simon M; Peissig, Peggy L; Brilliant, Murray H; O'Rawe, Jason; Robison, Reid J; Lyon, Gholson J; Wang, Kai

    2015-04-01

    Whole-genome sequencing (WGS) and whole-exome sequencing (WES) technologies are increasingly used to identify disease-contributing mutations in human genomic studies. It can be a significant challenge to process such data, especially when a large family or cohort is sequenced. Our objective was to develop a big data toolset to efficiently manipulate genome-wide variants, functional annotations and coverage, and to conduct family-based sequencing data analysis. Hadoop is a framework for reliable, scalable, distributed processing of large data sets using MapReduce programming models. Based on Hadoop and HBase, we developed SeqHBase, a big-data-based toolset for analysing family-based sequencing data to detect de novo, inherited homozygous, or compound heterozygous mutations that may contribute to disease manifestations. SeqHBase takes as input BAM files (for coverage at every site), variant call format (VCF) files (for variant calls) and functional annotations (for variant prioritisation). We applied SeqHBase to a 5-member nuclear family and a 10-member 3-generation family with WGS data, as well as a 4-member nuclear family with WES data. Analysis times were almost linearly scalable with the number of data nodes. With 20 data nodes, SeqHBase took about 5 seconds to analyse WES familial data and approximately 1 minute to analyse WGS familial data. These results demonstrate SeqHBase's high efficiency and scalability, which is necessary as WGS and WES rapidly become standard methods for studying the genetics of familial disorders. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
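
    The trio logic behind flagging de novo and compound heterozygous candidates is simple once genotypes are in hand; a sketch of that per-variant test follows (the distributed Hadoop/HBase machinery, which is SeqHBase's actual contribution, is omitted, and the genotype encoding is our assumption):

        def is_de_novo(child, father, mother):
            """Child carries an alternate allele seen in neither parent.
            Genotypes are allele tuples, e.g. (0, 1) het, (0, 0) hom-ref."""
            parental = set(father) | set(mother)
            return any(a != 0 and a not in parental for a in child)

        def is_compound_het(child_gts, father_gts, mother_gts):
            """Two het variants in one gene, one inherited from each parent
            (phase approximated from the parental genotypes)."""
            paternal = [i for i, g in enumerate(child_gts)
                        if 1 in g and 1 in father_gts[i] and 1 not in mother_gts[i]]
            maternal = [i for i, g in enumerate(child_gts)
                        if 1 in g and 1 in mother_gts[i] and 1 not in father_gts[i]]
            return bool(paternal) and bool(maternal)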

  15. Scalable alignment and transfer of nanowires based on oriented polymer nanofibers.

    PubMed

    Yan, Shancheng; Lu, Lanxin; Meng, Hao; Huang, Ningping; Xiao, Zhongdang

    2010-03-05

    We develop a simple and scalable method based on oriented polymer nanofiber films for the parallel assembly and transfer of nanowires at high density. Nanowires dispersed in solution are aligned and selectively deposited at the central space of parallel nanochannels formed by the well-oriented nanofibers as a result of evaporation-induced flow and capillarity. A general contact printing method is used to realize the transfer of the nanowires from the donor nanofiber film to a receiver substrate. The mechanism, which involves ordered alignment of nanowires on oriented polymer nanofiber films, is also explored with an evaporation model of cylindrical droplets. The simplicity of the assembly and transfer, and the facile fabrication of large-area well-oriented nanofiber films, make the present method promising for the application of nanowires, especially for the disordered nanowires synthesized by solution chemistry.

  16. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real-world applications.
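
    The essence of graph-based deadlock detection is a cycle search over a wait-for graph; a compact sketch follows (MUST's actual model also handles MPI's AND/OR wait semantics and wildcard receives, which this toy version ignores):

        def find_deadlock_cycle(wait_for):
            """Depth-first cycle search in {process: set of processes it
            waits on}; returns a cyclic witness list or None."""
            WHITE, GREY, BLACK = 0, 1, 2
            color = {}
            stack = []

            def dfs(p):
                color[p] = GREY
                stack.append(p)
                for q in wait_for.get(p, ()):
                    if color.get(q, WHITE) == GREY:      # back edge: deadlock
                        return stack[stack.index(q):] + [q]
                    if color.get(q, WHITE) == WHITE:
                        cycle = dfs(q)
                        if cycle:
                            return cycle
                color[p] = BLACK
                stack.pop()
                return None

            for p in list(wait_for):
                if color.get(p, WHITE) == WHITE:
                    cycle = dfs(p)
                    if cycle:
                        return cycle
            return None

        # Two processes blocked on each other's operations:
        assert find_deadlock_cycle({0: {1}, 1: {0}}) == [0, 1, 0]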

  17. Working towards a scalable model of problem-based learning instruction in undergraduate engineering education

    NASA Astrophysics Data System (ADS)

    Mantri, Archana

    2014-05-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing the curriculum around a set of open-ended problems (OEPs). A detailed statistical analysis of the data collected to measure the effects of traditional and PBL instruction for three courses in Electronics and Communication Engineering, namely Analog Electronics, Digital Electronics and Pulse, Digital & Switching Circuits, is presented here. It measures the effects of pedagogy, gender and cognitive styles on the knowledge, skill and attitude of the students. The study was conducted twice, with content designed around the same set of OEPs but with two different trained facilitators for all three courses. The repeatability of the effects of the independent parameters on the dependent parameters is studied and inferences are drawn.

  18. A scalable multi-photon coincidence detector based on superconducting nanowires.

    PubMed

    Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K

    2018-06-04

    Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on a superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.

  19. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.

    PubMed

    Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert

    2017-10-01

    We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6×10⁷ synaptic events per second per 16k-neuron node in the hierarchy.

  20. Generation of Light with Multimode Time-Delayed Entanglement Using Storage in a Solid-State Spin-Wave Quantum Memory.

    PubMed

    Ferguson, Kate R; Beavan, Sarah E; Longdell, Jevon J; Sellars, Matthew J

    2016-07-08

    Here, we demonstrate generating and storing entanglement in a solid-state spin-wave quantum memory with on-demand readout using the process of rephased amplified spontaneous emission (RASE). Amplified spontaneous emission (ASE), resulting from an inverted ensemble of Pr³⁺ ions doped into a Y₂SiO₅ crystal, generates entanglement between collective states of the praseodymium ensemble and the output light. The ensemble is then rephased using a four-level photon echo technique. Entanglement between the ASE and its echo is confirmed, and the inseparability violation is preserved when the RASE is stored as a spin wave for up to 5 μs. RASE is shown to be temporally multimode, with almost perfect distinguishability between two temporal modes demonstrated. These results pave the way for the use of multimode solid-state quantum memories in scalable quantum networks.

  1. Ionospheric Slant Total Electron Content Analysis Using Global Positioning System Based Estimation

    NASA Technical Reports Server (NTRS)

    Komjathy, Attila (Inventor); Mannucci, Anthony J. (Inventor); Sparks, Lawrence C. (Inventor)

    2017-01-01

    A method, system, apparatus, and computer program product provide the ability to analyze ionospheric slant total electron content (TEC) using global navigation satellite system (GNSS)-based estimation. Slant TEC is estimated for a given set of raypath geometries by fitting historical GNSS data to a specified delay model. The accuracy of the specified delay model is estimated by computing delay-estimate residuals and plotting their behavior. An ionospheric threat model is computed based on the specified delay model. Ionospheric grid delays (IGDs) and grid ionospheric vertical errors (GIVEs) are computed based on the ionospheric threat model.
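
    The fitting step described here reduces, in its simplest thin-shell form, to a linear least-squares problem; a sketch under that simplification (the patent's actual delay model is richer, and the shell height below is an assumed value):

        import numpy as np

        def fit_vertical_tec(slant_tec, elev_deg, re=6371.0, h=450.0):
            """Fit one vertical TEC value to slant observations through a
            thin-shell mapping function, returning the fit and residuals."""
            obs = np.asarray(slant_tec, dtype=float)
            el = np.radians(np.asarray(elev_deg, dtype=float))
            # Thin-shell obliquity factor: slant = M(elevation) * vertical
            m = 1.0 / np.sqrt(1.0 - (re * np.cos(el) / (re + h)) ** 2)
            A = m.reshape(-1, 1)
            vtec, *_ = np.linalg.lstsq(A, obs, rcond=None)
            return vtec[0], obs - A @ vtec   # residuals gauge model accuracy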

  2. [Application of the life sciences platform based on oracle to biomedical informations].

    PubMed

    Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao

    2008-03-01

    The life sciences platform based on Oracle database technology is introduced in this paper. By providing powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.

  3. Designing a Syntax-Based Retrieval System for Supporting Language Learning

    ERIC Educational Resources Information Center

    Tsao, Nai-Lung; Kuo, Chin-Hwa; Wible, David; Hung, Tsung-Fu

    2009-01-01

    In this paper, we propose a syntax-based text retrieval system for on-line language learning and use a fast regular expression search engine as its main component. Regular expression searches provide more scalable querying and search results than keyword-based searches. However, without a well-designed index scheme, the execution time of regular…

  4. Discovering time-lagged rules from microarray data using gene profile classifiers

    PubMed Central

    2011-01-01

    Background: Gene regulatory networks have an essential role in every process of life. In this regard, genome-wide time series data are becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results: This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delayed relationships, with any span of time between genes, from the various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for cell-cycle study, and the results were compared against several related approaches. The outcomes show that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experiments exhibited the soundness and scalability of the new method, which inferred highly related, statistically significant gene associations. Conclusions: A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results demonstrate that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. PMID:21524308
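
    A time-lagged rule of the kind GRNCOP2 searches for can be scored, in highly simplified form, as follows (a conceptual stand-in for the combinatorial-optimization machinery; the discretization threshold and function names are ours):

        import numpy as np

        def lagged_rule_score(g_a, g_b, lag, threshold=0.0):
            """Accuracy of the rule "A high at t implies B high at t+lag"
            over one discretized expression time series (lag >= 1)."""
            a = np.asarray(g_a) > threshold
            b = np.asarray(g_b) > threshold
            a, b = a[:-lag], b[lag:]          # align t with t + lag
            fires = a.sum()
            return (a & b).sum() / fires if fires else 0.0

        def best_lag(g_a, g_b, max_lag=5):
            """Return (lag, score) of the best rule over lags 1..max_lag."""
            return max(((lag, lagged_rule_score(g_a, g_b, lag))
                        for lag in range(1, max_lag + 1)),
                       key=lambda t: t[1])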

  5. A slotted access control protocol for metropolitan WDM ring networks

    NASA Astrophysics Data System (ADS)

    Baziana, P. A.; Pountourakis, I. E.

    2009-03-01

    In this study we focus on the serious scalability problems that many access protocols for WDM ring networks introduce due to the use of a dedicated wavelength per access node for either transmission or reception. We propose an efficient slotted MAC protocol suitable for WDM ring metropolitan area networks. The proposed network architecture employs a separate wavelength for control information exchange prior to data packet transmission. Each access node is equipped with a pair of tunable transceivers for data communication and a pair of fixed-tuned transceivers for control information exchange. Also, each access node includes a set of fixed delay lines for synchronization, buffering the data packets while the control information is processed. An efficient access algorithm is applied to avoid both data-wavelength and receiver collisions. In our protocol, each access node is capable of transmitting and receiving over any of the data wavelengths, addressing the scalability issues. Two different slot reuse schemes are considered: the source and the destination stripping schemes. For both schemes, performance measures are evaluated via an analytic model. The analytical results are validated by a discrete event simulation model that uses Poisson traffic sources. Simulation results show that the proposed protocol achieves efficient bandwidth utilization, especially under high load. Also, comparative simulation results prove that our protocol achieves significant performance improvement compared with other WDMA protocols that restrict transmission to a dedicated data wavelength. Finally, performance measures are explored for diverse buffer sizes and numbers of access nodes and data wavelengths.

  6. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in gaining a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.

  7. Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining

    NASA Astrophysics Data System (ADS)

    Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio

    2013-12-01

    Nowadays, networks and terminals with diverse bandwidth characteristics and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these compressed video streams were encoded with scalable video coding schemes, they would be able to adapt to heterogeneous networks and a wide range of terminals. Since the majority of multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert a non-scalable H.264/AVC bitstream into a scalable bitstream with temporal scalability, in the baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, complexity is reduced by 87% while coding efficiency is maintained.
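
    The acceleration rests on replacing an exhaustive rate-distortion search with a cheap classifier lookup trained offline. The sketch below is a hedged illustration of that pattern using scikit-learn; the feature names and labels are invented stand-ins, not the paper's actual attribute set.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(1)
        # hypothetical macroblock features: [residual energy, MV magnitude, cbp flag]
        X_train = rng.random((1000, 3))
        y_train = (X_train[:, 0] > 0.5).astype(int)   # stand-in labels: 0 = SKIP, 1 = full search

        tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

        def decide_mode(mb_features):
            # cheap prediction replaces the exhaustive rate-distortion mode search
            return "evaluate all modes" if tree.predict([mb_features])[0] else "early SKIP"

        print(decide_mode([0.8, 0.1, 0.3]))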

  8. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
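
    The hierarchy idea can be illustrated with a two-level broadcast: one communicator per shared-memory node plus one communicator linking the node leaders. The sketch below uses mpi4py and shows the general pattern only, not the Cheetah implementation.

        # run with e.g.: mpiexec -n 8 python hier_bcast.py
        from mpi4py import MPI

        world = MPI.COMM_WORLD
        # level 1: one communicator per shared-memory node
        node = world.Split_type(MPI.COMM_TYPE_SHARED)
        # level 2: a communicator connecting the node leaders (local rank 0)
        leaders = world.Split(0 if node.rank == 0 else MPI.UNDEFINED, world.rank)

        def hier_bcast(obj, root=0):
            if leaders != MPI.COMM_NULL:           # inter-node stage among leaders
                obj = leaders.bcast(obj, root=root)
            return node.bcast(obj, root=0)         # intra-node stage via shared memory

        msg = "payload" if world.rank == 0 else None
        print(world.rank, hier_bcast(msg))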

  9. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scaling up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device.
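
    The core idea is that the OS pages a memory-mapped binary edge list in and out on demand, so only the working set, not the whole file, must fit in RAM. A minimal numpy sketch; the on-disk layout (two int32 values per edge in graph.bin) is an assumption for illustration.

        import numpy as np

        # memory-map the edge list: nothing is read until it is touched
        edges = np.memmap("graph.bin", dtype=np.int32, mode="r").reshape(-1, 2)
        n = int(edges.max()) + 1                      # number of vertices (0-based ids)

        degree = np.zeros(n, dtype=np.int64)
        # stream destination ids in chunks so the resident set stays small
        for chunk in np.array_split(edges[:, 1], max(1, len(edges) // 1_000_000)):
            degree += np.bincount(chunk, minlength=n)

        print("max in-degree:", degree.max())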

  10. Scalable and expressive medical terminologies.

    PubMed

    Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J

    1996-01-01

    The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser-Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.

  11. Bio-inspired silicon nanospikes fabricated by metal-assisted chemical etching for antibacterial surfaces

    NASA Astrophysics Data System (ADS)

    Hu, Huan; Siu, Vince S.; Gifford, Stacey M.; Kim, Sungcheol; Lu, Minhua; Meyer, Pablo; Stolovitzky, Gustavo A.

    2017-12-01

    The recently discovered bactericidal properties of nanostructures on wings of insects such as cicadas and dragonflies have inspired the development of similar nanostructured surfaces for antibacterial applications. Since most antibacterial applications require nanostructures covering a considerable amount of area, a practical fabrication method needs to be cost-effective and scalable. However, most reported nanofabrication methods require either expensive equipment or a high temperature process, limiting cost efficiency and scalability. Here, we report a simple, fast, low-cost, and scalable antibacterial surface nanofabrication methodology. Our method is based on metal-assisted chemical etching that only requires etching a single crystal silicon substrate in a mixture of silver nitrate and hydrofluoric acid for several minutes. We experimentally studied the effects of etching time on the morphology of the silicon nanospikes and the bactericidal properties of the resulting surface. We discovered that 6 minutes of etching results in a surface containing silicon nanospikes with optimal geometry. The bactericidal properties of the silicon nanospikes were supported by bacterial plating results, fluorescence images, and scanning electron microscopy images.

  12. Scalable metagenomic taxonomy classification using a reference genome database

    PubMed Central

    Ames, Sasha K.; Hysom, David A.; Gardner, Shea N.; Lloyd, G. Scott; Gokhale, Maya B.; Allen, Jonathan E.

    2013-01-01

    Motivation: Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. Results: A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take <20 h on a single node 40 core large memory machine and provide new insights on the metagenomic contents of the sample. Availability: Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat Contact: allen99@llnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23828782
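
    A toy version of the two-phase design: the expensive index construction runs offline, and each read is then classified by majority vote over its k-mers. The sequences and taxa below are illustrative, and real indexes record full taxonomic lineages rather than single labels.

        from collections import Counter

        K = 8
        reference = {"taxonA": "ATCGGCTAAGGCTAATCGGA", "taxonB": "TTGGCCAATTGGCCAATTGG"}

        index = {}                                     # k-mer -> set of taxa (offline step)
        for taxon, genome in reference.items():
            for i in range(len(genome) - K + 1):
                index.setdefault(genome[i:i + K], set()).add(taxon)

        def classify(read):
            votes = Counter(t for i in range(len(read) - K + 1)
                              for t in index.get(read[i:i + K], ()))
            return votes.most_common(1)[0][0] if votes else "unclassified"

        print(classify("GGCTAATCGG"))   # shares k-mers with taxonA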

  13. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, that is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.

  14. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    PubMed

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.

  15. An integrated semiconductor device enabling non-optical genome sequencing.

    PubMed

    Rothberg, Jonathan M; Hinz, Wolfgang; Rearick, Todd M; Schultz, Jonathan; Mileski, William; Davey, Mel; Leamon, John H; Johnson, Kim; Milgrew, Mark J; Edwards, Matthew; Hoon, Jeremy; Simons, Jan F; Marran, David; Myers, Jason W; Davidson, John F; Branting, Annika; Nobile, John R; Puc, Bernard P; Light, David; Clark, Travis A; Huber, Martin; Branciforte, Jeffrey T; Stoner, Isaac B; Cawley, Simon E; Lyons, Michael; Fu, Yutao; Homer, Nils; Sedova, Marina; Miao, Xin; Reed, Brian; Sabina, Jeffrey; Feierstein, Erika; Schorn, Michelle; Alanjary, Mohammad; Dimalanta, Eileen; Dressman, Devin; Kasinskas, Rachel; Sokolsky, Tanya; Fidanza, Jacqueline A; Namsaraev, Eugeni; McKernan, Kevin J; Williams, Alan; Roth, G Thomas; Bustillo, James

    2011-07-20

    The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome.

  16. Nonblocking Clos networks of multiple ROADM rings for mega data centers.

    PubMed

    Zhao, Li; Ye, Tong; Hu, Weisheng

    2015-11-02

    Optical networks have been introduced to meet the bandwidth requirements of mega data centers (DCs). Most existing approaches are neither scalable enough to accommodate the massive growth of DCs nor sufficiently contention-free to provide full bisection bandwidth. To solve this problem, we propose two symmetric network structures based on classical Clos theory: the ring-MEMS-ring (RMR) network and the MEMS-ring-MEMS (MRM) network. New strategies are introduced to overcome the additional wavelength constraints that do not exist in the traditional Clos network. Structures that follow these strategies can achieve high scalability and the nonblocking property simultaneously. The one-to-one correspondence of the RMR and MRM structures to a Clos network is verified, and the nonblocking conditions are given along with the routing algorithms. Compared to a typical folded-Clos network, both structures scale more readily to future mega data centers with 51200 racks while significantly reducing the number of long cables. We show that the MRM network is more cost-effective than the RMR network, since the MRM network does not need tunable lasers to achieve nonblocking routing.
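
    For reference, the classical strict-sense nonblocking condition for a three-stage Clos network, on which both structures build before the wavelength constraints are added, is m >= 2n - 1 (Clos, 1953), where n is the number of inputs per ingress switch and m the number of middle-stage switches:

        def strictly_nonblocking(n_inputs_per_ingress, m_middle_switches):
            # Clos (1953): any free input can always reach any free output
            return m_middle_switches >= 2 * n_inputs_per_ingress - 1

        print(strictly_nonblocking(4, 7))   # True: 7 middle switches suffice for n = 4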

  17. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scaling up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device. PMID:25859564

  18. Scalable and responsive event processing in the cloud

    PubMed Central

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-01

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
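
    The queueing logic can be sketched directly: model each query engine as an M/G/1 server, predict the mean response time with the Pollaczek-Khinchine formula, and add cloud nodes until the target is met. The numbers below are illustrative, and this single-class model is a simplification of the paper's multiple-class M/G/1 treatment.

        def mg1_response_time(lam, mean_s, second_moment_s):
            rho = lam * mean_s                      # utilization; must be < 1
            wait = lam * second_moment_s / (2 * (1 - rho))   # Pollaczek-Khinchine
            return mean_s + wait                    # service time plus queueing delay

        def nodes_needed(total_rate, mean_s, second_moment_s, target):
            nodes = 1
            while True:
                lam = total_rate / nodes            # arrivals split across nodes
                if lam * mean_s < 1 and mg1_response_time(lam, mean_s, second_moment_s) <= target:
                    return nodes
                nodes += 1

        # 500 events/s overall, 10 ms mean service, exponential-like variability
        print(nodes_needed(500.0, 0.010, 2 * 0.010**2, target=0.025))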

  19. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks

    DOE PAGES

    Shen, Yiwen; Hattink, Maarten; Samadi, Payman; ...

    2018-04-13

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. Here, we present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Latencies measured for each step of the switching procedure demonstrate a total control-plane latency of 344 microseconds for data-center and high performance computing platforms.

  20. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Yiwen; Hattink, Maarten; Samadi, Payman

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. Here, we present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Latencies measured for each step of the switching procedure demonstrate a total control-plane latency of 344 microseconds for data-center and high performance computing platforms.

  1. Broadband and fabrication-tolerant on-chip scalable mode-division multiplexing based on mode-evolution counter-tapered couplers.

    PubMed

    Wang, Jing; Xuan, Yi; Qi, Minghao; Huang, Haiyang; Li, You; Li, Ming; Chen, Xin; Sheng, Zhen; Wu, Aimin; Li, Wei; Wang, Xi; Zou, Shichang; Gan, Fuwan

    2015-05-01

    A broadband and fabrication-tolerant on-chip scalable mode-division multiplexing (MDM) scheme based on mode-evolution counter-tapered couplers is designed and experimentally demonstrated on a silicon-on-insulator (SOI) platform. Due to the broadband advantage offered by mode evolution, the two-mode MDM link exhibits a very large -1 dB bandwidth of >180 nm, which is considerably larger than most previously reported MDM links, whether based on mode interference or mode evolution. In addition, the performance metrics remain stable for device width deviations from the designed values of -60 nm to +40 nm, and for temperature variations from -25°C to 75°C. This MDM scheme can be readily extended to higher-order mode multiplexing, and a three-mode MDM link is measured with less than -10 dB crosstalk over the 1.46 to 1.64 μm wavelength range.

  2. Large-scale, Exhaustive Lattice-based Structural Auditing of SNOMED CT.

    PubMed

    Zhang, Guo-Qiang; Bodenreider, Olivier

    2010-11-13

    One criterion for the well-formedness of ontologies is that their hierarchical structure forms a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but is not scalable to large ontologies such as SNOMED CT (> 300k concepts). We developed a methodology called Lattice-based Structural Auditing (LaSA), for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the over 544k non-lattice pairs, among over 356 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links.
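
    The structural test itself is simple to state: a concept pair is a non-lattice pair when its set of minimal common ancestors contains more than one concept. A toy sketch on a five-concept is-a hierarchy follows (illustrative only, not the SPARQL-based LaSA pipeline).

        parents = {            # child -> direct parents (is-a links)
            "d": {"b", "c"}, "e": {"b", "c"}, "b": {"a"}, "c": {"a"}, "a": set(),
        }

        def ancestors(x):
            out, stack = set(), [x]
            while stack:
                for p in parents[stack.pop()]:
                    if p not in out:
                        out.add(p)
                        stack.append(p)
            return out | {x}

        def is_non_lattice_pair(x, y):
            common = ancestors(x) & ancestors(y)
            # keep only the common ancestors closest to the pair
            minimal = {c for c in common
                       if not any(o != c and c in ancestors(o) for o in common)}
            return len(minimal) != 1

        print(is_non_lattice_pair("d", "e"))   # True: both b and c are minimal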

  3. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high-performance block-based wavelet image coder designed for very low implementation complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process, and no intermediate buffering is needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, giving more flexibility in the implementation. The codec has very good coding performance even when the block size is 16x16.

  4. Large-scale, Exhaustive Lattice-based Structural Auditing of SNOMED CT

    PubMed Central

    Zhang, Guo-Qiang; Bodenreider, Olivier

    2010-01-01

    One criterion for the well-formedness of ontologies is that their hierarchical structure forms a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but is not scalable to large ontologies such as SNOMED CT (> 300k concepts). We developed a methodology called Lattice-based Structural Auditing (LaSA), for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the over 544k non-lattice pairs, among over 356 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links. PMID:21347113

  5. A Simple, Scalable, Script-based Science Processor

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2004-01-01

    The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems occurring from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining and is publicly available from NASA.

  6. Emerging Technologies for Assembly of Microscale Hydrogels

    PubMed Central

    Kavaz, Doga; Demirel, Melik C.; Demirci, Utkan

    2013-01-01

    Assembly of cell encapsulating building blocks (i.e., microscale hydrogels) has significant applications in areas including regenerative medicine, tissue engineering, and cell-based in vitro assays for pharmaceutical research and drug discovery. Inspired by the repeating functional units observed in native tissues and biological systems (e.g., the lobule in liver, the nephron in kidney), assembly technologies aim to generate complex tissue structures by organizing microscale building blocks. Novel assembly technologies enable fabrication of engineered tissue constructs with controlled properties including tunable microarchitectural and predefined compositional features. Recent advances in micro- and nano-scale technologies have enabled engineering of microgel based three dimensional (3D) constructs. There is a need for high-throughput and scalable methods to assemble microscale units with a complex 3D micro-architecture. Emerging assembly methods include novel technologies based on microfluidics, acoustic and magnetic fields, nanotextured surfaces, and surface tension. In this review, we survey emerging microscale hydrogel assembly methods offering rapid, scalable microgel assembly in 3D, and provide future perspectives and discuss potential applications. PMID:23184717

  7. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    PubMed

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented via the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles reacted with gaseous CH3Cl over various Cu-based catalyst particles, creating macropores within the unreacted Si, accompanied by carbon deposition, to generate porous Si/C composites. Taking advantage of the interconnected porous structure and the conductive carbon-coated layer obtained after simple post-treatment, these composites as anodes exhibit high reversible capacity and long cycle life. It is expected that by integrating the organosilane synthesis process and controlling reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly possible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Smartphone based scalable reverse engineering by digital image correlation

    NASA Astrophysics Data System (ADS)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable open source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed, involving digital image correlation based characterization of surface speeds, followed by normalization with respect to angular speed during rigid-body rotational motion of the specimen. A proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components, along with parts fabricated additively from poly-lactic acid polymer on a standard 3D printer, are used. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface suppress the accuracy of reconstruction; the reasons behind this are delineated in the nature of the correlation function. The theoretically achievable resolution during smartphone-based 3D reconstruction by digital image correlation is derived.
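
    One standard way to obtain the displacements that digital image correlation relies on is phase correlation via the FFT; the sketch below recovers a known shift between two synthetic patches (an illustration of the generic technique, not the authors' code).

        import numpy as np

        def phase_correlation_shift(a, b):
            F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power spectrum
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            rows, cols = corr.shape
            # map wrap-around peak indices to signed shifts
            dy = dy - rows if dy > rows // 2 else dy
            dx = dx - cols if dx > cols // 2 else dx
            return int(dy), int(dx)

        rng = np.random.default_rng(2)
        img = rng.random((64, 64))
        shifted = np.roll(img, (3, -5), axis=(0, 1))
        print(phase_correlation_shift(shifted, img))   # expect (3, -5)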

  9. On gate stack scalability of double-gate negative-capacitance FET with ferroelectric HfO2 for energy efficient sub-0.2 V operation

    NASA Astrophysics Data System (ADS)

    Jang, Kyungmin; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro

    2018-02-01

    We have investigated the gate stack scalability and energy efficiency of a double-gate negative-capacitance FET (DGNCFET) with CMOS-compatible ferroelectric HfO2 (FE:HfO2). Analytic model-based simulation is conducted to investigate the impact of the ferroelectric characteristics of FE:HfO2 and the gate stack thickness on the Ion/Ioff ratio of the DGNCFET. The DGNCFET has a wider gate-stack design window in which a higher Ion/Ioff ratio can be achieved than the classical DG MOSFET. Under a process-induced constraint with sub-10 nm gate length (Lg), the FE:HfO2-based DGNCFET still has a design point for a high Ion/Ioff ratio. With a gate stack thickness optimized for sub-10 nm Lg, the FE:HfO2-based DGNCFET has 2.5x higher energy efficiency than the classical DG MOSFET even at an ultralow operation voltage of sub-0.2 V.

  10. UpSetR: an R package for the visualization of intersecting sets and their properties.

    PubMed

    Conway, Jake R; Lex, Alexander; Gehlenborg, Nils

    2017-09-15

    Venn and Euler diagrams are a popular yet inadequate solution for quantitative visualization of set intersections. A scalable alternative to Venn and Euler diagrams for visualizing intersecting sets and their properties is needed. We developed UpSetR, an open source R package that employs a scalable matrix-based visualization to show intersections of sets, their size, and other properties. UpSetR is available at https://github.com/hms-dbmi/UpSetR/ and released under the MIT License. A Shiny app is available at https://gehlenborglab.shinyapps.io/upsetr/ . nils@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xantheas, Sotiris S.; Werhahn, Jasper C.

    Based on the formulation of the analytical expression of the potential V(r) describing intermolecular interactions in terms of the dimensionless variables r* = r/rm and V* = V/ε, where rm is the separation at the minimum and ε the well depth, we propose more generalized scalable forms for the commonly used Lennard-Jones, Mie, Morse and Buckingham exponential-6 potential energy functions (PEFs). These new generalized forms have an additional parameter and revert to the original ones for some choice of that parameter. In this respect, the original forms can be considered as special cases of the more general forms that are introduced. We also propose a scalable, but nonrevertible to the original one, 4-parameter extended Morse potential.
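
    For concreteness, the dimensionless forms of two of the named potentials, reduced with r* = r/rm and V* = V/ε, are sketched below; these are the standard textbook expressions, not the generalized forms introduced in the record.

        import numpy as np

        def lennard_jones_star(r_star):
            # V* = r*^-12 - 2 r*^-6; minimum V* = -1 at r* = 1
            return r_star**-12 - 2.0 * r_star**-6

        def morse_star(r_star, alpha=6.0):
            # V* = e^{-2 alpha (r*-1)} - 2 e^{-alpha (r*-1)}; minimum V* = -1 at r* = 1
            e = np.exp(-alpha * (r_star - 1.0))
            return e * e - 2.0 * e

        r = np.linspace(0.9, 2.0, 5)
        print(lennard_jones_star(r))
        print(morse_star(r))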

  12. A graphics to scalable vector graphics adaptation framework for progressive remote line rendering on mobile devices

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang

    2007-12-01

    In this paper, we introduce a graphics to Scalable Vector Graphics (SVG) adaptation framework with a vector graphics transmission mechanism that overcomes the shortcomings of real-time representation and interaction for 3D graphics applications running on mobile devices. We develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system comprises a client viewer and a graphics to SVG adaptation server. The client viewer offers users access to the same 3D content from different devices in response to their interactions.

  13. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing

    PubMed Central

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R.; Han, InTaek; Yun, Dong-Jin

    2015-01-01

    A simple, scalable, non-lithographic, technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is regarded as adverse to roll-coating processes for fabricating uniform films, we especially use the effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag reduction effects of the fabricated films through dynamic flow measurements. PMID:26490133

  14. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing.

    PubMed

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R; Han, InTaek; Yun, Dong-Jin

    2015-10-22

    A simple, scalable, non-lithographic, technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is regarded as adverse to roll-coating processes for fabricating uniform films, we especially use the effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag reduction effects of the fabricated films through dynamic flow measurements.

  15. Electrohydrodynamic printing for scalable MoS2 flake coating: application to gas sensing device

    NASA Astrophysics Data System (ADS)

    Lim, Sooman; Cho, Byungjin; Bae, Jaehyun; Kim, Ah Ra; Lee, Kyu Hwan; Kim, Se Hyun; Hahm, Myung Gwan; Nam, Jaewook

    2016-10-01

    Scalable sub-micrometer molybdenum disulfide (MoS2) flake films with highly uniform coverage were created using a systematic approach. An electrohydrodynamic (EHD) printing process realized a remarkably uniform distribution of exfoliated MoS2 flakes on desired substrates. In combination with a fast-evaporating dispersion medium and an optimal choice of operating parameters, EHD printing can produce a film rapidly on a substrate without excessive agglomeration or cluster formation, which can be problems in previously reported liquid-based continuous film methods. The printing of exfoliated MoS2 flakes enabled the fabrication of a gas sensor with high performance and reproducibility for NO2 and NH3.

  16. Scalable graphene aptasensors for drug quantification

    NASA Astrophysics Data System (ADS)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  17. Efficient scalable solid-state neutron detector.

    PubMed

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a (6)Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector unit is 23.8% and that of a detector element comprising a stack of five detector units is 60%. Based on the measured performance of this detector unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m(2), is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  18. Scalable boson sampling with time-bin encoding using a loop-based architecture.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P

    2014-09-19

    We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

  19. Temporally Scalable Visual SLAM using a Reduced Pose Graph

    DTIC Science & Technology

    2012-05-25

    MIT-CSAIL-TR-2012-013, May 25, 2012, Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA (www.csail.mit.edu). We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use ...

  20. Water treatment: A scalable graphene-based membrane

    DOE PAGES

    Vlassiouk, Ivan V.

    2017-08-28

    We discuss how an improved industrial manufacturability has been achieved for a hybrid water-treatment membrane that exhibits high water permeance, prolonged high salt and dye rejection under cross-flow conditions and better resistance to chlorine treatment.

  1. Water treatment: A scalable graphene-based membrane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlassiouk, Ivan V.

    We discuss how an improved industrial manufacturability has been achieved for a hybrid water-treatment membrane that exhibits high water permeance, prolonged high salt and dye rejection under cross-flow conditions and better resistance to chlorine treatment.

  2. HiDi: an efficient reverse engineering schema for large-scale dynamic regulatory network reconstruction using adaptive differentiation.

    PubMed

    Deng, Yue; Zenil, Hector; Tegnér, Jesper; Kiani, Narsis A

    2017-12-15

    The use of ordinary differential equations (ODEs) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited by the difficulty of estimating parameters and by a lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from time-series gene expression and perturbation data, based on an improved scheme for calculating derivatives and a pre-filtration step that reduces the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data, using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability, and we benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes. The Matlab code of the HiDi implementation is available at: www.complexitycalculator.com/HiDiScript.zip. hzenilc@gmail.com or narsis.kiani@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
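
    The general recipe, numerical differentiation followed by a linear ODE fit, condenses to a few lines; HiDi's adaptive differentiation and pre-filtration are more sophisticated than this synthetic-data sketch.

        import numpy as np

        rng = np.random.default_rng(3)
        A_true = np.array([[-1.0, 0.8], [0.0, -0.5]])      # ground-truth interaction matrix

        # simulate dx/dt = A x with explicit Euler steps
        dt, steps = 0.01, 400
        X = np.empty((steps, 2))
        X[0] = [1.0, -0.5]
        for t in range(steps - 1):
            X[t + 1] = X[t] + dt * (A_true @ X[t])

        dXdt = np.gradient(X, dt, axis=0)                  # central finite differences
        A_hat, *_ = np.linalg.lstsq(X, dXdt, rcond=None)   # solve X A^T ~ dX/dt
        print(np.round(A_hat.T, 2))                        # approximately recovers A_true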

  3. Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2

    DOE PAGES

    Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; ...

    2015-12-17

    The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.
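
    The cost reasoning behind redundancy can be sketched with a toy model: run k redundant spot copies and fall back to on-demand only if every copy is interrupted. The prices and the independence assumption below are illustrative, not the paper's model.

        def expected_cost(spot_price, demand_price, hours, p_interrupt, k):
            spot_cost = k * spot_price * hours
            p_all_fail = p_interrupt ** k            # assumes independent interruptions
            return spot_cost + p_all_fail * demand_price * hours

        for k in (1, 2, 3):
            print(k, "copies:", round(expected_cost(0.03, 0.10, 10.0, 0.3, k), 3))
        print("on-demand:", 0.10 * 10.0)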

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
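
    A minimal map-reduce-style sketch of the approach: each worker tabulates a local contingency table, and tables merge by addition, so communication scales with the number of distinct category pairs rather than with the record count.

        from collections import Counter
        from functools import reduce

        def local_table(rows):                 # "map": one partition of (x, y) records
            return Counter((x, y) for x, y in rows)

        partitions = [
            [("a", 0), ("a", 1), ("b", 0)],
            [("a", 0), ("b", 1), ("b", 1)],
        ]
        # "reduce": Counter addition merges the per-worker tables
        global_table = reduce(lambda s, t: s + t, map(local_table, partitions))
        print(global_table)   # joint counts; marginals and chi-square follow from these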

  5. On-line prediction of the glucose concentration of CHO cell cultivations by NIR and Raman spectroscopy: Comparative scalability test with a shake flask model system.

    PubMed

    Kozma, Bence; Hirsch, Edit; Gergely, Szilveszter; Párta, László; Pataki, Hajnalka; Salgó, András

    2017-10-25

    In this study, near-infrared (NIR) and Raman spectroscopy were compared in parallel for predicting the glucose concentration of Chinese hamster ovary cell cultivations. A shake flask model system was used to quickly generate spectra similar to bioreactor cultivations, thereby accelerating the development of a working model prior to actual cultivations. Automated variable selection and several pre-processing methods were tested iteratively during model development using spectra from six shake flask cultivations. The target was to achieve the lowest error of prediction for the glucose concentration in two independent shake flasks. The best model was then used to test the scalability of the two techniques by predicting spectra from a 10 l and a 100 l scale bioreactor cultivation. The NIR spectroscopy based model could follow the trend of the glucose concentration, but it was not sufficiently accurate for bioreactor monitoring. The Raman spectroscopy based model, on the other hand, predicted the glucose concentration at both cultivation scales sufficiently accurately, with an error around 4 mM (0.72 g/l), which is satisfactory for the on-line bioreactor monitoring purposes of the biopharma industry. The shake flask model system was therefore proven suitable for scalable spectroscopic model development. Copyright © 2017 Elsevier B.V. All rights reserved.
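
    Calibration models of this kind are commonly partial least squares regressions from spectra to concentration. A hedged sketch with scikit-learn on synthetic stand-in spectra (the study's own variable selection and pre-processing steps are omitted):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        spectra = rng.random((60, 500))                             # 60 samples x 500 channels
        glucose = spectra[:, 120] * 25.0 + rng.normal(0, 0.3, 60)   # hidden linear signal, in mM

        model = PLSRegression(n_components=5).fit(spectra[:40], glucose[:40])
        pred = model.predict(spectra[40:]).ravel()
        rmse = float(np.sqrt(np.mean((pred - glucose[40:]) ** 2)))
        print(f"RMSEP: {rmse:.2f} mM")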

  6. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  7. A fully electric field driven scalable magnetoelectric switching element

    NASA Astrophysics Data System (ADS)

    Ahmed, R.; Victora, R. H.

    2018-04-01

    A technique for micromagnetic simulation of the magnetoelectric (ME) effect in Cr2O3 based structures has been developed. It has been observed that the microscopic ME susceptibility differs significantly from the experimentally measured values. The deviation between the two susceptibilities becomes more prominent near the Curie temperature, affecting the operation of the device at room temperature. A fully electric field controlled ME switching element has been proposed for use at technologically interesting densities: it employs quantum mechanical exchange at the boundaries instead of the applied magnetic field needed in traditional switching schemes. After establishing temperature dependent physics-based parameters, switching performances have been studied for different temperatures, applied electric fields, and Cr2O3 cross-sections. It has been found that our proposed use of quantum mechanical exchange favors reduced electric field operation and enhanced scalability while retaining reliable thermal stability.

  8. Distributed Cognition and Process Management Enabling Individualized Translational Research: The NIH Undiagnosed Diseases Program Experience

    PubMed Central

    Links, Amanda E.; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A.; Mungall, Christopher J.; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M.; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F.; Gahl, William A.; Sincan, Murat

    2016-01-01

    The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement. PMID:27785453

  9. Metal-free and Scalable Synthesis of Porous Hyper-cross-linked Polymers: Towards Applications in Liquid-Phase Adsorption.

    PubMed

    Schute, Kai; Rose, Marcus

    2015-10-26

    A metal-free route for the synthesis of hyper-cross-linked polymers (HCPs) based on Brønsted acids such as trifluoromethanesulfonic acid as well as H2SO4 is reported. It is an improved method compared to conventional synthesis strategies that use stoichiometric amounts of metal-based Lewis acids such as FeCl3. The resulting high-performance adsorbents exhibit a permanent porosity with high specific surface areas up to 1842 m(2) g(-1). Easy scalability of the HCP synthesis is proven on the multi-gram scale, with all chemo-physical properties preserved. Water-vapor adsorption shows that the resulting materials exhibit an even more pronounced hydrophobicity compared to the conventionally prepared materials. The reduced surface polarity enhances the selectivity in the liquid-phase adsorption of the biogenic platform chemical 5-hydroxymethylfurfural. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. The BioIntelligence Framework: a new computational platform for biomedical knowledge computing.

    PubMed

    Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles; Mousses, Spyro

    2013-01-01

    Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
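
    A minimal sketch of the hypergraph data model the framework advocates: a single hyperedge can relate many entities at once, which binary graph edges or flat relational rows capture less directly. The entity names below are invented for illustration.

        # hyperedge -> the set of entities it relates (illustrative names)
        hyperedges = {
            "observation1": {"patient42", "geneTP53", "variant:c.215C>G", "phenotype:sarcoma"},
            "observation2": {"patient42", "drugX", "response:partial"},
        }

        def edges_containing(node):
            # query: all multi-entity relationships that mention a node
            return [e for e, members in hyperedges.items() if node in members]

        print(edges_containing("patient42"))   # both observations link back to the patient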

  11. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids a double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Several numerical tests, including weak scaling tests, were performed with both the conventional and proposed domain decomposition methods. The convergence performance of the proposed method is comparable to that of the conventional method; in particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  12. k-merSNP discovery: Software for alignment-and reference-free scalable SNP discovery, phylogenetics, and annotation for hundreds of microbial genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    With the flood of whole-genome finished and draft microbial sequences, we need faster, more scalable bioinformatics tools for sequence comparison. An algorithm is described to find single nucleotide polymorphisms (SNPs) in whole genome data. It scales to hundreds of bacterial or viral genomes, and can be used for finished and/or draft genomes available as unassembled contigs or raw, unassembled reads. The method is fast to compute, finding SNPs and building a SNP phylogeny in minutes to hours, depending on the size and diversity of the input sequences. The SNP-based trees that result are consistent with known taxonomy and trees determined in other studies. The approach we describe can handle many gigabases of sequence in a single run. The algorithm is based on k-mer analysis.
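
    A toy illustration of the alignment- and reference-free idea: k-mers present in one genome but absent from the other tile the neighborhood of a SNP. Real tools add canonical k-mers, counting thresholds and tree construction on top of this.

        def kmers(seq, k=5):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        genome1 = "ACGTTACGGATTACA"
        genome2 = "ACGTTACGCATTACA"   # single substitution G -> C

        unique_to_1 = kmers(genome1) - kmers(genome2)
        unique_to_2 = kmers(genome2) - kmers(genome1)
        print(sorted(unique_to_1))
        print(sorted(unique_to_2))    # the differing k-mers tile the SNP position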

  13. Recent advancements towards green optical networks

    NASA Astrophysics Data System (ADS)

    Davidson, Alan; Glesk, Ivan; Buis, Adrianus; Wang, Junjia; Chen, Lawrence

    2014-12-01

    Recent years have seen a rapid growth in demand for ultra-high-speed data transmission, with end users expecting fast, high-bandwidth network access. With this rapid growth in demand, data centres are under pressure to provide ever increasing data rates through their networks and at the same time improve the quality of data handling in terms of reduced latency, increased scalability and improved channel speed for users. However, as data rates increase, present systems built on well-established CMOS technology are becoming increasingly difficult to scale, and consequently data networks are struggling to satisfy current demand. In this paper the interrelated issues of electronic scalability, power consumption, limited copper interconnect bandwidth and the limited speed of CMOS electronics will be explored alongside the tremendous bandwidth potential of optical fibre based photonic networks. Some applications of photonics to help alleviate the speed and latency issues in data networks will be discussed.

  14. A reference architecture for integrated EHR in Colombia.

    PubMed

    de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd

    2011-01-01

    The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards, including terminologies and ontologies. The GCM provides an architectural framework created for the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of systems architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.

  15. A Distributed Amplifier System for Bilayer Lipid Membrane (BLM) Arrays With Noise and Individual Offset Cancellation.

    PubMed

    Crescentini, Marco; Thei, Frederico; Bennati, Marco; Saha, Shimul; de Planque, Maurits R R; Morgan, Hywel; Tartagni, Marco

    2015-06-01

    Bilayer lipid membrane (BLM) arrays are required for high-throughput analysis, for example, drug screening or advanced DNA sequencing. Complex microfluidic devices are being developed, but these are restricted in terms of array size and structure or have integrated electronic sensing with limited noise performance. We present a compact and scalable multichannel electrophysiology platform based on a hybrid approach that combines integrated state-of-the-art microelectronics with low-cost disposable fluidics, providing a platform for high-quality parallel single ion channel recording. Specifically, we have developed a new integrated circuit amplifier based on a novel noise cancellation scheme that eliminates flicker noise derived from the devices under test and the amplifiers. The system is demonstrated through the simultaneous recording of ion channel activity from eight bilayer membranes. The platform is scalable and could be extended to much larger array sizes, limited only by electronic data decimation and communication capabilities.

  16. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    NASA Astrophysics Data System (ADS)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small- to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.
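
    As an illustration of robust Hausdorff matching (a common partial, quantile-based variant; the paper's exact robust formulation may differ), replacing the maximum nearest-neighbor distance with a ranked value suppresses the influence of outlying background points.

        import numpy as np

        def directed_partial_hausdorff(A, B, frac=0.8):
            """Directed partial Hausdorff distance from point set A to B: the
            frac-quantile (rather than the max) of nearest-neighbor distances,
            which is robust to outliers such as cluttered-background points."""
            A, B = np.asarray(A, float), np.asarray(B, float)
            d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
            nn = d.min(axis=1)               # each a's distance to its nearest b
            k = max(int(frac * len(nn)) - 1, 0)
            return np.partition(nn, k)[k]    # k-th ranked value

        def partial_hausdorff(A, B, frac=0.8):
            return max(directed_partial_hausdorff(A, B, frac),
                       directed_partial_hausdorff(B, A, frac))

        A = [[0, 0], [1, 0], [0, 1], [10, 10]]   # last point is an outlier
        B = [[0.1, 0], [1, 0.1], [0, 0.9]]
        print(partial_hausdorff(A, B, frac=0.75))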

  17. A Cloud-based Approach to Medical NLP

    PubMed Central

    Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072

  18. A cloud-based approach to medical NLP.

    PubMed

    Chard, Kyle; Russell, Michael; Lussier, Yves A; Mendonça, Eneida A; Silverstein, Jonathan C

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN.

  19. Could mycobacterial Hsp70-containing fusion protein lead the way to an affordable therapeutic cancer vaccine?

    PubMed

    Brauns, Timothy; Leblanc, Pierre; Gelfand, Jeffrey A; Poznanski, Mark

    2015-03-01

    Cancer vaccine development efforts have recently gained momentum, but most vaccines showing clinical impact in human trials tend to be based on technology approaches that are very costly and difficult to produce at scale. With the projected doubling of the incidence of cancer and its related cost of care in the U.S. over the next two decades, the widespread clinical use of such vaccines will prove difficult to justify. Heat shock protein-based vaccines have shown the potential to elicit clinically meaningful immunologic responses in cancer, but the predominant development approach - heat shock protein-peptide complexes derived from a patient's own tumor - faces similar challenges of cost and scalability. New innovative modalities for deploying heat shock proteins in cancer vaccines may open the door to vaccines that can generate potent cytotoxic responses against multiple tumor targets and can be made in a cost-effective and scalable manner.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
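
    The logarithmic scaling can be reproduced with a toy push-gossip simulation (a sketch under simplifying assumptions, not the paper's algorithms): in each cycle every informed process forwards the failure list to one random peer, so the informed set roughly doubles per cycle and full dissemination takes O(log N) cycles.

        import math, random

        def gossip_cycles(n, seed=0):
            """Cycles of push gossip until all n processes know the failure
            list, starting from a single detecting process."""
            rng = random.Random(seed)
            informed = {0}                # process 0 detected the failure
            cycles = 0
            while len(informed) < n:
                for p in list(informed):
                    informed.add(rng.randrange(n))  # push to one random peer
                cycles += 1
            return cycles

        for n in (64, 256, 1024, 4096):
            print(n, gossip_cycles(n), "cycles; log2(n) =", round(math.log2(n), 1))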

  1. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.

    PubMed

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-05-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.

  2. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology

    PubMed Central

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D.; Zhou, Tao

    2017-01-01

    Implantable electrical probes have led to advances in neuroscience, brain−machine interfaces, and treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at single-neuron level, although, to date, it has been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes to bridge the gap between scalability and flexibility, where 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 mo. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics together with demonstrated long-term stability represent important progress toward the realization of ideal implantable electrical probes allowing for mapping and tracking single-neuron level circuit changes associated with learning, aging, and neurodegenerative diseases. PMID:29109247

  3. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
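
    Amdahl's law itself is easy to evaluate; the parallel fractions below are illustrative, not the paper's measured values, but they show how the serial fraction caps multicore speedup.

        def amdahl_speedup(parallel_fraction, cores):
            """Amdahl's law: S(n) = 1 / ((1 - p) + p / n)."""
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        for p in (0.90, 0.99):
            for n in (12, 48, 1024):
                print(f"p = {p:.2f}, n = {n:4d}: speedup = {amdahl_speedup(p, n):6.2f}")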

  4. Screen printing as a scalable and low-cost approach for rigid and flexible thin-film transistors using separated carbon nanotubes.

    PubMed

    Cao, Xuan; Chen, Haitian; Gu, Xiaofei; Liu, Bilu; Wang, Wenli; Cao, Yu; Wu, Fanqi; Zhou, Chongwu

    2014-12-23

    Semiconducting single-wall carbon nanotubes are very promising materials in printed electronics due to their excellent mechanical and electrical properties, outstanding printability, and great potential for flexible electronics. Nonetheless, developing scalable and low-cost approaches for manufacturing fully printed high-performance single-wall carbon nanotube thin-film transistors remains a major challenge. Here we report that screen printing, which is a simple, scalable, and cost-effective technique, can be used to produce both rigid and flexible thin-film transistors using separated single-wall carbon nanotubes. Our fully printed top-gated nanotube thin-film transistors on rigid and flexible substrates exhibit decent performance, with mobility up to 7.67 cm^2 V^-1 s^-1, on/off ratio of 10^4-10^5, minimal hysteresis, and low operation voltage (<10 V). In addition, outstanding mechanical flexibility of the printed nanotube thin-film transistors (bent to a radius of curvature down to 3 mm) and their capability to drive organic light-emitting diodes have been demonstrated. Given the high performance of the fully screen-printed single-wall carbon nanotube thin-film transistors, we believe screen printing stands as a low-cost, scalable, and reliable approach to manufacture high-performance nanotube thin-film transistors for application in display electronics. Moreover, this technique may be used to fabricate thin-film transistors based on other materials for large-area flexible macroelectronics and low-cost display electronics.

  5. Towards scalable quantum communication and computation: Novel approaches and realizations

    NASA Astrophysics Data System (ADS)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as candidates for naturally error-free quantum computation. We propose a scheme to unambiguously detect the anyonic statistics in spin lattice realizations using ultra-cold atoms in an optical lattice. We show how to reliably read and write topologically protected quantum memory using an atomic or photonic qubit.

  6. Scalable Production of Graphene-Based Wearable E-Textiles.

    PubMed

    Karim, Nazmul; Afroj, Shaila; Tan, Sirui; He, Pei; Fernando, Anura; Carr, Chris; Novoselov, Kostya S

    2017-12-26

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial-scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make a stable reduced graphene oxide (rGO) dispersion, which can then be applied to textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of the cotton fabric and also its flexibility, due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with an activity-monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as a sensor and as a flexible heating element, powered by the energy stored in graphene textile supercapacitors.

  7. Large-Scale, Exhaustive Lattice-Based Structural Auditing of SNOMED CT

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Qiang

    One criterion for the well-formedness of ontologies is that their hierarchical structure form a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but it is not scalable to large ontologies such as SNOMED CT. We developed a methodology called Lattice-based Structural Auditing (LaSA) for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0 to 1.66% across the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the 518K non-lattice pairs, among over 34 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA computational pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links. This work is based on collaboration with Olivier Bodenreider from the National Library of Medicine, Bethesda, USA.
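
    The non-lattice test can be demonstrated on a small subsumption hierarchy (a plain-Python simplification of the SPARQL-based LaSA pipeline): a concept pair is a non-lattice pair when its closest common ancestors are not unique.

        from itertools import combinations

        def ancestors(dag, node):
            """Reflexive-transitive ancestors in a child -> parents DAG."""
            seen, stack = set(), [node]
            while stack:
                cur = stack.pop()
                if cur not in seen:
                    seen.add(cur)
                    stack.extend(dag.get(cur, []))
            return seen

        def non_lattice_pairs(dag):
            nodes = set(dag) | {p for ps in dag.values() for p in ps}
            bad = []
            for a, b in combinations(sorted(nodes), 2):
                common = ancestors(dag, a) & ancestors(dag, b)
                # keep only the closest common ancestors: drop any element
                # that is a strict ancestor of another common element
                closest = {c for c in common
                           if not any(c in ancestors(dag, d) - {d} for d in common)}
                if len(closest) > 1:
                    bad.append((a, b, closest))
            return bad

        # Child -> parents; x and y share two incomparable parents p and q.
        dag = {"p": ["root"], "q": ["root"], "x": ["p", "q"], "y": ["p", "q"]}
        for a, b, lub in non_lattice_pairs(dag):
            print(a, b, "closest common ancestors:", sorted(lub))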

  8. FPGA implementation of a configurable neuromorphic CPG-based locomotion controller.

    PubMed

    Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar

    2013-09-01

    Neuromorphic engineering is a discipline devoted to the design and development of computational hardware that mimics the characteristics and capabilities of neuro-biological systems. In recent years, neuromorphic hardware systems have been implemented using a hybrid approach incorporating digital hardware so as to provide flexibility and scalability at the cost of power efficiency and some biological realism. This paper proposes an FPGA-based neuromorphic-like embedded system on a chip to generate locomotion patterns of periodic rhythmic movements inspired by Central Pattern Generators (CPGs). The proposed implementation follows a top-down approach where modularity and hierarchy are two desirable features. The locomotion controller is based on CPG models to produce rhythmic locomotion patterns or gaits for legged robots such as quadrupeds and hexapods. The architecture is configurable and scalable for robots with either different morphologies or different degrees of freedom (DOFs). Experiments performed on a real robot are presented and discussed. The obtained results demonstrate that the CPG-based controller provides the necessary flexibility to generate different rhythmic patterns at run-time suitable for adaptable locomotion. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Knowledge base and sensor bus messaging service architecture for critical tsunami warning and decision-support

    NASA Astrophysics Data System (ADS)

    Sabeur, Z. A.; Wächter, J.; Middleton, S. E.; Zlatev, Z.; Häner, R.; Hammitzsch, M.; Loewe, P.

    2012-04-01

    The intelligent management of large volumes of environmental monitoring data for early tsunami warning requires the deployment of a robust and scalable service-oriented infrastructure that is supported by an agile knowledge base for critical decision support. In the TRIDEC project (TRIDEC 2010-2013), a sensor observation service bus of the TRIDEC system is being developed for the advancement of complex tsunami event processing and management. Further, a dedicated TRIDEC system knowledge base is being implemented to enable on-demand access to semantically rich OGC SWE compliant hydrodynamic observations and operationally oriented meta-information for multiple subscribers. TRIDEC decision support requires a scalable and agile real-time processing architecture which enables fast response to evolving subscriber requirements as a tsunami crisis develops. This is also achieved with the support of intelligent processing services which specialise in multi-level fusion methods with relevance feedback and deep learning. The TRIDEC knowledge base development work, coupled with that of the generic sensor bus platform, will be presented to demonstrate advanced decision support with situation awareness in the context of tsunami early warning and crisis management.

  10. Altering impulsive decision making with an acceptance-based procedure.

    PubMed

    Morrison, Kate L; Madden, Gregory J; Odum, Amy L; Friedel, Jonathan E; Twohig, Michael P

    2014-09-01

    Delay discounting is one facet of impulsive decision making and involves subjectively devaluing a delayed outcome. Steeply discounting delayed rewards is correlated with substance abuse and other problematic behaviors. To the extent that steep delay discounting underlies these clinical disorders, it would be advantageous to find psychosocial avenues for reducing delay discounting. Acceptance-based interventions may prove useful as they may help to decrease the distress that arises while waiting for a delayed outcome. The current study was conducted to determine if a 60-90 minute acceptance-based training would change delay discounting rates among 30 undergraduate university students in comparison to a waitlist control. Measures given at pre- and posttraining included a hypothetical monetary delay discounting task, the Acceptance and Action Questionnaire-II (AAQ-II), and the Distress Tolerance Scale. Those assigned to the treatment group decreased their discounting of delayed money, but not distress intolerance or psychological inflexibility when compared to the waitlist control group. After the waiting period, the control group received the intervention. Combining all participants' pre- to posttreatment data, the acceptance-based treatment significantly decreased discounting of monetary rewards and increased distress tolerance. The difference in AAQ-II approached significance. Acceptance-based treatments may be a worthwhile option for decreasing delay discounting rates and, consequently, affecting the choices that underlie addiction and other problematic behaviors. Copyright © 2014. Published by Elsevier Ltd.
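
    Delay discounting in this literature is conventionally quantified with Mazur's hyperbolic model, V = A / (1 + kD), where larger k means steeper discounting. The fitting sketch below uses that standard model on invented indifference points, not the study's actual procedure.

        import numpy as np

        def hyperbolic_value(amount, delay, k):
            """Mazur's hyperbolic model: subjective value of a delayed reward."""
            return amount / (1.0 + k * delay)

        def fit_k(delays, indifference_points, amount=100.0):
            """Least-squares fit of k over a log-spaced grid (derivative-free)."""
            ks = np.logspace(-4, 1, 2000)
            sse = [np.sum((hyperbolic_value(amount, np.asarray(delays), k)
                           - np.asarray(indifference_points)) ** 2) for k in ks]
            return ks[int(np.argmin(sse))]

        delays = [7, 30, 90, 180, 365]   # days until the $100 reward
        indiff = [90, 70, 45, 30, 18]    # present value judged equivalent
        print(f"fitted k = {fit_k(delays, indiff):.4f} (larger = more impulsive)")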

  11. Nanoscale thermocapillarity enabled purification for horizontally aligned arrays of single walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Jin, Sung Hun; Dunham, Simon; Xie, Xu; Rogers, John A.

    2015-09-01

    Among the remarkable variety of semiconducting nanomaterials discovered over the past two decades, single-walled carbon nanotubes (SWNTs) remain uniquely well suited for applications in high-performance electronics, sensors and other technologies. The most advanced opportunities demand the ability to form perfectly aligned, horizontal arrays of purely semiconducting, chemically pristine carbon nanotubes. Here, we present strategies that offer this capability. Nanoscale thermocapillary flows in thin-film organic coatings, followed by reactive ion etching, serve as a highly efficient means for selectively removing metallic carbon nanotubes from electronically heterogeneous aligned arrays grown on quartz substrates. The low temperatures and unusual physics associated with this process enable robust, scalable operation, with clear potential for practical use. To achieve selective Joule heating of only the metallic nanotubes, two representative platforms are proposed and confirmed. One relies on selective Joule heating in thin-film transistors with a partial gate structure. The other is based on a simple, scalable, large-area scheme of microwave irradiation using micro-strip dipole antennas made of low-work-function metals. In this study, based on the purified semiconducting SWNTs, we demonstrated field effect transistors with mobility (>1,000 cm^2/Vs) and on/off switching ratio (~10,000), with current outputs in the milliamp range. Furthermore, as one demonstration of large-area scalability and simplicity, applying the microwave-based purification to large arrays consisting of ~20,000 SWNTs completely removed all of the m-SWNTs (~7,000), yielding a purity of s-SWNTs of at least 99.9925%, and likely significantly higher.

  12. A Fully Automated Diabetes Prevention Program, Alive-PD: Program Design and Randomized Controlled Trial Protocol

    PubMed Central

    Azar, Kristen MJ; Block, Torin J; Romanelli, Robert J; Carpenter, Heather; Hopkins, Donald; Palaniappan, Latha; Block, Clifford H

    2015-01-01

    Background In the United States, 86 million adults have pre-diabetes. Evidence-based interventions that are both cost effective and widely scalable are needed to prevent diabetes. Objective Our goal was to develop a fully automated diabetes prevention program and determine its effectiveness in a randomized controlled trial. Methods Subjects with verified pre-diabetes were recruited to participate in a trial of the effectiveness of Alive-PD, a newly developed, 1-year, fully automated behavior change program delivered by email and Web. The program involves weekly tailored goal-setting, team-based and individual challenges, gamification, and other opportunities for interaction. An accompanying mobile phone app supports goal-setting and activity planning. For the trial, participants were randomized by computer algorithm to start the program immediately or after a 6-month delay. The primary outcome measures are change in HbA1c and fasting glucose from baseline to 6 months. The secondary outcome measures are change in HbA1c, glucose, lipids, body mass index (BMI), weight, waist circumference, and blood pressure at 3, 6, 9, and 12 months. Randomization and delivery of the intervention are independent of clinic staff, who are blinded to treatment assignment. Outcomes will be evaluated for the intention-to-treat and per-protocol populations. Results A total of 340 subjects with pre-diabetes were randomized to the intervention (n=164) or delayed-entry control group (n=176). Baseline characteristics were as follows: mean age 55 (SD 8.9); mean BMI 31.1 (SD 4.3); male 68.5%; mean fasting glucose 109.9 (SD 8.4) mg/dL; and mean HbA1c 5.6 (SD 0.3)%. Data collection and analysis are in progress. We hypothesize that participants in the intervention group will achieve statistically significant reductions in fasting glucose and HbA1c as compared to the control group at 6 months post baseline. Conclusions The randomized trial will provide rigorous evidence regarding the efficacy of this Web- and Internet-based program in reducing or preventing progression of glycemic markers and indirectly in preventing progression to diabetes. Trial Registration ClinicalTrials.gov NCT01479062; http://clinicaltrials.gov/show/NCT01479062 (Archived by WebCite at http://www.webcitation.org/6U8ODy1vo). PMID:25608692

  13. Scalability of Robotic Controllers: Speech-Based Robotic Controller Evaluation

    DTIC Science & Technology

    2009-06-01

  14. Wearable Self-Charging Power Textile Based on Flexible Yarn Supercapacitors and Fabric Nanogenerators.

    PubMed

    Pu, Xiong; Li, Linxuan; Liu, Mengmeng; Jiang, Chunyan; Du, Chunhua; Zhao, Zhenfu; Hu, Weiguo; Wang, Zhong Lin

    2016-01-06

    A novel and scalable self-charging power textile is realized by combining yarn supercapacitors and fabric triboelectric nanogenerators as energy-harvesting devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Robustness of delayed multistable systems with application to droop-controlled inverter-based microgrids

    NASA Astrophysics Data System (ADS)

    Efimov, Denis; Schiffer, Johannes; Ortega, Romeo

    2016-05-01

    Motivated by the problem of phase-locking in droop-controlled inverter-based microgrids with delays, the recently developed theory of input-to-state stability (ISS) for multistable systems is extended to the case of multistable systems with delayed dynamics. Sufficient conditions for ISS of delayed systems are presented using Lyapunov-Razumikhin functions. It is shown that ISS multistable systems are robust with respect to delays in the feedback. The derived theory is applied to two examples. First, the ISS property is established for the model of a nonlinear pendulum, and delay-dependent robustness conditions are derived. Second, it is shown that, under certain assumptions, the problem of phase-locking analysis in droop-controlled inverter-based microgrids with delays can be reduced to the stability investigation of the nonlinear pendulum. For this case, corresponding delay-dependent conditions for asymptotic phase-locking are given.
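
    For illustration, with notation assumed here rather than taken from the paper, the delayed pendulum dynamics referred to have the general form

        \ddot{\theta}(t) + \kappa\,\dot{\theta}(t) + \omega^{2}\sin\theta(t) = u(t - \tau)

    where the delay \tau enters through the feedback channel u. Because the \sin\theta term gives the unforced system multiple equilibria, classical single-equilibrium ISS arguments do not apply directly, which is why Lyapunov-Razumikhin conditions for multistable systems are needed to obtain delay-dependent robustness bounds.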

  16. Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.

    PubMed

    Botzer, Lior; Karniel, Amir

    2013-07-01

    It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Platform for efficient switching between multiple devices in the intensive care unit.

    PubMed

    De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them into their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are its integration into the workflow of the medical staff and the provision of tailored and dynamic information at the point of care. The platform is designed on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device, with the visualization of the data adapted to the type of device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated in terms of scalability, performance and user experience. Performance tests show that the response time of the system scales linearly with the amount of data, and measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution enabling the efficient switching between multiple devices; due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform were evaluated and shown to be within an acceptable range.

  18. Energy-absorption capability and scalability of square cross section composite tube specimens

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

    Static crushing tests were conducted on graphite/epoxy and Kevlar/epoxy square cross section tubes to study the influence of specimen geometry on the energy-absorption capability and scalability of composite materials. The tube inside width-to-wall thickness (W/t) ratio was determined to significantly affect the energy-absorption capability of composite materials: as the W/t ratio decreases, the energy-absorption capability increases nonlinearly. The energy-absorption capability of the Kevlar/epoxy tubes was found to be geometrically scalable, but that of the graphite/epoxy tubes was not.

  19. Disparity: scalable anomaly detection for clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, N.; Bradshaw, R.; Lusk, E.

    2008-01-01

    In this paper, we describe disparity, a tool that does parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
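
    The core pattern, reducing per-node metrics and then flagging statistical outliers, can be sketched as follows (a serial toy version; disparity performs these reductions in parallel across client nodes).

        import numpy as np

        def find_anomalous_nodes(metrics, z_threshold=3.0):
            """metrics: dict of node -> vector of observations (load, temps, ...).
            Reduce each node to a mean, then flag nodes far from the population."""
            nodes = sorted(metrics)
            means = np.array([np.mean(metrics[n]) for n in nodes])
            mu, sigma = means.mean(), means.std()
            if sigma == 0:
                return []
            return [n for n, m in zip(nodes, means)
                    if abs(m - mu) / sigma > z_threshold]

        rng = np.random.default_rng(1)
        metrics = {f"node{i:03d}": rng.normal(1.0, 0.05, 100) for i in range(64)}
        metrics["node007"] = metrics["node007"] + 5.0   # a misbehaving node
        print(find_anomalous_nodes(metrics))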

  20. Flexible Multi-agent Algorithm for Distributed Decision Making

    DTIC Science & Technology

    2015-01-01

    The approach draws on consensus-based auction and market-based decentralized task assignment methods for cooperative UAV missions (AIAA Guidance, Navigation, and Control). The algorithm is scalable and adaptable to a variety of specific mission tasks, and could easily be adapted for use on land- or sea-based systems.

  1. DSS range delay calibrations: Current performance level

    NASA Technical Reports Server (NTRS)

    Spradlin, G. L.

    1976-01-01

    A means for evaluating Deep Space Station (DSS) range delay calibration performance was developed. Inconsistencies frequently noted in these data are resolved. Development of the DSS range delay data base is described. The data base is presented with comments regarding apparent discontinuities. Data regarding the exciter frequency dependence of the delay values are presented. The improvement observed in the consistency of current DSS range delay calibration data over the performance previously observed is noted.

  2. Base drive circuit

    DOEpatents

    Lange, A.C.

    1995-04-04

    An improved base drive circuit having a level shifter for providing bistable input signals to a pair of non-linear delays. The non-linear delays provide gate control to a corresponding pair of field effect transistors through a corresponding pair of buffer components. The non-linear delays provide delayed turn-on for each of the field effect transistors while an associated pair of transistors shunt the non-linear delays during turn-off of the associated field effect transistor. 2 figures.

  3. On two new trends in evolvable hardware: employment of HDL-based structuring, and design of multi-functional circuits

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.; Guo, X.

    2002-01-01

    This paper comments on some directions of growth for evolvable hardware, proposes research directions that address the scalability problem and gives examples of results in novel areas approached by EHW.

  4. Modeling of delays in PKPD: classical approaches and a tutorial for delay differential equations.

    PubMed

    Koch, Gilbert; Krzyzanski, Wojciech; Pérez-Ruixo, Juan Jose; Schropp, Johannes

    2014-08-01

    In pharmacokinetics/pharmacodynamics (PKPD) the measured response is often delayed relative to drug administration, individuals in a population have a certain lifespan until they maturate, or the change of a biomarker does not immediately affect the primary endpoint. The classical approach in PKPD is to apply transit compartment models (TCMs) based on ordinary differential equations to handle such delays. However, an alternative approach to dealing with delays is delay differential equations (DDEs). DDEs feature additional flexibility and properties, realize more complex dynamics, and can be used in a complementary fashion together with TCMs. We introduce several delay-based PKPD models and investigate mathematical properties of general DDE-based models, which serve as subunits in order to build larger PKPD models. Finally, we review current PKPD software with respect to the implementation of DDEs for PKPD analysis.
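
    The contrast can be made concrete with a minimal sketch (illustrative parameters): a transit compartment chain approximates a delay T using n ODE stages with rate ktr = n/T, whereas a lifespan-type DDE encodes the delay exactly but requires a history function and a DDE solver.

        import numpy as np
        from scipy.integrate import solve_ivp

        def tcm_rhs(t, y, ktr, n):
            """Transit compartment chain: a signal traverses n stages,
            giving a mean transit delay of n / ktr."""
            dy = np.empty(n)
            inp = 1.0 if t < 1.0 else 0.0    # unit input pulse on [0, 1)
            dy[0] = ktr * (inp - y[0])
            for i in range(1, n):
                dy[i] = ktr * (y[i - 1] - y[i])
            return dy

        n, T = 5, 5.0
        ktr = n / T                          # match a mean delay of T
        sol = solve_ivp(tcm_rhs, (0.0, 20.0), np.zeros(n), args=(ktr, n),
                        max_step=0.1)
        print("last compartment peaks near t =", sol.t[np.argmax(sol.y[-1])])

        # The DDE alternative encodes the same delay exactly, e.g. the
        # lifespan model dR/dt = k_in(t) - k_in(t - T), which requires a
        # history function for t < 0 and a DDE solver rather than extra states.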

  5. (DCT-FY08) Target Detection Using Multiple Modality Airborne and Ground Based Sensors

    DTIC Science & Technology

    2013-03-01

  6. Characterizing Elementary Teachers' Enactment of High-Leverage Practices through Engineering Design-Based Science Instruction

    ERIC Educational Resources Information Center

    Capobianco, Brenda M.; DeLisi, Jacqueline; Radloff, Jeffrey

    2018-01-01

    In an effort to document teachers' enactments of new reform in science teaching, valid and scalable measures of science teaching using engineering design are needed. This study describes the development and testing of an approach for documenting and characterizing elementary science teachers' multiday enactments of engineering design-based science…

  7. Scalable Integrated Multi-Mission Support System Simulator Release 3.0

    NASA Technical Reports Server (NTRS)

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.

  8. UpSetR: an R package for the visualization of intersecting sets and their properties

    PubMed Central

    Conway, Jake R.; Lex, Alexander; Gehlenborg, Nils

    2017-01-01

    Motivation: Venn and Euler diagrams are a popular yet inadequate solution for quantitative visualization of set intersections. A scalable alternative to Venn and Euler diagrams for visualizing intersecting sets and their properties is needed. Results: We developed UpSetR, an open source R package that employs a scalable matrix-based visualization to show intersections of sets, their size, and other properties. Availability and implementation: UpSetR is available at https://github.com/hms-dbmi/UpSetR/ and released under the MIT License. A Shiny app is available at https://gehlenborglab.shinyapps.io/upsetr/. Contact: nils@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28645171

  9. Towards scalable Byzantine fault-tolerant replication

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  10. Object-oriented integrated approach for the design of scalable ECG systems.

    PubMed

    Boskovic, Dusanka; Besic, Ingmar; Avdagic, Zikrija

    2009-01-01

    The paper presents the implementation of Object-Oriented (OO) integrated approaches to the design of scalable Electro-Cardio-Graph (ECG) Systems. The purpose of this methodology is to preserve real-world structure and relations with the aim to minimize the information loss during the process of modeling, especially for Real-Time (RT) systems. We report on a case study of the design that uses the integration of OO and RT methods and the Unified Modeling Language (UML) standard notation. OO methods identify objects in the real-world domain and use them as fundamental building blocks for the software system. The gained experience based on the strongly defined semantics of the object model is discussed and related problems are analyzed.

  11. Rocket Engine Nozzle Side Load Transient Analysis Methodology: A Practical Approach

    NASA Technical Reports Server (NTRS)

    Shi, John J.

    2005-01-01

    During the development stage, in order to design and size the rocket engine components and to reduce risk, the local dynamic environments as well as the dynamic interface loads must be defined. There are two kinds of dynamic environment: shock transients, and steady-state random and sinusoidal vibration environments. Usually, the steady-state random and sinusoidal vibration environments are scalable, but the shock environments are not. In other words, based on similarities, only random vibration environments can be defined for a new engine. The methodology covered in this paper provides a way to predict the shock environments and the dynamic loads for new engine systems and new engine components in the early stage of new engine development or engine nozzle modifications.

  12. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    PubMed

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb-confined (e.g. donor-based spin qubits) or electrostatically confined (e.g. quantum-dot-based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  13. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
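
    Client code can then remain essentially unchanged. The sketch below assumes the h5pyd package (an HSDS client that mirrors the h5py API); the endpoint, domain, and dataset names are hypothetical placeholders.

        # Assumes h5pyd (pip install h5pyd), whose File/dataset objects are
        # API-compatible with h5py; names below are hypothetical.
        import h5pyd

        with h5pyd.File("/shared/nasa/nex/merra2.h5", "r",
                        endpoint="http://hsds.example.org:5101") as f:
            temps = f["/t2m"]                  # dataset handle; data stays remote
            slab = temps[0, 100:200, 100:200]  # only this hyperslab is transferred
            print(slab.mean())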

  14. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    Architects face problems arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. No longer are relational databases the de facto standard for building data repositories; highly distributed, scalable "NoSQL" databases [11] have emerged. This is especially challenging at the data storage layer, where the multitude of competing NoSQL database technologies creates a complex and rapidly evolving design space.

  15. Scalable and Manageable Storage Systems

    DTIC Science & Technology

    2000-12-01

    The dissertation presents techniques that enable storage systems to be more cost-effectively scalable. Furthermore, it proposes an approach to ensure automatic load balancing, and addresses three key technical challenges to making storage systems more cost-effectively scalable and manageable.

  16. Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels

    NASA Astrophysics Data System (ADS)

    Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching

    2006-12-01

    This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channel have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with less and smoother quality degradation.

  17. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586

  18. Analyzing Double Delays at Newark Liberty International Airport

    NASA Technical Reports Server (NTRS)

    Evans, Antony D.; Lee, Paul

    2016-01-01

    When weather or congestion impacts the National Airspace System, multiple different Traffic Management Initiatives can be implemented, sometimes with unintended consequences. One particular inefficiency that is commonly identified is the interaction between Ground Delay Programs (GDPs) and time-based metering of internal departures, or TMA scheduling. Internal departures under TMA scheduling can take large GDP delays followed by large TMA scheduling delays, because they cannot be easily fitted into the overhead stream. In this paper we examine the causes of these double delays through an analysis of arrival operations at Newark Liberty International Airport (EWR) from June to August 2010. Depending on how the double delay is defined, between 0.3 percent and 0.8 percent of arrivals at EWR experienced double delays in this period. However, this represents between 21 percent and 62 percent of all internal departures in GDP and TMA scheduling. A deep dive into the data reveals two causes of high internal departure scheduling delays: upstream flights making up time between their estimated departure clearance times (EDCTs) and entry into time-based metering, which undermines the sequencing and spacing underlying the flight EDCTs; and high demand on TMA, when TMA airborne metering delays are high. Data mining methods, currently including logistic regression, support vector machines and K-nearest neighbors, are used to predict the occurrence of double delays and high internal departure scheduling delays with accuracies up to 0.68. So far, key indicators of double delay and high internal departure scheduling delay are TMA virtual runway queue size, and the degree to which estimated runway demand based on TMA estimated times of arrival has changed relative to the estimated runway demand based on EDCTs. However, more analysis is needed to confirm this.
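
    The classification setup can be sketched with scikit-learn on synthetic data; the two features mirror the indicators named above, while the real analysis would use features and labels derived from the EWR operations data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        queue_size = rng.gamma(2.0, 2.0, n)       # TMA virtual runway queue size
        demand_shift = rng.normal(0.0, 1.0, n)    # TMA-ETA vs EDCT demand change
        X = np.column_stack([queue_size, demand_shift])
        # Synthetic labels: double delays grow likelier with both indicators.
        logit = 0.5 * queue_size + 0.8 * demand_shift - 4.0
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print("held-out accuracy:", round(clf.score(X_te, y_te), 3))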

  19. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology.

    PubMed

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D; Zhou, Tao; Lieber, Charles M

    2017-11-21

    Implantable electrical probes have led to advances in neuroscience, brain-machine interfaces, and the treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at the single-neuron level, although, to date, they have been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes that bridges the gap between scalability and flexibility, in which 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 months. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics, together with the demonstrated long-term stability, represents important progress toward the realization of ideal implantable electrical probes, allowing for mapping and tracking of single-neuron-level circuit changes associated with learning, aging, and neurodegenerative diseases. Copyright © 2017 the Author(s). Published by PNAS.

  20. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations, an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by disk I/O throughput. An unacceptable side effect of disk I/O saturation is degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using Oracle Streams technology. To avoid such bottlenecks we prototyped and tested a novel approach for database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot-job submission on the Grid: instead of the actual query, an ATLAS utility library first sends the database server a pilot query.
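
    A minimal sketch of the pilot-query idea, under the assumption that a cheap probe can report server headroom before the expensive conditions query is issued; the function names, threshold, and backoff policy are hypothetical, not the actual ATLAS utility library.

      import random
      import time

      LOAD_THRESHOLD = 0.8  # hypothetical saturation level
      MAX_BACKOFF_S = 64

      def pilot_query():
          """Cheap probe standing in for the pilot query; here it simply
          returns a fake server load in [0, 1]."""
          return random.random()

      def run_with_peak_avoidance(real_query):
          """Issue the expensive query only when the pilot reports headroom;
          otherwise back off exponentially and retry."""
          backoff = 1
          while True:
              if pilot_query() < LOAD_THRESHOLD:
                  return real_query()
              time.sleep(backoff)
              backoff = min(backoff * 2, MAX_BACKOFF_S)

      print(run_with_peak_avoidance(lambda: "conditions payload"))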

  1. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analyzing software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions, these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application: coarse and fast but still adequate methods are used at the largest scales, while more precise but also more expensive methods are reserved for smaller scales and for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  2. Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kalyan S.

    Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphical processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speed up of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.
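
    The essence of cloning without full physical duplication can be illustrated with copy-on-write state sharing: a clone stores only the cells it has overwritten and falls through to its parent for everything else. This is a toy sketch of that concept, not the CloneX runtime.

      class CloneState:
          """Copy-on-write view of a parent simulation state: reads fall
          through to the parent, writes stay local to the clone."""
          def __init__(self, parent=None, base=None):
              self.parent = parent
              self.base = base if base is not None else {}
              self.delta = {}  # only the cells this clone has overwritten

          def read(self, key):
              if key in self.delta:
                  return self.delta[key]
              if self.parent is not None:
                  return self.parent.read(key)
              return self.base[key]

          def write(self, key, value):
              self.delta[key] = value  # the parent is never touched

          def clone(self):
              return CloneState(parent=self)

      root = CloneState(base={"cell0": 1.0, "cell1": 2.0})
      what_if = root.clone()                 # logical copy, no duplication
      what_if.write("cell0", 9.9)            # diverges only where it differs
      print(root.read("cell0"), what_if.read("cell0"), what_if.read("cell1"))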

  3. Scalable interrogation: Eliciting human pheromone responses to deception in a security interview setting.

    PubMed

    Stedmon, Alex W; Eachus, Peter; Baillie, Les; Tallis, Huw; Donkor, Richard; Edlin-White, Robert; Bracewell, Robert

    2015-03-01

    Individuals trying to conceal knowledge from interrogators are likely to experience raised levels of stress that can manifest across biological, physiological, psychological, and behavioural factors, providing an opportunity for detection. Using established research paradigms, an innovative scalable interrogation was designed in which participants were given a 'token' that represented information they had to conceal from interviewers. A control group did not receive a token and therefore did not have to deceive the investigators. The aim of this investigation was to examine differences between deceivers and truth-tellers across the four factors by collecting data for cortisol levels, sweat samples, heart rate, respiration, skin temperature, subjective stress ratings, and video and audio recordings. The results provided an integrated understanding of responses to interrogation by those actively concealing information and those acting innocently. Of particular importance, the results also suggest, for the first time in an interrogation setting, that stressed individuals may secrete a volatile steroid-based marker that could be used for stand-off detection. The findings are discussed in relation to developing a scalable interrogation protocol for future research in this area. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
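
    A stripped-down illustration of the calibration loop: random search over two tunable inputs of a stand-in simulator, scored by RMSE against measured hourly usage. The simulator, parameters, and bounds are invented for the example; Autotune's actual ensembles and machine-learning pipeline are far richer.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(params, hours=24):
          """Stand-in for a building energy simulation: maps two tunable
          inputs to an hourly electrical profile."""
          infiltration, setpoint = params
          t = np.arange(hours)
          return infiltration * np.sin(2 * np.pi * t / 24) + setpoint / 10.0

      # 'Measured' data: the simulator at hidden parameters, plus sensor noise.
      measured = simulate((0.7, 21.0)) + rng.normal(0, 0.02, 24)

      def rmse(a, b):
          return float(np.sqrt(np.mean((a - b) ** 2)))

      # Random search over the parameter box: the simplest member of the
      # family of ensemble search strategies.
      best = None
      for _ in range(5000):
          cand = (rng.uniform(0.1, 1.5), rng.uniform(15.0, 25.0))
          err = rmse(simulate(cand), measured)
          if best is None or err < best[0]:
              best = (err, cand)
      print("best RMSE %.4f at params %s" % best)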

  5. Revisiting Parallel Cyclic Reduction and Parallel Prefix-Based Algorithms for Block Tridiagonal System of Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K; Perumalla, Kalyan S; Hirshman, Steven Paul

    2013-01-01

    Simulations that require solutions of block tridiagonal systems of equations rely on fast parallel solvers for runtime efficiency. Leading parallel solvers that are highly effective for general systems of equations, dense or sparse, are limited in scalability when applied to block tridiagonal systems. This paper presents scalability results as well as detailed analyses of two parallel solvers that exploit the special structure of block tridiagonal matrices to deliver superior performance, often by orders of magnitude. A rigorous analysis of their relative parallel runtimes is shown to reveal the existence of a critical block size that separates the parameter space spanned by the number of block rows, the block size and the processor count, into distinct regions that favor one or the other of the two solvers. Dependence of this critical block size on the above parameters as well as on machine-specific constants is established. These formal insights are supported by empirical results on up to 2,048 cores of a Cray XT4 system. To the best of our knowledge, this is the highest reported scalability for parallel block tridiagonal solvers to date.
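
    The parallel solvers analyzed in the paper are not reproduced here, but the block structure they exploit is easy to see in the serial block-Thomas recurrence, sketched below for a small system.

      import numpy as np

      def block_thomas(A, B, C, d):
          """Solve a block tridiagonal system with sub-, main-, and
          super-diagonal blocks A[i], B[i], C[i] (A[0] and C[-1] unused).
          Serial reference algorithm, not the parallel solvers of the paper."""
          n = len(B)
          Bp, dp = [B[0].copy()], [d[0].copy()]
          for i in range(1, n):
              m = A[i] @ np.linalg.inv(Bp[i - 1])
              Bp.append(B[i] - m @ C[i - 1])
              dp.append(d[i] - m @ dp[i - 1])
          x = [None] * n
          x[-1] = np.linalg.solve(Bp[-1], dp[-1])
          for i in range(n - 2, -1, -1):
              x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
          return np.array(x)

      # Tiny test: 4 block rows of 2x2 blocks, diagonally dominant.
      rng = np.random.default_rng(0)
      n, k = 4, 2
      A = [rng.random((k, k)) for _ in range(n)]
      C = [rng.random((k, k)) for _ in range(n)]
      B = [rng.random((k, k)) + 4 * np.eye(k) for _ in range(n)]
      x_true = rng.random((n, k))
      d = [B[i] @ x_true[i]
           + (A[i] @ x_true[i - 1] if i > 0 else 0)
           + (C[i] @ x_true[i + 1] if i < n - 1 else 0) for i in range(n)]
      print(np.allclose(block_thomas(A, B, C, d), x_true))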

  6. Effect of asynchrony on numerical simulations of fluid flow phenomena

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya; Mahoney, Bryan; Donzis, Diego

    2015-11-01

    Designing scalable CFD codes on massively parallel computers is a challenge, mainly due to the large number of communications between processing elements (PEs) and their synchronization, which leads to idling of PEs. Indeed, communication will likely be the bottleneck for the scalability of codes on exascale machines. Our recent work on asynchronous computing for PDEs based on finite differences has shown that it is possible to relax synchronization between PEs at a mathematical level. Computations then proceed regardless of the status of communication, reducing the idle time of PEs and improving scalability. However, the accuracy of the schemes is greatly affected. We have proposed asynchrony-tolerant (AT) schemes to address this issue. In this work, we study the effect of asynchrony on the solution of fluid flow problems using standard and AT schemes. We show that asynchrony creates additional scales with low energy content. The specific wavenumbers affected can be traced to two distinct effects: the randomness in the arrival of messages and the corresponding switching between schemes. Understanding these errors allows us to effectively control them, demonstrating the method's feasibility for solving turbulent flows at realistic conditions on future computing systems.
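
    A toy version of the experiment: an explicit 1D heat-equation segment whose left halo value arrives from a neighbor with a random message delay. The delay probability and lag are invented parameters; the sketch only illustrates how stale values perturb the solution, not the AT schemes themselves.

      import numpy as np

      rng = np.random.default_rng(2)

      def run(n_steps=200, p_delay=0.0, max_lag=2, alpha=0.4):
          """Explicit 1D heat equation on one 'PE'. The left halo value comes
          from a neighbor whose boundary evolves in time; with probability
          p_delay a stale (delayed) message must be reused instead."""
          u = np.zeros(32)
          received = [0.0]  # history of neighbor boundary values
          for step in range(n_steps):
              received.append(np.sin(0.1 * step))  # neighbor's current value
              lag = int(rng.integers(1, max_lag + 1)) if rng.random() < p_delay else 0
              ghost = received[max(0, len(received) - 1 - lag)]
              left = np.concatenate(([ghost], u[:-1]))
              right = np.concatenate((u[1:], [0.0]))  # fixed right boundary
              u = u + alpha * (left - 2 * u + right)
          return u

      synchronous = run(p_delay=0.0)
      asynchronous = run(p_delay=0.5)
      err = float(np.abs(synchronous - asynchronous).max())
      print("max pointwise error introduced by asynchrony:", err)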

  7. Scalable Nernst thermoelectric power using a coiled galfenol wire

    NASA Astrophysics Data System (ADS)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
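
    An order-of-magnitude estimate of the coiled-wire geometry: the azimuthal Nernst field E = N·B·(dT/dr) integrates along the wire, so V ≈ |N|·B·(dT/dr)·L. The coefficient is the value quoted above; the field, gradient, and coil geometry are assumed numbers for illustration.

      import math

      # V ~ |N| * B * (dT/dr) * L for a wire of total length L in the
      # coiled geometry with an azimuthal electric field.
      N = 2.6e-6       # |Nernst coefficient| quoted above, V/(K*T)
      B = 1.0          # assumed axial magnetic field, T
      dT_dr = 1.0e3    # assumed radial temperature gradient, K/m
      turns = 100      # assumed coil geometry
      coil_radius = 5e-3                       # m
      L = turns * 2 * math.pi * coil_radius    # total wire length, m

      V = N * B * dT_dr * L
      print(f"wire length {L:.2f} m -> Nernst voltage ~ {V * 1e3:.1f} mV")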

  8. A scalable nonlinear fluid-structure interaction solver based on a Schwarz preconditioner with isogeometric unstructured coarse spaces in 3D

    NASA Astrophysics Data System (ADS)

    Kong, Fande; Cai, Xiao-Chuan

    2017-07-01

    Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircraft and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry-preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
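
    The monolithic FSI solver and its multilevel Schwarz preconditioner cannot be reproduced in a few lines, but the outer/inner structure of an inexact Newton-Krylov iteration can be sketched with SciPy's Jacobian-free newton_krylov on a toy nonlinear problem; the residual below is an invented stand-in for the coupled system.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          """Toy nonlinear 'discretized PDE' residual (u'' - u^3 + 1 = 0 with
          zero Dirichlet ends), standing in for the coupled FSI system."""
          r = np.zeros_like(u)
          r[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:] - u[1:-1] ** 3 + 1.0
          r[0] = u[0]
          r[-1] = u[-1]
          return r

      # Outer inexact Newton iteration, inner Krylov solves with Jacobian-free
      # matrix-vector products, all handled by SciPy.
      sol = newton_krylov(residual, np.zeros(50), f_tol=1e-8)
      print("final residual norm:", float(np.linalg.norm(residual(sol))))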

  9. Direct Laser Writing-Based Programmable Transfer Printing via Bioinspired Shape Memory Reversible Adhesive.

    PubMed

    Huang, Yin; Zheng, Ning; Cheng, Zhiqiang; Chen, Ying; Lu, Bingwei; Xie, Tao; Feng, Xue

    2016-12-28

    Flexible and stretchable electronics offer a wide range of unprecedented opportunities beyond conventional rigid electronics. Despite their vast promise, a significant bottleneck lies in the availability of a transfer printing technique to manufacture such devices in a highly controllable and scalable manner. Current technologies usually rely on manual stick-and-place and do not offer feasible mechanisms for precise, quantitative process control, especially when scalability is taken into account. Here, we demonstrate a spatioselective and programmable transfer strategy to print electronic microelements onto a soft substrate. The method takes advantage of automated direct laser writing to trigger localized heating of a micropatterned shape-memory-polymer adhesive stamp, allowing highly controlled and spatioselective switching of the interfacial adhesion. This, coupled with proper tuning of the stamp properties, enables printing with perfect yield. The wide-range adhesion switchability further allows printing of hybrid electronic elements, which is otherwise challenging given the complex interfacial manipulation involved. Our temperature-controlled transfer printing technique shows its critical importance and obvious advantages in the potential scale-up of device manufacturing. Our strategy opens a route to manufacturing flexible electronics with exceptional versatility and potential scalability.

  10. Electrospun core-shell fibers for robust silicon nanoparticle-based lithium ion battery anodes.

    PubMed

    Hwang, Tae Hoon; Lee, Yong Min; Kong, Byung-Seon; Seo, Jin-Seok; Choi, Jang Wook

    2012-02-08

    Because of its unprecedented theoretical capacity near 4000 mAh/g, approximately 10-fold larger than that of current commercial graphite anodes, silicon has been the most promising anode for lithium ion batteries, particularly targeting large-scale energy storage applications including electric vehicles and utility grids. Nevertheless, Si suffers from a short cycle life as well as limitations in scalable electrode fabrication. Herein, we develop an electrospinning process to produce core-shell fiber electrodes using a dual nozzle in a scalable manner. In the core-shell fibers, commercially available nanoparticles in the core are wrapped by the carbon shell. The unique core-shell structure resolves various issues of Si anode operation, such as pulverization, vulnerable contacts between Si and carbon conductors, and an unstable solid-electrolyte interphase, thereby exhibiting outstanding cell performance: a gravimetric capacity as high as 1384 mAh/g, a 5 min discharging rate capability while retaining 721 mAh/g, and a cycle life of 300 cycles with almost no capacity loss. The electrospun core-shell one-dimensional fibers suggest a new design principle for robust and scalable lithium battery electrodes suffering from volume expansion. © 2011 American Chemical Society

  11. CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, John; Edwards, Jim; Evans, Kate J

    2012-01-01

    The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.

  12. TreeVector: scalable, interactive, phylogenetic trees for the web.

    PubMed

    Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian

    2010-01-28

    Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and that can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.

  13. Scalable Production of Biosensors Based on Aptamer-Functionalized Graphene for Detection of the HIV drug Tenofovir

    NASA Astrophysics Data System (ADS)

    Vishnubhotla, Ramya; Ping, Jinglei; Johnson, A. T. Charlie; Charlie Johnson Group Team

    Graphene field effect transistors (GFETs) are of great interest for biosensing applications and have shown promising results for small-molecule detection due to high sensitivity and electron mobility. We describe the fabrication of a scalable array of GFETs through traditional photolithography, using lab-grown graphene produced via chemical vapor deposition (CVD), for drug detection with an all-electronic read-out. Sensor fabrication produced 52 devices per 2 x 2 cm area, with a yield of over 90%. Our biosensors use a commercially obtained aptamer, verified via AFM to bind to graphene, to bind the molecules of the drug Tenofovir, a medication currently used for HIV treatment, and have detected concentrations as low as 1 ng/mL, 10^3 times lower than standard medical methods. We noted a concentration-dependent shift in the Dirac voltage for Tenofovir, and testing control drugs showed that the aptamer was highly selective in binding to Tenofovir itself. These results are promising for potential clinical testing with urine samples, as our method is scalable and non-invasive. This work is funded by NIH through the Center for AIDS Research at the University of Pennsylvania.

  14. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
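
    The O(n log n) cost rests on the fact that circulant matrices are diagonalized by the FFT, so a regularized solve never forms the matrix. A minimal sketch, checked against a dense solve:

      import numpy as np

      def solve_circulant_ridge(c, b, lam):
          """Solve (C + lam*I) x = b where C is the circulant matrix whose
          first column is c. The FFT diagonalizes C, making the solve
          O(n log n) -- the property the approximate CV method builds on."""
          eig = np.fft.fft(c)  # eigenvalues of C
          return np.fft.ifft(np.fft.fft(b) / (eig + lam)).real

      # Check against a dense solve on a small example.
      rng = np.random.default_rng(0)
      n = 8
      c = rng.random(n)
      C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
      b = rng.random(n)
      x_fft = solve_circulant_ridge(c, b, lam=0.5)
      x_dense = np.linalg.solve(C + 0.5 * np.eye(n), b)
      print(np.allclose(x_fft, x_dense))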

  15. A scalable nonlinear fluid–structure interaction solver based on a Schwarz preconditioner with isogeometric unstructured coarse spaces in 3D

    DOE PAGES

    Kong, Fande; Cai, Xiao-Chuan

    2017-03-24

    Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircraft and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry-preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.

  16. Adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy for three-dimensional visualization

    NASA Astrophysics Data System (ADS)

    Hayat, Khizar; Puech, William; Gesquière, Gilles

    2010-04-01

    We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate the disparate data needed for a typical 3-D visualization into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the multiresolution needed for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is also removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in transform-domain YCrCb components from the JPEG2000 coding stream just after the DWT stage. To maintain synchronization, the embedding takes into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and make the method reversible. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
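
    A minimal flavor of spread-spectrum embedding in a DWT subband, assuming the PyWavelets package is available; the JPEG2000 pipeline, subband synchronization, and depth-map payload of A4S are not reproduced, and all parameters are illustrative.

      import numpy as np
      import pywt  # PyWavelets, assumed available

      rng = np.random.default_rng(0)

      host = rng.random((64, 64))   # host 'image'
      bit = 1                       # message bit to hide
      chip = rng.choice([-1.0, 1.0], size=(32, 32))  # spreading sequence
      alpha = 0.1                   # embedding strength

      # Embed in the diagonal detail subband after one DWT level.
      cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
      cD_marked = cD + alpha * (1.0 if bit else -1.0) * chip
      marked = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

      # Blind correlation detection with the same chip sequence.
      _, (_, _, cD_rx) = pywt.dwt2(marked, "haar")
      corr = float(np.sum(cD_rx * chip))
      print("decoded bit:", 1 if corr > 0 else 0)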

  17. High gain solar photovoltaics

    NASA Astrophysics Data System (ADS)

    MacDonald, B.; Finot, M.; Heiken, B.; Trowbridge, T.; Ackler, H.; Leonard, L.; Johnson, E.; Chang, B.; Keating, T.

    2009-08-01

    Skyline Solar Inc. has developed a novel silicon-based PV system to simultaneously reduce energy cost and improve the scalability of solar energy. The system achieves high gain through a combination of high capacity factor and optical concentration. The design approach drives innovation not only into the details of the system hardware, but also into manufacturing and deployment-related costs and bottlenecks. The result of this philosophy is a modular PV system whose manufacturing strategy relies only on existing silicon solar cell, module, reflector, and aluminum parts supply chains, as well as turnkey PV module production lines and metal fabrication industries that already operate at enormous scale. Furthermore, with a high-gain system design, the generating capacity of all components is multiplied, leading to a rapidly scalable system. The product design and commercialization strategy work synergistically to promise a dramatically lower levelized cost of energy (LCOE) with substantially lower risk relative to materials-intensive innovations. In this paper, we present the key design aspects of Skyline's system, including aspects of the optical, mechanical, and thermal components, revealing the ease of scalability, low cost, and high performance. Additionally, we present performance and reliability results on modules and the system, using ASTM and UL/IEC methodologies.

  18. Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kalyan S.

    2018-01-31

    Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphical processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speed up of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.

  19. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and detail how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.

  20. Wafer-scalable high-performance CVD graphene devices and analog circuits

    NASA Astrophysics Data System (ADS)

    Tao, Li; Lee, Jongho; Li, Huifeng; Piner, Richard; Ruoff, Rodney; Akinwande, Deji

    2013-03-01

    Graphene field effect transistors (GFETs) will serve as an essential component for functional modules like amplifiers and frequency doublers in analog circuits. The performance of these modules is directly related to the mobility of charge carriers in GFETs, which in this study has been greatly improved. Low-field electrostatic measurements show field mobility values up to 12,000 cm2/Vs at ambient conditions with our newly developed scalable CVD graphene. For both hole and electron transport, fabricated GFETs offer substantial amplification for small and large signals at quasi-static frequencies, limited only by external capacitances at high frequencies. GFETs biased at the peak transconductance point featured high small-signal gain with eventual output power compression similar to conventional transistor amplifiers. GFETs operating around the Dirac voltage afforded positive conversion gain for the first time, to our knowledge, in experimental graphene frequency doublers. This work suggests a realistic prospect for high performance linear and non-linear analog circuits based on the unique electron-hole symmetry and fast transport now accessible in wafer-scalable CVD graphene. *Support from the NSF CAREER award (ECCS-1150034) and the W. M. Keck Foundation is appreciated.

  1. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half, APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single-variable solutions to sophisticated and cutting-edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any cost-conscious manufacturer. There are studies suggesting integration costs as high as 60% of the total APC solution cost. Such cost-prohibitive figures clearly diminish the return on APC investments and have limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions, and most in-house developed solutions, lack the important attributes of scalability, flexibility, and adaptability, and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned or delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture in satisfying these requirements. Blue Control Technologies has developed an advanced service-oriented-architecture run-to-run control system which addresses these requirements.

  2. Laser irradiation in water for the novel, scalable synthesis of black TiOx photocatalyst for environmental remediation

    PubMed Central

    Zimbone, Massimo; Boutinguiza, Mohamed; Privitera, Vittorio; Grimaldi, Maria Grazia

    2017-01-01

    Since 1970, TiO2 photocatalysis has been considered a possible alternative for sustainable water treatment, owing to its material stability, abundance, nontoxicity, and high activity. Unfortunately, its wide band gap (≈3.2 eV), in the UV portion of the spectrum, makes it inefficient under solar illumination. Recently, so-called “black TiO2” has been proposed as a candidate to overcome this issue. However, typical synthesis routes require high hydrogen pressure and long annealing treatments. In this work, we present an industrially scalable synthesis of a TiO2-based photocatalyst by laser irradiation. The resulting black TiOx shows high activity and absorbs visible radiation, overcoming the main concerns related to the use of TiO2 under solar irradiation. We employed a commercial high-repetition-rate green laser to synthesize a black TiOx layer, and we demonstrate the scalability of the present methodology. The photocatalyst is composed of a nanostructured titanate film (TiOx) synthesized on a titanium foil, directly back-contacted to a layer of Pt nanoparticles (PtNps) deposited on the rear side of the same foil. The result is a monolithic photochemical diode with a stacked, layered structure (TiOx/Ti/PtNps). The resulting high photo-efficiency is ascribed both to the scavenging of electrons by Pt nanoparticles and to the presence of trap surface states for holes in an amorphous hydrogenated TiOx layer. PMID:28243557

  3. Novel scalable 3D cell based model for in vitro neurotoxicity testing: Combining human differentiated neurospheres with gene expression and functional endpoints.

    PubMed

    Terrasso, Ana Paula; Pinto, Catarina; Serra, Margarida; Filipe, Augusto; Almeida, Susana; Ferreira, Ana Lúcia; Pedroso, Pedro; Brito, Catarina; Alves, Paula Marques

    2015-07-10

    There is an urgent need for new in vitro strategies to identify neurotoxic agents with speed, reliability, and respect for animal welfare. To attain higher relevance, cell models should include distinct brain cell types and represent the brain microenvironment. The main goal of this study was to develop and validate a human 3D neural model containing both neurons and glial cells, applicable to toxicity testing in high-throughput platforms. To achieve this, a scalable bioprocess for neural differentiation of human NTera2/cl.D1 cells in stirred culture systems was developed. Endpoints based on neuronal- and astrocyte-specific gene expression and functionality in 3D were implemented in multi-well format and used for toxicity assessment. The prototypical neurotoxicant acrylamide affected primarily neurons, impairing synaptic function; our results suggest that gene expression of the presynaptic marker synaptophysin can be used as a sensitive endpoint. Chloramphenicol, described as a neurotoxicant, affected both cell types, with cytoskeleton markers' expression significantly reduced, particularly in astrocytes. In conclusion, a scalable and reproducible process for production of differentiated neurospheres enriched in mature neurons and functional astrocytes was obtained. This 3D approach allowed efficient production of large numbers of human differentiated neurospheres, which, in combination with gene expression and functional endpoints, constitute a powerful cell model to evaluate human neuronal and astrocytic toxicity. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Cloud Computing in Support of Synchronized Disaster Response Operations

    DTIC Science & Technology

    2010-09-01

    scalable, Web application based on cloud computing technologies to facilitate communication between a broad range of public and private entities without...requiring them to compromise security or competitive advantage. The proposed design applies the unique benefits of cloud computing architectures such as

  5. A Collaborative Web-Based Architecture For Sharing ToxCast Data

    EPA Science Inventory

    Collaborative Drug Discovery (CDD) has created a scalable platform that combines traditional drug discovery informatics with Web2.0 features. Traditional drug discovery capabilities include substructure, similarity searching and export to excel or sdf formats. Web2.0 features inc...

  6. A comprehensive review of prehospital and in-hospital delay times in acute stroke care.

    PubMed

    Evenson, K R; Foraker, R E; Morris, D L; Rosamond, W D

    2009-06-01

    The purpose of this study was to systematically review and summarize prehospital and in-hospital stroke evaluation and treatment delay times. We identified 123 unique peer-reviewed studies, published from 1981 to 2007, of prehospital and in-hospital delay time for evaluation and treatment of patients with stroke, transient ischemic attack, or stroke-like symptoms. Based on studies of 65 different population groups, weighted Poisson regression indicated a 6.0% annual decline (P<0.001) in prehospital delay, defined from symptom onset to emergency department arrival. For in-hospital delay, the weighted Poisson regression models indicated no meaningful change in delay time from emergency department arrival to emergency department evaluation (3.1% annual change, P=0.49, based on 12 population groups). There was a 10.2% annual decline in delay time from emergency department arrival to neurology evaluation or notification (P=0.23, based on 16 population groups) and a 10.7% annual decline in delay time from emergency department arrival to initiation of computed tomography (P=0.11, based on 23 population groups). Only one study reported times from arrival to computed tomography scan interpretation, two studies reported times from arrival to drug administration, and no studies reported times from arrival to transfer to an in-patient setting, precluding generalizations. Prehospital delay continues to contribute the largest proportion of delay time. The next decade provides opportunities to establish more effective community-based interventions worldwide. It will be crucial to have effective stroke surveillance systems in place to better understand and improve both prehospital and in-hospital delays for acute stroke care.
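
    The trend analysis can be sketched as a Poisson regression of delay on calendar year, here on synthetic data generated to decline about 6% per year as the review reports; statsmodels is an assumed dependency and the data are invented.

      import numpy as np
      import statsmodels.api as sm  # assumed dependency

      rng = np.random.default_rng(0)

      # Invented stand-in data: prehospital delay (hours) per study-year,
      # declining about 6% per year.
      years = np.arange(1981, 2008)
      delays = rng.poisson(5.0 * 0.94 ** (years - 1981))

      X = sm.add_constant(years - 1981)
      fit = sm.GLM(delays, X, family=sm.families.Poisson()).fit()
      annual_change = np.exp(fit.params[1]) - 1.0
      print(f"estimated annual change in delay: {annual_change:+.1%}")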

  7. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics.

    PubMed

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Jaung, Jae Yun; Kim, Yong-Hoon; Park, Sung Kyu

    2015-09-28

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials, using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises to accelerate industrial adoption of soft materials for next-generation electronics.

  8. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics

    NASA Astrophysics Data System (ADS)

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Yun Jaung, Jae; Kim, Yong-Hoon; Kyu Park, Sung

    2015-09-01

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials, using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises to accelerate industrial adoption of soft materials for next-generation electronics.

  9. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    NASA Astrophysics Data System (ADS)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  10. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    Simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of estimation results with crustal deformation data collected at widely distributed observation points, compared to estimating slips alone. Such an estimation can be formulated as a non-linear inverse problem for the viscosity and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.

  11. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey, supported by concurrent and follow-on proof-of-concept prototyping, to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open, and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
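
    A minimal sketch of the discovery layer, assuming the python-zeroconf package: a listener collects test sets as they announce themselves over multicast DNS. The service type shown is a placeholder; a real test set would advertise its RESTful command and data endpoints under its own type.

      import time
      from zeroconf import ServiceBrowser, Zeroconf  # assumed dependency

      class TestSetListener:
          """Collects instruments/test sets announcing themselves via mDNS."""
          def add_service(self, zc, type_, name):
              info = zc.get_service_info(type_, name)
              if info:
                  print(f"found {name} at {info.parsed_addresses()}:{info.port}")

          def remove_service(self, zc, type_, name):
              print(f"lost {name}")

          def update_service(self, zc, type_, name):
              pass

      zc = Zeroconf()
      # '_http._tcp.local.' is a placeholder; a real test set would advertise
      # its RESTful command/data endpoints under its own service type.
      browser = ServiceBrowser(zc, "_http._tcp.local.", TestSetListener())
      time.sleep(5)  # let announcements arrive
      zc.close()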

  12. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
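
    For reference, a bare-bones Hamiltonian Monte Carlo transition is sketched below on a Gaussian target. In the surrogate approach described above, the cheap surrogate gradient would drive the leapfrog integration while the exact density enters only the accept/reject test; here, for brevity, one exact target plays both roles.

      import numpy as np

      rng = np.random.default_rng(0)

      def hmc_step(x, grad_log_prob, log_prob, eps=0.1, n_leapfrog=20):
          """One HMC transition. In the surrogate scheme, grad_log_prob would
          be the surrogate gradient driving the leapfrog loop."""
          p = rng.normal(size=x.shape)
          x_new, p_new = x.copy(), p.copy()
          p_new += 0.5 * eps * grad_log_prob(x_new)      # half momentum step
          for _ in range(n_leapfrog - 1):
              x_new += eps * p_new
              p_new += eps * grad_log_prob(x_new)
          x_new += eps * p_new
          p_new += 0.5 * eps * grad_log_prob(x_new)      # final half step
          # Metropolis correction keeps the exact target invariant.
          log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new
                        - log_prob(x) + 0.5 * p @ p)
          return x_new if np.log(rng.random()) < log_accept else x

      log_prob = lambda x: -0.5 * x @ x   # standard normal target
      grad = lambda x: -x

      x = np.zeros(2)
      samples = []
      for _ in range(2000):
          x = hmc_step(x, grad, log_prob)
          samples.append(x)
      print("sample variances (should be ~1):", np.var(np.array(samples), axis=0))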

  13. miTRATA: a web-based tool for microRNA Truncation and Tailing Analysis.

    PubMed

    Patel, Parth; Ramachandruni, S Deepthi; Kakrana, Atul; Nakano, Mayumi; Meyers, Blake C

    2016-02-01

    We describe miTRATA, the first web-based tool for microRNA Truncation and Tailing Analysis: the analysis of 3' modifications of microRNAs, including the loss or gain of nucleotides relative to the canonical sequence. miTRATA is implemented in Python (version 3) and employs parallel processing modules to enhance its scalability when analyzing multiple small RNA (sRNA) sequencing datasets. It utilizes miRBase, currently version 21, as a source of known microRNAs for analysis. miTRATA notifies users via email to download as well as visualize the results online. miTRATA's strengths lie in (i) its biologist-focused web interface, (ii) improved scalability via parallel processing, and (iii) its uniqueness as a webtool to perform microRNA truncation and tailing analysis. miTRATA is developed in Python and PHP. It is available as a web-based application from https://wasabi.dbi.udel.edu/~apps/ta/. Contact: meyers@dbi.udel.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
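
    The core comparison behind truncation and tailing analysis can be sketched in a few lines: align a read to the canonical miRNA from its 5' end, count the missing 3' nucleotides as truncation, and treat the unmatched 3' remainder as the tail. This toy version ignores the mismatch handling and miRBase lookup that miTRATA performs; the reads below are invented.

      def truncation_tailing(read, canonical):
          """Classify a small-RNA read against the canonical miRNA: count the
          3' nucleotides missing (truncation) and treat the unmatched 3'
          remainder as a non-templated tail."""
          match = 0
          while (match < len(read) and match < len(canonical)
                 and read[match] == canonical[match]):
              match += 1
          truncation = len(canonical) - match
          tail = read[match:]
          return truncation, tail

      canonical = "TGAGGTAGTAGGTTGTATAGTT"  # let-7a, for illustration
      for read in (canonical, canonical[:-2], canonical[:-2] + "AA", canonical + "A"):
          trunc, tail = truncation_tailing(read, canonical)
          print(f"...{read[-8:]:>8}  truncation={trunc}  tail='{tail}'")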

  14. Automated characterization and assembly of individual nanowires for device fabrication.

    PubMed

    Yu, Kaiyan; Yi, Jingang; Shan, Jerry W

    2018-05-15

    The automated sorting and positioning of nanowires and nanotubes is essential to enabling the scalable manufacturing of nanodevices for a variety of applications. However, two fundamental challenges remain: (i) automated placement of individual nanostructures at precise locations, and (ii) the characterization and sorting of highly variable nanomaterials to construct well-controlled nanodevices. Here, we propose and demonstrate an integrated, electric-field-based method for the simultaneous automated characterization, manipulation, and assembly of nanowires (ACMAN) with selectable electrical conductivities into nanodevices. We combine contactless, solution-based electro-orientation spectroscopy with electrophoresis-based motion-control, planning, and manipulation strategies to simultaneously characterize and manipulate multiple individual nanowires. These nanowires can be selected according to their electrical characteristics and precisely positioned at different locations in a low-conductivity liquid to form functional nanodevices with desired electrical properties. We validate the ACMAN design by assembling field-effect transistors (FETs) with silicon nanowires of selected electrical conductivities. The design scheme provides a key enabling technology for the scalable, automated sorting and assembly of nanowires and nanotubes to build functional nanodevices.

  15. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
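
    The expensive kernel inside most NNROP solvers is singular value thresholding, the proximal operator of the nuclear norm, which requires a full SVD per iteration; the sketch below shows that operator on a noisy low-rank matrix (sizes and noise level are illustrative).

      import numpy as np

      def svt(Y, tau):
          """Singular value thresholding: the proximal operator of the nuclear
          norm, the per-iteration bottleneck (a full SVD) that active-subspace
          methods avoid."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          return (U * np.maximum(s - tau, 0.0)) @ Vt

      rng = np.random.default_rng(0)
      L = rng.random((50, 3)) @ rng.random((3, 40))   # rank-3 ground truth
      Y = L + 0.01 * rng.normal(size=L.shape)         # observed noisy matrix
      X = svt(Y, tau=0.2)
      print("rank after thresholding (typically 3):",
            np.linalg.matrix_rank(X, tol=1e-6))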

  16. Conceptual Architecture for Obtaining Cyber Situational Awareness

    DTIC Science & Technology

    2014-06-01

    [Abstract not recoverable; the indexed text consists of reference-list fragments:] Understanding command and control. Washington, D.C.: CCRP Publication Series, 2006. 255 p. ISBN 1-893723-17-8. [10] SKYBOX SECURITY. Developer's Guide. Skybox View. Manual. Version 11. 2010. [11] SCALABLE Network. EXata communications simulation platform. Available: <http://www.scalable

  17. Scalable Power-Component Models for Concept Testing

    DTIC Science & Technology

    2011-08-17

    Scalable Power-Component Models for Concept Testing, Mazzola, et al. 2011 NDIA Ground Vehicle Systems Engineering and Technology Symposium (GVSETS). UNCLASSIFIED: Dist A. Approved for public release.

  18. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is NASA's next-generation scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  19. Scalable Options for Extended Skill Building Following Didactic Training in Cognitive-Behavioral Therapy for Anxious Youth: A Pilot Randomized Trial.

    PubMed

    Chu, Brian C; Carpenter, Aubrey L; Wyszynski, Christopher M; Conklin, Phoebe H; Comer, Jonathan S

    2017-01-01

    A sizable gap exists between the availability of evidence-based psychological treatments and the number of community therapists capable of delivering such treatments. Limited time, resources, and access to experts prompt the need for easily disseminable, lower-cost options for therapist training and continued support beyond initial training. A pilot randomized trial tested scalable extended support models for therapists following initial training. Thirty-five postdegree professionals (43%) or graduate trainees (57%) from diverse disciplines viewed an initial web-based training in cognitive-behavioral therapy (CBT) for youth anxiety and then were randomly assigned to 10 weeks of expert streaming (ES; viewing weekly online supervision sessions of an expert providing consultation), peer consultation (PC; non-expert-led group discussions of CBT), or fact sheet self-study (FS; weekly review of instructional fact sheets). In their initial expectations, trainees rated PC as more appropriate and useful for meeting its goals than either ES or FS. At post-training, all support programs were rated as equally satisfactory and useful for therapists' work, and comparable in increasing self-reported use of CBT strategies (b = .19, p = .02). In contrast, negative linear trends were found on a knowledge quiz (b = -1.23, p = .01) and self-reported beliefs about knowledge (b = -1.50, p < .001) and skill (b = -1.15, p < .001). Attrition and poor attendance presented a moderate concern for PC, and ES was rated as having the lowest implementation potential. Preliminary findings encourage further development of low-cost, scalable options for continued support of evidence-based training.

  20. Population-scale three-dimensional reconstruction and quantitative profiling of microglia arbors

    PubMed Central

    Rey-Villamizar, Nicolas; Merouane, Amine; Lu, Yanbin; Mukherjee, Amit; Trett, Kristen; Chong, Peter; Harris, Carolyn; Shain, William; Roysam, Badrinath

    2015-01-01

    Motivation: The arbor morphologies of brain microglia are important indicators of cell activation. This article fills the need for accurate, robust, adaptive and scalable methods for reconstructing 3-D microglial arbors and quantitatively mapping microglia activation states over extended brain tissue regions. Results: Thick rat brain sections (100–300 µm) were multiplex immunolabeled for IBA1 and Hoechst, and imaged by step-and-image confocal microscopy with automated 3-D image mosaicing, producing seamless images of extended brain regions (e.g. 5903 × 9874 × 229 voxels). An over-complete dictionary-based model was learned for the image-specific local structure of microglial processes. The microglial arbors were reconstructed seamlessly using an automated and scalable algorithm that exploits microglia-specific constraints. This method detected 80.1 and 92.8% more centered arbor points, and 53.5 and 55.5% fewer spurious points than existing vesselness and LoG-based methods, respectively, and the traces were 13.1 and 15.5% more accurate based on the DIADEM metric. The arbor morphologies were quantified using Scorcioni’s L-measure. Coifman’s harmonic co-clustering revealed four morphologically distinct classes that concord with known microglia activation patterns. This enabled us to map spatial distributions of microglial activation and cell abundances. Availability and implementation: Experimental protocols, sample datasets, scalable open-source multi-threaded software implementation (C++, MATLAB) in the electronic supplement, and website (www.farsight-toolkit.org). http://www.farsight-toolkit.org/wiki/Population-scale_Three-dimensional_Reconstruction_and_Quantitative_Profiling_of_Microglia_Arbors Contact: broysam@central.uh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25701570

  1. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of inertial neural networks and memristors. Impulses both with and without delays are considered simultaneously when modeling the inertial neural networks, which is of great practical significance. Some sufficient conditions are derived under the framework of the Lyapunov stability method, together with an extended Halanay differential inequality and a new delayed impulsive differential inequality that depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods.
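
    For concreteness, a generic inertial delayed neural network with memristive (state-dependent) weights takes a form like the following; this is a standard template from the literature, not necessarily the paper's exact system:

      \frac{d^2 x_i(t)}{dt^2} = -a_i \frac{d x_i(t)}{dt} - b_i x_i(t)
        + \sum_{j=1}^{n} c_{ij}\bigl(x_i(t)\bigr)\, f_j\bigl(x_j(t)\bigr)
        + \sum_{j=1}^{n} d_{ij}\bigl(x_i(t)\bigr)\, f_j\bigl(x_j(t-\tau_{ij}(t))\bigr) + I_i,

    where the second-order term gives the "inertial" character, the connection weights c_{ij}(\cdot) and d_{ij}(\cdot) switch with the state as memristances do, \tau_{ij}(t) are the time-varying delays, and impulsive jumps x_i(t_k^+) = x_i(t_k^-) + J_k\bigl(x_i(t_k^-)\bigr) are imposed at the impulse times t_k.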

  2. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays.

    PubMed

    Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    Finite time synchronization, which means that synchronization can be achieved within a settling time, is desirable in some practical applications. However, most published results on finite time synchronization either do not include delays or include only discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper investigates the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete and distributed delays (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of the theoretical results.

  3. Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations

    NASA Astrophysics Data System (ADS)

    Yanovich, Denis

    2018-02-01

    We solve the problem of computing involutive and Gröbner bases for linear systems of difference equations and discuss its importance for physical and mathematical problems. The algorithm and issues concerning its implementation in C are presented, and calculation times are compared with those of competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability.
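
    For readers unfamiliar with the object being computed, the SymPy call below computes a lexicographic Gröbner basis for a small polynomial system. It is illustrative only; the paper's implementation is in C and targets linear difference systems rather than polynomial ideals:

      # Illustrative Groebner basis computation with SymPy (not the paper's code).
      from sympy import groebner, symbols

      x, y, z = symbols('x y z')
      system = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

      # With lex order the basis is triangularized, so solutions can be read
      # off back-to-front, much like back-substitution after elimination.
      G = groebner(system, x, y, z, order='lex')
      print(G)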

  4. Delay-based virtual congestion control in multi-tenant datacenters

    NASA Astrophysics Data System (ADS)

    Liu, Yuxin; Zhu, Danhong; Zhang, Dong

    2018-03-01

    With the evolution of cloud computing and virtualization, congestion control in virtual datacenters has become a basic issue for transmission in multi-tenant datacenters. To address the fairness conflicts among the heterogeneous congestion-control algorithms of multiple tenants, this paper proposes a delay-based virtual congestion control that uniformly translates the tenants' heterogeneous congestion control into delay-based feedback by introducing a hypervisor translation layer, modifying the three-way handshake for explicit feedback and packet-loss feedback, and throttling the receive window. Simulation results show that the delay-based virtual congestion control effectively resolves the unfairness among heterogeneous feedback congestion-control algorithms.
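
    The abstract does not specify the window-update rule, but the classic delay-based rule of TCP Vegas illustrates the kind of uniform delay-based feedback such a hypervisor translation layer could target. The sketch below is illustrative only; the thresholds and names are assumptions, not the paper's algorithm:

      # Vegas-style delay-based window update (illustrative, parameters assumed).
      def vegas_update(cwnd, base_rtt, current_rtt, alpha=2.0, beta=4.0):
          """Return the new congestion window in segments.

          expected = throughput if the path were empty; actual = measured
          throughput. Their gap estimates how many segments we queue in-network.
          """
          expected = cwnd / base_rtt
          actual = cwnd / current_rtt
          queued = (expected - actual) * base_rtt   # segments buffered in-network
          if queued < alpha:
              return cwnd + 1          # delay low: probe for more bandwidth
          if queued > beta:
              return max(cwnd - 1, 1)  # delay high: back off before loss occurs
          return cwnd                  # inside the target band: hold steady

      # Example: base RTT 10 ms, current RTT 15 ms, window of 30 segments.
      print(vegas_update(30, 0.010, 0.015))  # queued = 10 > beta, so shrink to 29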

  5. Scalable Production of Graphene-Based Wearable E-Textiles

    PubMed Central

    2017-01-01

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make stable reduced graphene oxide (rGO) dispersion which can then be applied to the textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of cotton fabric and also the flexibility due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with an activity-monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as sensors and flexible heating elements powered by the energy stored in graphene textile supercapacitors. PMID:29185706

  6. A Proposed Scalable Design and Simulation of Wireless Sensor Network-Based Long-Distance Water Pipeline Leakage Monitoring System

    PubMed Central

    Almazyad, Abdulaziz S.; Seddiq, Yasser M.; Alotaibi, Ahmed M.; Al-Nasheri, Ahmed Y.; BenSaleh, Mohammed S.; Obeid, Abdulfattah M.; Qasim, Syed Manzoor

    2014-01-01

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation. PMID:24561404
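
    As a flavor of the kind of model the paper derives, the sketch below estimates total energy for the prescheduled one-active-node-at-a-time duty cycle under a time-based wakeup. The power figures and the linear cost structure are illustrative assumptions, not the paper's derived equations:

      # Back-of-the-envelope duty-cycle energy estimate (all parameters assumed).
      def node_energy_per_cycle(t_active_s, t_sleep_s,
                                p_active_w=0.060,    # radio + MCU active draw
                                p_sleep_w=0.00005):  # deep-sleep draw
          """Energy in joules one node spends over one active+sleep cycle."""
          return p_active_w * t_active_s + p_sleep_w * t_sleep_s

      def pipeline_energy_per_hour(n_nodes, cycle_s=60.0):
          """Nodes take turns: each is active 1/n of the cycle, asleep otherwise."""
          t_active = cycle_s / n_nodes
          t_sleep = cycle_s - t_active
          per_node = node_energy_per_cycle(t_active, t_sleep) * (3600.0 / cycle_s)
          return n_nodes * per_node

      for n in (4, 8, 16):
          print(n, "nodes:", round(pipeline_energy_per_hour(n), 2), "J/hour total")
      # Total active time stays constant (one node awake at a time), so only the
      # aggregate sleep overhead grows with the number of deployed nodes.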

  7. Game-powered machine learning

    PubMed Central

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-01-01

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the “wisdom of the crowds.” Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., “funky jazz with saxophone,” “spooky electronica,” etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making massive amounts of multimedia data searchable. PMID:22460786

  8. Game-powered machine learning.

    PubMed

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-04-24

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making massive amounts of multimedia data searchable.
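
    The third step, in which the model directs the games toward the data that most benefits training, resembles uncertainty sampling. The sketch below illustrates that loop under this assumption; the selection criterion, names and simulated votes are illustrative, not taken from the paper:

      # Uncertainty-sampling sketch of an actively directed annotation loop.
      import random

      def tag_probability(model, song):
          """Stand-in for a trained per-tag classifier; returns P(tag applies)."""
          return model.get(song, 0.5)

      def pick_songs_for_game(model, corpus, k=3):
          """Send the k songs with the most uncertain predictions to the game."""
          return sorted(corpus, key=lambda s: abs(tag_probability(model, s) - 0.5))[:k]

      def active_loop(corpus, rounds=3):
          model = {}                              # song -> P("funky jazz"), say
          for _ in range(rounds):
              batch = pick_songs_for_game(model, corpus)
              for song in batch:                  # players vote inside the game
                  votes = random.random()         # simulated crowd consensus
                  model[song] = votes             # retrain on the fresh labels
          return model

      print(active_loop([f"song{i}" for i in range(10)]))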

  9. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M; Ahrens, James P; Lo, Li - Ta

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.

  10. A proposed scalable design and simulation of wireless sensor network-based long-distance water pipeline leakage monitoring system.

    PubMed

    Almazyad, Abdulaziz S; Seddiq, Yasser M; Alotaibi, Ahmed M; Al-Nasheri, Ahmed Y; BenSaleh, Mohammed S; Obeid, Abdulfattah M; Qasim, Syed Manzoor

    2014-02-20

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.

  11. Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.

    1998-01-01

    The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (scenario) and, for each FTP session, the file size. The network parameters include the round-trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver Window Size. The modeling approach consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
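
    The abstract does not reproduce the model's equations, but the well-known Mathis et al. approximation for loss-limited TCP throughput, rate ~ (MSS/RTT)(C/sqrt(p)), illustrates the kind of calculation such a model performs. The sketch below (constants and structure assumed; the task's actual model also covers timeouts, receiver windows and session mixes) estimates one transfer's duration:

      # FTP transfer-time estimate from the Mathis TCP throughput approximation.
      import math

      def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3.0 / 2.0)):
          """Loss-limited steady-state TCP rate in bits per second."""
          return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

      def ftp_transfer_time(file_bytes, mss_bytes, rtt_s, loss_rate, link_bps):
          """Transfer time, capping the TCP rate at the limiting link bandwidth."""
          rate = min(tcp_throughput_bps(mss_bytes, rtt_s, loss_rate), link_bps)
          return file_bytes * 8 / rate

      # 100 MB file, 1460-byte MSS, 60 ms RTT, 0.1% loss, 10 Mbit/s limiting link.
      print(round(ftp_transfer_time(100e6, 1460, 0.060, 0.001, 10e6), 1), "seconds")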

  12. Identifying the Root Causes of Wait States in Large-Scale Parallel Applications

    DOE PAGES

    Böhme, David; Geimer, Markus; Arnold, Lukas; ...

    2016-07-20

    Driven by growing application requirements and accelerated by current trends in microprocessor design, the number of processor cores on modern supercomputers is increasing from generation to generation. However, load or communication imbalance prevents many codes from taking advantage of the available parallelism, as delays of single processes may spread wait states across the entire machine. Moreover, when employing complex point-to-point communication patterns, wait states may propagate along far-reaching cause-effect chains that are hard to track manually and that complicate an assessment of the actual costs of an imbalance. Building on earlier work by Meira Jr. et al., we present a scalable approach that identifies program wait states and attributes their costs in terms of resource waste to their original cause. Ultimately, by replaying event traces in parallel both forward and backward, we can identify the processes and call paths responsible for the most severe imbalances even for runs with hundreds of thousands of processes.
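
    A minimal illustration of what forward replay detects: a "late sender" wait state, where a receive posted early idles until the matching send begins. The event schema below is an invented stand-in for a real trace format:

      # Forward-replay sketch for late-sender wait-state detection (toy trace).
      events = [  # one blocking message; the receiver posts its receive early
          {"rank": 0, "op": "send", "start": 5.0, "end": 5.1, "peer": 1},
          {"rank": 1, "op": "recv", "start": 2.0, "end": 5.1, "peer": 0},
      ]

      def late_sender_wait(send, recv):
          """Time the receiver idles because the sender arrived late."""
          return max(0.0, send["start"] - recv["start"])

      sends = [e for e in events if e["op"] == "send"]
      recvs = [e for e in events if e["op"] == "recv"]
      for s, r in zip(sends, recvs):        # matched by order in this toy trace
          wait = late_sender_wait(s, r)
          if wait > 0:
              # Attributing this cost to its root cause (e.g., the imbalance
              # that delayed rank 0) is what the backward replay performs.
              print(f"rank {r['rank']} waited {wait:.1f}s on rank {s['rank']}")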

  13. Identifying the Root Causes of Wait States in Large-Scale Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Böhme, David; Geimer, Markus; Arnold, Lukas

    Driven by growing application requirements and accelerated by current trends in microprocessor design, the number of processor cores on modern supercomputers is increasing from generation to generation. However, load or communication imbalance prevents many codes from taking advantage of the available parallelism, as delays of single processes may spread wait states across the entire machine. Moreover, when employing complex point-to-point communication patterns, wait states may propagate along far-reaching cause-effect chains that are hard to track manually and that complicate an assessment of the actual costs of an imbalance. Building on earlier work by Meira Jr. et al., we present a scalable approach that identifies program wait states and attributes their costs in terms of resource waste to their original cause. Ultimately, by replaying event traces in parallel both forward and backward, we can identify the processes and call paths responsible for the most severe imbalances even for runs with hundreds of thousands of processes.

  14. Measurement-induced entanglement for excitation stored in remote atomic ensembles.

    PubMed

    Chou, C W; de Riedmatten, H; Felinto, D; Polyakov, S V; van Enk, S J; Kimble, H J

    2005-12-08

    A critical requirement for diverse applications in quantum information science is the capability to disseminate quantum resources over complex quantum networks. For example, the coherent distribution of entangled quantum states together with quantum memory (for storing the states) can enable scalable architectures for quantum computation, communication and metrology. Here we report observations of entanglement between two atomic ensembles located in distinct, spatially separated set-ups. Quantum interference in the detection of a photon emitted by one of the samples projects the otherwise independent ensembles into an entangled state with one joint excitation stored remotely in 10^5 atoms at each site. After a programmable delay, we confirm entanglement by mapping the state of the atoms to optical fields and measuring mutual coherences and photon statistics for these fields. We thereby determine a quantitative lower bound for the entanglement of the joint state of the ensembles. Our observations represent significant progress in the ability to distribute and store entangled quantum states.

  15. Reliable Radiation Hybrid Maps: An Efficient Scalable Clustering-based Approach

    USDA-ARS?s Scientific Manuscript database

    The process of mapping markers from radiation hybrid mapping (RHM) experiments is equivalent to the traveling salesman problem and therefore has combinatorial complexity. As an additional problem, experiments typically result in some unreliable markers that reduce the overall quality of the map. We ...

  16. High-performance graphene-based supercapacitors made by a scalable blade-coating approach

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Liu, Jinzhang; Mirri, Francesca; Pasquali, Matteo; Motta, Nunzio; Holmes, John W.

    2016-04-01

    Graphene oxide (GO) sheets can form liquid crystals (LCs) in their aqueous dispersions, which become more viscous as the LC character grows stronger. In this work we combine the viscous LC-GO solution with the blade-coating technique to make GO films, for constructing graphene-based supercapacitors in a scalable way. Reduced GO (rGO) films are prepared by wet chemical methods, using either hydrazine (HZ) or hydroiodic acid (HI). Solid-state supercapacitors with rGO films as electrodes and highly conductive carbon nanotube films as current collectors are fabricated, and the capacitive properties of the different rGO films are compared. It is found that the HZ-rGO film is superior to the HI-rGO film in achieving high capacitance, owing to the 3D structure of graphene sheets in the electrode. Compared to a gelled electrolyte, the use of liquid electrolyte (H2SO4) further increases the capacitance to 265 F per gram (corresponding to 52 mF per cm^2) of the HZ-rGO film.

  17. Ceph-based storage services for Run2 and beyond

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Lamanna, Massimo; Mascetti, Luca; Peters, Andreas J.; Rousseau, Hervé

    2015-12-01

    In 2013, CERN IT evaluated then deployed a petabyte-scale Ceph cluster to support OpenStack use-cases in production. With now more than a year of smooth operations, we will present our experience and tuning best-practices. Beyond the cloud storage use-cases, we have been exploring Ceph-based services to satisfy the growing storage requirements during and after Run2. First, we have developed a Ceph back-end for CASTOR, allowing this service to deploy thin disk server nodes which act as gateways to Ceph; this feature marries the strong data archival and cataloging features of CASTOR with the resilient and high performance Ceph subsystem for disk. Second, we have developed RADOSFS, a lightweight storage API which builds a POSIX-like filesystem on top of the Ceph object layer. When combined with Xrootd, RADOSFS can offer a scalable object interface compatible with our HEP data processing applications. Lastly the same object layer is being used to build a scalable and inexpensive NFS service for several user communities.
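
    The services above sit on Ceph's object layer (RADOS), which RADOSFS wraps with POSIX-like file semantics. The sketch below uses the standard librados Python binding to store and fetch a single object; the conffile path and pool name are assumptions for illustration, and this is the raw object interface rather than RADOSFS's own API:

      # Store and read back one object via the librados Python binding.
      import rados

      cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed config path
      cluster.connect()
      try:
          ioctx = cluster.open_ioctx("hep-data")             # assumed pool name
          try:
              ioctx.write_full("run2/event-0001", b"payload bytes")
              print(ioctx.read("run2/event-0001"))           # b'payload bytes'
          finally:
              ioctx.close()
      finally:
          cluster.shutdown()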

  18. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization.

  19. High-efficiency and air-stable P3HT-based polymer solar cells with a new non-fullerene acceptor

    PubMed Central

    Holliday, Sarah; Ashraf, Raja Shahid; Wadsworth, Andrew; Baran, Derya; Yousaf, Syeda Amber; Nielsen, Christian B.; Tan, Ching-Hong; Dimitrov, Stoichko D.; Shang, Zhengrong; Gasparini, Nicola; Alamoudi, Maha; Laquai, Frédéric; Brabec, Christoph J.; Salleo, Alberto; Durrant, James R.; McCulloch, Iain

    2016-01-01

    Solution-processed organic photovoltaics (OPV) offer the attractive prospect of low-cost, light-weight and environmentally benign solar energy production. The highest-efficiency OPVs at present use low-bandgap donor polymers, many of which suffer from problems with stability and synthetic scalability. They also rely on fullerene-based acceptors, which themselves have issues with cost, stability and limited spectral absorption. Here we present a new non-fullerene acceptor that has been specifically designed to give improved performance alongside the wide-bandgap donor poly(3-hexylthiophene), a polymer with significantly better prospects for commercial OPV due to its relative scalability and stability. Thanks to the well-matched optoelectronic and morphological properties of these materials, efficiencies of 6.4% are achieved, which is the highest reported for fullerene-free P3HT devices. In addition, dramatically improved air stability is demonstrated relative to other high-efficiency OPV, showing the excellent potential of this new material combination for future technological applications. PMID:27279376

  20. Highly sensitive MoS2 photodetectors with graphene contacts

    NASA Astrophysics Data System (ADS)

    Han, Peize; St. Marie, Luke; Wang, Qing X.; Quirk, Nicholas; El Fatimy, Abdel; Ishigami, Masahiro; Barbara, Paola

    2018-05-01

    Two-dimensional materials such as graphene and transition metal dichalcogenides (TMDs) are ideal candidates to create ultra-thin electronics suitable for flexible substrates. Although optoelectronic devices based on TMDs have demonstrated remarkable performance, scalability is still a significant issue. Most devices are created using techniques that are not suitable for mass production, such as mechanical exfoliation of monolayer flakes and patterning by electron-beam lithography. Here we show that large-area MoS2 grown by chemical vapor deposition and patterned by photolithography yields highly sensitive photodetectors, with record shot-noise-limited detectivities of 8.7 × 10^14 Jones in ambient conditions and even higher when sealed with a protective layer. These detectivity values are higher than the highest values reported for photodetectors based on exfoliated MoS2. We study MoS2 devices with gold electrodes and graphene electrodes. The devices with graphene electrodes have a tunable band alignment and are especially attractive for scalable ultra-thin flexible optoelectronics.

  1. Evaluation of a “Field Cage” for Electric Field Control in GaN-Based HEMTs That Extends the Scalability of Breakdown Into the kV Regime

    DOE PAGES

    Tierney, Brian D.; Choi, Sukwon; DasGupta, Sandeepan; ...

    2017-08-16

    A distributed impedance “field cage” structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors (HEMTs) operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dVds/dt = 100 V/ns. For both DC and transient results, the voltage between the gate and drain is laterally distributed, ensuring the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme indicates promise for achieving breakdown voltage scalability to a few kV.

  2. Scalable and efficient separation of hydrogen isotopes using graphene-based electrochemical pumping

    NASA Astrophysics Data System (ADS)

    Lozada-Hidalgo, M.; Zhang, S.; Hu, S.; Esfandiar, A.; Grigorieva, I. V.; Geim, A. K.

    2017-05-01

    Thousands of tons of isotopic mixtures are processed annually for heavy-water production and tritium decontamination. The existing technologies remain extremely energy intensive and require large capital investments, so new approaches are needed to reduce the industry's footprint. Recently, micrometre-size crystals of graphene were shown to act as efficient sieves for hydrogen isotopes pumped through graphene electrochemically. Here we report a fully scalable approach, using graphene obtained by chemical vapour deposition, which achieves a proton-deuteron separation factor of around 8, despite cracks and imperfections. The energy consumption is projected to be orders of magnitude smaller than that of existing technologies. A membrane based on 30 m2 of graphene, a readily accessible amount, could provide a heavy-water output comparable to that of modern plants. Even higher efficiency is expected for tritium separation. With no fundamental obstacles for scaling up, the technology's simplicity, efficiency and green credentials call for consideration by the nuclear and related industries.

  3. Radio frequency measurements of tunnel couplings and singlet–triplet spin states in Si:P quantum dots

    PubMed Central

    House, M. G.; Kobayashi, T.; Weber, B.; Hile, S. J.; Watson, T. F.; van der Heijden, J.; Rogge, S.; Simmons, M. Y.

    2015-01-01

    Spin states of the electrons and nuclei of phosphorus donors in silicon are strong candidates for quantum information processing applications given their excellent coherence times. Designing a scalable donor-based quantum computer will require both knowledge of the relationship between device geometry and electron tunnel couplings, and a spin readout strategy that uses minimal physical space in the device. Here we use radio frequency reflectometry to measure singlet–triplet states of a few-donor Si:P double quantum dot and demonstrate that the exchange energy can be tuned by at least two orders of magnitude, from 20 μeV to 8 meV. We measure dot–lead tunnel rates by analysis of the reflected signal and show that they change from 100 MHz to 22 GHz as the number of electrons on a quantum dot is increased from 1 to 4. These techniques present an approach for characterizing, operating and engineering scalable qubit devices based on donors in silicon. PMID:26548556

  4. Scalable Production of Mechanically Robust Antireflection Film for Omnidirectional Enhanced Flexible Thin Film Solar Cells.

    PubMed

    Wang, Min; Ma, Pengsha; Yin, Min; Lu, Linfeng; Lin, Yinyue; Chen, Xiaoyuan; Jia, Wei; Cao, Xinmin; Chang, Paichun; Li, Dongdong

    2017-09-01

    Antireflection (AR) at the interface between the air and the incident window material is paramount to boosting the performance of photovoltaic devices. 3D nanostructures have attracted tremendous interest for reducing reflection, but such structures are vulnerable to the harsh outdoor environment, so AR films with improved mechanical properties are desirable for industrial applications. Herein, scalable production of flexible AR films with microsized structures by a roll-to-roll imprinting process is proposed; the films possess hydrophobic properties and much improved robustness. The AR films can potentially be used for a wide range of photovoltaic devices, whether based on rigid or flexible substrates. As a demonstration, the AR films are integrated with commercial Si-based triple-junction thin film solar cells. The AR film works as an effective tool to control the light path and guide the incident light inward more efficiently by exciting hybrid optical modes, which results in a broadband and omnidirectional performance enhancement.

  5. Highly sensitive MoS2 photodetectors with graphene contacts.

    PubMed

    Han, Peize; St Marie, Luke; Wang, Qing X; Quirk, Nicholas; El Fatimy, Abdel; Ishigami, Masahiro; Barbara, Paola

    2018-05-18

    Two-dimensional materials such as graphene and transition metal dichalcogenides (TMDs) are ideal candidates to create ultra-thin electronics suitable for flexible substrates. Although optoelectronic devices based on TMDs have demonstrated remarkable performance, scalability is still a significant issue. Most devices are created using techniques that are not suitable for mass production, such as mechanical exfoliation of monolayer flakes and patterning by electron-beam lithography. Here we show that large-area MoS2 grown by chemical vapor deposition and patterned by photolithography yields highly sensitive photodetectors, with record shot-noise-limited detectivities of 8.7 × 10^14 Jones in ambient conditions and even higher when sealed with a protective layer. These detectivity values are higher than the highest values reported for photodetectors based on exfoliated MoS2. We study MoS2 devices with gold electrodes and graphene electrodes. The devices with graphene electrodes have a tunable band alignment and are especially attractive for scalable ultra-thin flexible optoelectronics.

  6. Robust resistive memory devices using solution-processable metal-coordinated azo aromatics

    NASA Astrophysics Data System (ADS)

    Goswami, Sreetosh; Matula, Adam J.; Rath, Santi P.; Hedström, Svante; Saha, Surajit; Annamalai, Meenakshi; Sengupta, Debabrata; Patra, Abhijeet; Ghosh, Siddhartha; Jani, Hariom; Sarkar, Soumya; Motapothula, Mallikarjuna Rao; Nijhuis, Christian A.; Martin, Jens; Goswami, Sreebrata; Batista, Victor S.; Venkatesan, T.

    2017-12-01

    Non-volatile memories will play a decisive role in the next generation of digital technology. Flash memories are currently the key player in the field, yet they fail to meet the commercial demands of scalability and endurance. Resistive memory devices, and in particular memories based on low-cost, solution-processable and chemically tunable organic materials, are promising alternatives explored by the industry. However, to date, they have been lacking the performance and mechanistic understanding required for commercial translation. Here we report a resistive memory device based on a spin-coated active layer of a transition-metal complex, which shows high reproducibility (~350 devices), fast switching (<=30 ns), excellent endurance (~10^12 cycles), stability (>10^6 s) and scalability (down to ~60 nm^2). In situ Raman and ultraviolet-visible spectroscopy alongside spectroelectrochemistry and quantum chemical calculations demonstrate that the redox state of the ligands determines the switching states of the device whereas the counterions control the hysteresis. This insight may accelerate the technological deployment of organic resistive memories.

  7. The BioIntelligence Framework: a new computational platform for biomedical knowledge computing

    PubMed Central

    Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles

    2013-01-01

    Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information. PMID:22859646
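
    To make the data-model point concrete: unlike a graph edge, a hyperedge can relate any number of nodes at once, which suits the multi-lateral patient-to-knowledge relationships described above. A minimal sketch follows; all labels and the structure are invented for illustration and are not the framework's API:

      # Minimal hypergraph: one hyperedge can link arbitrarily many nodes.
      class Hypergraph:
          def __init__(self):
              self.edges = {}                 # edge name -> set of member nodes

          def add_edge(self, name, *nodes):
              self.edges[name] = set(nodes)

          def edges_containing(self, node):
              return [e for e, members in self.edges.items() if node in members]

      hg = Hypergraph()
      # One clinical observation relating a patient, a variant, a drug and a
      # trial in a single relationship (trial ID is a made-up placeholder):
      hg.add_edge("obs-17", "patient:001", "variant:BRAF-V600E",
                  "drug:vemurafenib", "trial:NCT01234567")
      hg.add_edge("obs-18", "patient:002", "variant:BRAF-V600E")

      # Query: every relationship that involves this variant.
      print(hg.edges_containing("variant:BRAF-V600E"))   # ['obs-17', 'obs-18']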

  8. Scalable Production of Mechanically Robust Antireflection Film for Omnidirectional Enhanced Flexible Thin Film Solar Cells

    PubMed Central

    Wang, Min; Ma, Pengsha; Lu, Linfeng; Lin, Yinyue; Chen, Xiaoyuan; Jia, Wei; Cao, Xinmin; Chang, Paichun

    2017-01-01

    Antireflection (AR) at the interface between the air and the incident window material is paramount to boosting the performance of photovoltaic devices. 3D nanostructures have attracted tremendous interest for reducing reflection, but such structures are vulnerable to the harsh outdoor environment, so AR films with improved mechanical properties are desirable for industrial applications. Herein, scalable production of flexible AR films with microsized structures by a roll-to-roll imprinting process is proposed; the films possess hydrophobic properties and much improved robustness. The AR films can potentially be used for a wide range of photovoltaic devices, whether based on rigid or flexible substrates. As a demonstration, the AR films are integrated with commercial Si-based triple-junction thin film solar cells. The AR film works as an effective tool to control the light path and guide the incident light inward more efficiently by exciting hybrid optical modes, which results in a broadband and omnidirectional performance enhancement. PMID:28932667

  9. Mining algorithm for association rules in big data based on Hadoop

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Wang, Xiaojing; Zhang, Lijun; Qiao, Liying

    2018-04-01

    In order to solve the problem that traditional association-rule mining algorithms can no longer meet the needs of mining large amounts of data in terms of efficiency and scalability, the FP-Growth algorithm is taken as an example and parallelized on the Hadoop framework with the MapReduce model. On this basis, it is further improved with a transaction-reduction method to enhance mining efficiency. Experiments on a Hadoop cluster, covering verification of the parallel mining results, efficiency comparisons between the serial and parallel versions, and the relationships of mining time to node count and to data volume, show that the parallelized FP-Growth algorithm accurately mines frequent item sets with good performance and scalability. It can thus better meet the requirements of big data mining by efficiently extracting frequent item sets and association rules from large datasets.
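
    For a flavor of the two phases, the framework-free sketch below mimics the map and reduce steps of a first frequent-item counting pass and then applies transaction reduction (dropping items that cannot appear in any frequent itemset) before the FP-Growth pass. Names and the support threshold are illustrative stand-ins for the Hadoop jobs:

      # MapReduce-style frequent-item counting plus transaction reduction.
      from collections import Counter
      from itertools import chain

      transactions = [["milk", "bread"], ["milk", "beer"], ["milk", "bread", "eggs"]]
      MIN_SUPPORT = 2

      # Map phase: each mapper emits (item, 1) pairs for its split of transactions.
      pairs = [(item, 1) for item in chain.from_iterable(transactions)]

      # Reduce phase: sum counts per key (Hadoop's shuffle groups by item here).
      counts = Counter()
      for item, one in pairs:
          counts[item] += one
      frequent = {i for i, c in counts.items() if c >= MIN_SUPPORT}

      # Transaction reduction: infrequent items can never join a frequent
      # itemset, so strip them before mining the shrunken transactions.
      reduced = [[i for i in t if i in frequent] for t in transactions]
      print(frequent)   # {'milk', 'bread'}
      print(reduced)    # [['milk', 'bread'], ['milk'], ['milk', 'bread']]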

  10. Evaluation of a “Field Cage” for Electric Field Control in GaN-Based HEMTs That Extends the Scalability of Breakdown Into the kV Regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tierney, Brian D.; Choi, Sukwon; DasGupta, Sandeepan

    A distributed impedance “field cage” structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors (HEMTs) operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dVds/dt = 100 V/ns. For both DC and transient results, the voltage between the gate and drain is laterally distributed, ensuring the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme indicates promise for achieving breakdown voltage scalability to a few kV.

  11. Coupled Physics Environment (CouPE) library - Design, Implementation, and Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.

    Over several years, high fidelity, validated mono-physics solvers with proven scalability on peta-scale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a unified mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. In this report, we present details on the design decisions and developments in CouPE, an acronym that stands for Coupled Physics Environment, which orchestrates a coupled physics solver through the interfaces exposed by the MOAB array-based unstructured mesh; both are part of the SIGMA (Scalable Interfaces for Geometry and Mesh-Based Applications) toolkit. The SIGMA toolkit contains libraries that enable scalable geometry and unstructured mesh creation and handling in a memory- and computationally-efficient implementation. The CouPE version being prepared for a full open-source release, along with updated documentation, will contain several useful examples that will enable users to start developing their applications natively using the native MOAB mesh and to couple their models to existing physics applications to analyze and solve real-world problems of interest. An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is also being investigated as part of the NEAMS RPL, to tightly couple neutron transport, thermal-hydraulics and structural mechanics physics under the SHARP framework. This report summarizes the efforts that have been invested in CouPE to bring together several existing physics applications, namely PROTEUS (a neutron transport code), Nek5000 (a computational fluid dynamics code) and Diablo (a structural mechanics code). The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The design of CouPE, along with the motivations that led to implementation choices, is also discussed. The first release of the library will differ from the current version of the code that integrates the components in SHARP, and the need for forking the source base is also explained. Enhancements in functionality and improved user guides will be available as part of the release. CouPE v0.1 is scheduled for an open-source release in December 2014 along with SIGMA v1.1 components, which provide support for language-agnostic mesh loading, traversal and query interfaces along with scalable solution transfer of fields between different physics codes. The coupling methodology and software interfaces of the library are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the CouPE library.

  12. Power-Scalable Blue-Green Bessel Beams

    DTIC Science & Technology

    2016-02-23

    Power-Scalable Blue-Green Bessel Beams. Siddharth Ramachandran, Photonics Center, Boston University, 8 Saint Mary's Street, Boston, MA 02215. Final technical report, JAN 2011 - DEC 2013. Keywords: fiber lasers, non-traditional emission wavelengths, high-power blue-green tunable lasers.

  13. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica. [Abstract fragment:] Current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of ...

  14. Modular Universal Scalable Ion-trap Quantum Computer

    DTIC Science & Technology

    2016-06-02

    Modular Universal Scalable Ion-trap Quantum Computer. Final Report, 1 Aug 2010 - 31 Jan 2016. The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally-expandable ion trap quantum computer. Keywords: ion trap quantum computation, scalable modular architectures.

  15. Control-based method to identify underlying delays of a nonlinear dynamical system.

    PubMed

    Yu, Dongchuan; Frasca, Mattia; Liu, Fang

    2008-10-01

    We suggest several stationary-state control-based delay identification methods, which do not require any structural information about the controlled systems and are applicable to systems described by delayed ordinary differential equations. The proposed technique includes three steps: (i) driving a system to a steady state; (ii) perturbing the control signal to shift the steady state; and (iii) identifying all delays by detecting the time at which the system is abruptly drawn out of stationarity. Some aspects especially important for applications are discussed as well, including interaction delay identification, stationary state convergence speed, performance comparison, and the influence of noise on delay identification. Several examples are presented to illustrate the reliability and robustness of all the delay identification methods suggested.
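
    The three steps can be seen on a toy scalar system dx/dt = -x(t) + u(t - tau): hold u constant until x settles, step u, and read the delay off as the time until x leaves stationarity. The sketch below (dynamics, threshold and step sizes all invented for illustration) recovers tau this way:

      # Toy illustration of step (iii): the departure time reveals the delay.
      TAU = 0.7          # hidden delay to be identified (seconds)
      DT = 0.001         # Euler step

      def identify_delay(t_perturb=5.0, t_end=8.0, eps=1e-4):
          n = int(t_end / DT)
          u = [1.0 if i * DT < t_perturb else 1.5 for i in range(n)]  # control step
          lag = int(TAU / DT)
          x, x_star = 1.0, 1.0            # start at the steady state x* = u0
          for i in range(n):
              u_delayed = u[i - lag] if i >= lag else 1.0
              x += DT * (-x + u_delayed)  # Euler step of dx/dt = -x + u(t - tau)
              if i * DT > t_perturb and abs(x - x_star) > eps:
                  return i * DT - t_perturb   # time until loss of stationarity
          return None

      print(f"identified delay ~ {identify_delay():.3f} s (true {TAU} s)")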

  16. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark

    PubMed Central

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-01-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389

  17. Electrical delay line multiplexing for pulsed mode radiation detectors

    NASA Astrophysics Data System (ADS)

    Vinke, Ruud; Yeom, Jung Yeol; Levin, Craig S.

    2015-04-01

    Medical imaging systems are composed of a large number of position sensitive radiation detectors to provide high resolution imaging. For example, whole-body Positron Emission Tomography (PET) systems are typically composed of thousands of scintillation crystal elements, which are coupled to photosensors. Thus, PET systems greatly benefit from methods to reduce the number of data acquisition channels, in order to reduce the system development cost and complexity. In this paper we present an electrical delay line multiplexing scheme that can significantly reduce the number of readout channels, while preserving the signal integrity required for good time resolution performance. We experimented with two 4 × 4 LYSO crystal arrays, with crystal elements having 3 mm × 3 mm × 5 mm and 3 mm × 3 mm × 20 mm dimensions, coupled to 16 Hamamatsu MPPC S10931-050P SiPM elements. Results show that each crystal could be accurately identified, even in the presence of scintillation light sharing and inter-crystal Compton scatter among neighboring crystal elements. The multiplexing configuration degraded the coincidence timing resolution from ∼243 ps FWHM to ∼272 ps FWHM when 16 SiPM signals were combined into a single channel for the 4 × 4 LYSO crystal array with 3 mm × 3 mm × 20 mm crystal element dimensions, in coincidence with a 3 mm × 3 mm × 5 mm LYSO crystal pixel. The method is flexible to allow multiplexing configurations across different block detectors, and is scalable to an entire ring of detectors.

  18. Scalable alignment and transfer of nanowires in a Spinning Langmuir Film.

    PubMed

    Zhu, Ren; Lai, Yicong; Nguyen, Vu; Yang, Rusen

    2014-10-21

    Many nanomaterial-based integrated nanosystems require the assembly of nanowires and nanotubes into ordered arrays. A generic alignment method should be simple and fast for proof-of-concept studies by a researcher, and low-cost and scalable for mass production in industry. Here we have developed a novel Spinning-Langmuir-Film technique to fulfill both requirements. We used surfactant-enhanced shear flow to align inorganic and organic nanowires, which could be easily transferred to other substrates and made ready for device fabrication in less than 20 minutes. The areal density of aligned nanowires can be controlled over a wide range, from 16 mm^-2 to 258 mm^-2, through compression of the film. The surface surfactant layer significantly influences the quality of alignment and has been investigated in detail.

  19. Efficient population-scale variant analysis and prioritization with VAPr.

    PubMed

    Birmingham, Amanda; Mark, Adam M; Mazzaferro, Carlo; Xu, Guorong; Fisch, Kathleen M

    2018-04-06

    With the growing availability of population-scale whole-exome and whole-genome sequencing, demand for reproducible, scalable variant analysis has spread within genomic research communities. To address this need, we introduce the Python package VAPr (Variant Analysis and Prioritization). VAPr leverages the existing annotation tools ANNOVAR and MyVariant.info with MongoDB-based flexible storage and filtering functionality. It offers biologists and bioinformatics generalists easy-to-use and scalable analysis and prioritization of genomic variants from large cohort studies. VAPr is developed in Python and is available for free use and extension under the MIT License. An install package is available on PyPI at https://pypi.python.org/pypi/VAPr, while source code and extensive documentation are on GitHub at https://github.com/ucsd-ccbb/VAPr. Contact: kfisch@ucsd.edu.

  20. Scalable total synthesis and comprehensive structure-activity relationship studies of the phytotoxin coronatine.

    PubMed

    Littleson, Mairi M; Baker, Christopher M; Dalençon, Anne J; Frye, Elizabeth C; Jamieson, Craig; Kennedy, Alan R; Ling, Kenneth B; McLachlan, Matthew M; Montgomery, Mark G; Russell, Claire J; Watson, Allan J B

    2018-03-16

    Natural phytotoxins are valuable starting points for agrochemical design. Acting as a jasmonate agonist, coronatine represents an attractive herbicidal lead with novel mode of action, and has been an important synthetic target for agrochemical development. However, both restricted access to quantities of coronatine and a lack of a suitably scalable and flexible synthetic approach to its constituent natural product components, coronafacic and coronamic acids, has frustrated development of this target. Here, we report gram-scale production of coronafacic acid that allows a comprehensive structure-activity relationship study of this target. Biological assessment of a >120 member library combined with computational studies have revealed the key determinants of potency, rationalising hypotheses held for decades, and allowing future rational design of new herbicidal leads based on this template.
