NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which offers scalability, robust maintainability, and a distributed management environment in the optical transport network. The OXC system we are developing, divided into hardware and internal and external software, is made up of the OXC subsystem, comprising the Optical Transport Network (OTN) sublayer hardware and the optical switch control system; the signaling control protocol subsystem, which performs User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control; the Operation Administration Maintenance & Provisioning (OAM&P) subsystem; and the network management subsystem. The OXC management control system can support flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, be extended or reduced without interrupting OAM&P services, be operated remotely, and provide both a global view and detailed information for network planners and operators; its Common Object Request Broker Architecture (CORBA)-based open system architecture allows intelligent service networking functions to be added and removed easily in the future. To meet these requirements, we adopt an object-oriented development method throughout system analysis, design, and implementation to build an OXC management control system with scalability, maintainability, and a distributed management environment. As a consequence, the componentization of the OXC operation management functions of each subsystem yields robust maintainability and increases code reusability, and the component-based OXC management control system architecture is flexible and scalable by nature.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems from various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We have developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in parallel mode at various degrees of detail.
Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola
2014-02-10
We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in a port-count-independent optical switch reconfiguration time. An RF-tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by reusing the label wavelength without occupying extra bandwidth, space, or network resources, which further improves latency within a simple structure. Dynamic switching, including multicast operation, is validated for a 4 x 4 system. Error-free operation on 40 Gb/s data packets has been achieved with only 1 dB penalty. The system can handle an input load of up to 0.5 with a packet loss lower than 10^-5 and an average latency of less than 500 ns when a buffer size of 16 packets is employed. An investigation of scalability also indicates that the proposed system could potentially scale to a large port count with limited power penalty.
A scalable, self-analyzing digital locking system for use on quantum optics experiments.
Sparkes, B M; Chrzanowski, H M; Parrain, D P; Buchler, B C; Lam, P K; Symul, T
2011-07-01
Digital control of optics experiments has many advantages over analog control systems, specifically in terms of scalability, cost, flexibility, and the integration of system information into one location. We present a digital control system, freely available for download online, specifically designed for quantum optics experiments that allows for automatic and sequential re-locking of optical components. We show how the inbuilt locking analysis tools, including a white-noise network analyzer, can be used to help optimize individual locks, and we verify the long-term stability of the digital system. Finally, we present an example of the benefits of digital locking for quantum optics by applying the code to a specific experiment used to characterize optical Schrödinger cat states.
NASA Technical Reports Server (NTRS)
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
1994-01-01
A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, together with support for a modular payload for multiple applications. The MARS design is scalable, reconfigurable, and cost-effective owing to the use of modern open-system-architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute-position autonomous navigation, and relative-position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
Li, Bo; Wang, Xin; Jung, Hyun Young; Kim, Young Lae; Robinson, Jeremy T.; Zalalutdinov, Maxim; Hong, Sanghyun; Hao, Ji; Ajayan, Pulickel M.; Wan, Kai-Tak; Jung, Yung Joon
2015-01-01
Suspended single-walled carbon nanotubes (SWCNTs) offer unique functionalities for electronic and electromechanical systems. Due to their outstanding flexible nature, suspended SWCNT architectures have great potential for integration into flexible electronic systems. However, current techniques for integrating SWCNT architectures with flexible substrates are largely absent, especially in a manner that is both scalable and well controlled. Here, we present a new nanostructured transfer paradigm to print scalable and well-defined suspended nano/microscale SWCNT networks on 3D patterned flexible substrates with micro- to nanoscale precision. The underlying printing/transfer mechanism, as well as the mechanical, electromechanical, and mechanical resonance properties of the suspended SWCNTs are characterized, including identifying metrics relevant for reliable and sensitive device structures. Our approach represents a fast, scalable and general method for building suspended nano/micro SWCNT architectures suitable for flexible sensing and actuation systems. PMID:26511284
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as the target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense: any decoding of the input stream corresponds to a perfectly predictable upper bound on the L-infinite distortion. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, a scalable 3D object encoding system that is part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, enables a fast real-time implementation of rate allocation, and preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
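The distinction between the two distortion criteria can be illustrated with a small sketch (an illustrative toy example, not the codec from the paper): the L-2 metric averages squared vertex errors over the whole mesh, so a large error on a single vertex nearly vanishes, while the L-infinite metric reports the worst-case per-vertex error directly.

```python
def l2_distortion(orig, decoded):
    """Mean-square error over all vertices (the traditional target metric)."""
    n = len(orig)
    return sum(sum((a - b) ** 2 for a, b in zip(v, w))
               for v, w in zip(orig, decoded)) / n

def linf_distortion(orig, decoded):
    """Maximum coordinate error over all vertices: the quantity an
    L-infinite-constrained coder guarantees to keep below a set bound."""
    return max(abs(a - b) for v, w in zip(orig, decoded) for a, b in zip(v, w))

# Toy mesh: 100 vertices, exactly one badly reconstructed vertex.
orig = [(0.0, 0.0, 0.0)] * 100
decoded = [(0.5, 0.0, 0.0)] + [(0.0, 0.0, 0.0)] * 99

print(l2_distortion(orig, decoded))   # the local error is averaged away
print(linf_distortion(orig, decoded)) # the local error is exposed
```

Under an L-2 target the large local defect contributes only 0.25/100 to the score, whereas the L-infinite metric flags it at full magnitude, which is why an L-infinite bound gives a guarantee on every vertex.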
Scalable digital hardware for a trapped ion quantum computer
NASA Astrophysics Data System (ADS)
Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang
2016-12-01
Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency-stabilizing the continuous-wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits; stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits; providing a large number of analog voltage sources to drive the trap electrodes; and maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits
NASA Astrophysics Data System (ADS)
Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng
2017-11-01
We propose a novel theoretical scheme for quantum computation. Nuclear-spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and coherently control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfacing from controlling and observing a complex quantum system down to a simple quantum actuator, and it provides a novel way to handle complex quantum systems.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
2014-05-01
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases, and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream; routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, causing them to increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability.
The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
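The reliability-ordered layer-to-route mapping described above can be sketched as follows (a simplified illustration with hypothetical data structures, not the authors' implementation): routes are ranked by reliability, and SVC layers are assigned to them base-first, so the base layer always travels on the most reliable route.

```python
def assign_layers_to_routes(layers, routes):
    """Map H.264/SVC layers to mesh routes in descending order of reliability.

    layers: layer names ordered base-first (base, then enhancement layers).
    routes: (route_id, reliability) pairs, reliability in [0, 1].
    Returns {layer: route_id}; the base layer gets the most reliable route.
    """
    ranked = sorted(routes, key=lambda r: r[1], reverse=True)
    # If there are more layers than routes, reuse routes cyclically.
    return {layer: ranked[i % len(ranked)][0] for i, layer in enumerate(layers)}

layers = ["base", "enh1", "enh2"]
routes = [("A", 0.92), ("B", 0.99), ("C", 0.85)]
print(assign_layers_to_routes(layers, routes))
# {'base': 'B', 'enh1': 'A', 'enh2': 'C'}
```

Assigning layers in this order means that if a less reliable route drops packets, only higher enhancement layers are lost and the decoder still reconstructs a usable base-quality picture.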
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
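The idea can be sketched as a fixed-interval benchmark loop (an illustrative reconstruction, not the patented implementation): the machine refines a solution, here a simple alternating-series approximation of ln 2, for a fixed time budget, and the rating is the degree of refinement reached when time expires.

```python
import time

def benchmark(interval_s=0.1):
    """Run ever-finer refinement tasks for a fixed interval and report progress.

    Each task adds one term of the alternating series for ln(2); the rating is
    the number of tasks (degree of resolution) completed within the interval.
    """
    deadline = time.perf_counter() + interval_s
    estimate, terms = 0.0, 0
    while time.perf_counter() < deadline:
        terms += 1
        estimate += (-1) ** (terms + 1) / terms  # perform the next task
    return terms, estimate  # rating: progress through the scalable task set

terms, estimate = benchmark()
print(f"completed {terms} tasks, ln(2) estimate {estimate:.6f}")
```

A faster computer completes more tasks in the same interval and therefore reaches a finer resolution, which is exactly the quantity the rating reflects; unlike a fixed-workload benchmark, the workload scales to the machine.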
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster, and we analyze the scalability, flexibility, and reliability benefits and limitations of that approach.
Power-rate-distortion analysis for wireless video communication under energy constraint
NASA Astrophysics Data System (ADS)
He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq
2004-01-01
In video coding and streaming over wireless communication networks, power-demanding video encoding runs on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of a wireless video communication system under an energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends traditional R-D analysis with another dimension: power consumption. Specifically, in this paper we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture that is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, this complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behavior of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system can automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
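The kind of trade-off a P-R-D model captures can be illustrated with a toy parametric form (the function and constants below are illustrative assumptions, not the paper's fitted model): under DVS, power grows roughly cubically with clock frequency, so a normalized power budget p buys encoding speed proportional to p ** (1/3), and distortion decays with both rate and that effective speed.

```python
import math

def distortion(rate_bpp, power, sigma2=1.0, gamma=4.0):
    """Toy P-R-D surface: distortion falls with rate and with power.

    The power ** (1/3) factor reflects the assumed cubic power-frequency
    relation of dynamic voltage scaling; sigma2 and gamma are made-up
    source/model constants for illustration only.
    """
    return sigma2 * math.exp(-gamma * rate_bpp * power ** (1.0 / 3.0))

def best_power(rate_bpp, energy_budget_j, duration_s):
    """Pick the highest sustainable power under the energy budget.

    In this toy model distortion decreases monotonically with power, so the
    encoder should spend the whole budget evenly over the session.
    """
    p_max = energy_budget_j / duration_s
    return p_max, distortion(rate_bpp, p_max)

p, d = best_power(rate_bpp=0.5, energy_budget_j=30.0, duration_s=60.0)
print(p, round(d, 4))
```

The point of the sketch is the mechanism, not the numbers: given an energy budget and a session length, the complexity (power) control parameter is chosen to match the sustainable power level, and the model predicts the picture quality that level can deliver.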
A scalable quantum computer with ions in an array of microtraps
Cirac; Zoller
2000-04-06
Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates, and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models: accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics, and nuclear magnetic resonance) and solid-state systems (using nuclear spins, quantum dots, and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid-state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).
Distributed numerical controllers
NASA Astrophysics Data System (ADS)
Orban, Peter E.
2001-12-01
While the basic principles of Numerical Controllers (NCs) have not changed much over the years, their implementation has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers, which promises robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed: module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed, and various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that reuses some real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The uniqueness of the approach lies in providing a "plug and play" environment for various system components that run at various data rates (Hz), along with the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and by a distributed computing environment that scales to the control needs, providing self-contained computing, data-logging, and control function modules that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.
A scalable healthcare information system based on a service-oriented architecture.
Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei
2011-06-01
Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) built on the HL7 service standard. The proposed architecture focuses on system scalability in terms of both hardware and software. Moreover, we describe how scalability is implemented through rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively, while the outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that the proposed SOA HIS can deliver system scalability and sustainability in a highly demanding healthcare information system.
Kosa, Gergely; Vuoristo, Kiira S; Horn, Svein Jarle; Zimmermann, Boris; Afseth, Nils Kristian; Kohler, Achim; Shapaval, Volha
2018-06-01
Recent developments in molecular biology and metabolic engineering have resulted in a large increase in the number of strains that need to be tested, positioning high-throughput screening of microorganisms as an important step in bioprocess development. Scalability is crucial for performing reliable screening of microorganisms. Most scalability studies from microplate screening systems to controlled stirred-tank bioreactors have so far been performed with unicellular microorganisms. We have compared cultivation of industrially relevant oleaginous filamentous fungi and a microalga in a Duetz-microtiter plate system to benchtop and pre-pilot bioreactors. Maximal glucose consumption rate, biomass concentration, lipid content of the biomass, and biomass and lipid yield values showed good scalability for the filamentous fungi Mucor circinelloides (less than 20% differences) and Mortierella alpina (less than 30% differences). Maximal glucose consumption and biomass production rates were identical for Crypthecodinium cohnii in the microtiter plate and the benchtop bioreactor. Most likely due to the shear-stress sensitivity of this microalga in a stirred bioreactor, biomass concentration and lipid content of the biomass were significantly higher in the microtiter plate system than in the benchtop bioreactor. Still, the fermentation results obtained in the Duetz-microtiter plate system for Crypthecodinium cohnii are encouraging compared to what has been reported in the literature. Good reproducibility (coefficient of variation less than 15% for biomass growth, glucose consumption, lipid content, and pH) was achieved in the Duetz-microtiter plate system for Mucor circinelloides and Crypthecodinium cohnii; the reproducibility of Mortierella alpina cultivation might be improved by inoculation optimization. In conclusion, we have demonstrated the suitability of the Duetz-microtiter plate system for reproducible, scalable, and cost-efficient high-throughput screening of oleaginous microorganisms.
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, with the data deluge from pervasive metering of energy grids, and with the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs): systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, that can be extremely secure and resilient (self-healing), and that self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, the paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary.
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
NASA Technical Reports Server (NTRS)
Huang, Adam
2016-01-01
The goal of the Solid State Inflation Balloon Active Deorbiter project is to develop and demonstrate a scalable, simple, reliable, and low-cost active deorbiting system capable of controlling the downrange point of impact for the full-range of small satellites from 1 kg to 180 kg. The key enabling technology being developed is the Solid State Gas Generator (SSGG) chip, generating pure nitrogen gas from sodium azide (NaN3) micro-crystals. Coupled with a metalized nonelastic drag balloon, the complete Solid State Inflation Balloon (SSIB) system is capable of repeated inflation/deflation cycles. The SSGG minimizes size, weight, electrical power, and cost when compared to the current state of the art.
Semantically Enhanced Online Configuration of Feedback Control Schemes.
Milis, Georgios M; Panayiotou, Christos G; Polycarpou, Marios M
2018-03-01
Recent progress toward the realization of the "Internet of Things" has improved the ability of physical and soft/cyber entities to operate effectively within large-scale, heterogeneous systems. It is important that such capacity be accompanied by feedback control capabilities sufficient to ensure that the overall systems behave according to their specifications and meet their functional objectives. To achieve this, such systems require new architectures that facilitate the online deployment, composition, interoperability, and scalability of control system components. Most current control systems lack scalability and interoperability because their design is based on a fixed configuration of specific components, with knowledge of their individual characteristics only implicitly passed through the design. This paper addresses the need for flexibility when replacing components or installing new components, which might occur when an existing component is upgraded or when a new application requires a new component, without the need to readjust or redesign the overall system. A semantically enhanced feedback control architecture is introduced for a class of systems, aimed at accommodating new components into a closed-loop control framework by exploiting the semantic inference capabilities of an ontology-based knowledge model. This architecture supports continuous operation of the control system, a crucial property for large-scale systems, for which interruptions have a negative impact on key performance metrics that may include human comfort and welfare or economic costs. A case-study example from the smart buildings domain is used to illustrate the proposed architecture and semantic inference mechanisms.
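A toy illustration (not the paper's ontology machinery) of the central idea: a control loop binds to a newly installed component by matching advertised semantics rather than a hard-coded identity, so a replacement sensor slots in without redesigning the loop. All names and properties below are hypothetical.

```python
# Registry of components, each advertising the quantity it provides.
# Identifiers and properties are illustrative, not from the paper.
components = [
    {"id": "sensor_A12", "provides": "air_temperature", "unit": "celsius"},
    {"id": "sensor_B07", "provides": "co2_concentration", "unit": "ppm"},
]

def bind_input(required_quantity, registry):
    """Return the id of the first component whose advertised semantics satisfy the need."""
    for comp in registry:
        if comp["provides"] == required_quantity:
            return comp["id"]
    raise LookupError(f"no component provides {required_quantity!r}")

# A thermostat loop needing a temperature measurement binds to whichever
# component advertises it, without being designed around that specific sensor.
print(bind_input("air_temperature", components))  # sensor_A12
```

In the paper this matching is performed by subsumption-style inference over an ontology; the dictionary lookup above only conveys the decoupling it buys.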
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and in developing a large clinical terminology now in production use at Kaiser Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
A scalable and flexible hybrid energy storage system design and implementation
NASA Astrophysics Data System (ADS)
Kim, Younghyun; Koh, Jason; Xie, Qing; Wang, Yanzhi; Chang, Naehyuck; Pedram, Massoud
2014-06-01
Energy storage systems (ESSs) are becoming one of the most important components that noticeably change overall system performance in various applications, ranging from the power grid infrastructure to electric vehicles (EVs) and portable electronics. However, a homogeneous ESS is subject to limited characteristics in terms of cost, efficiency, lifetime, etc., imposed by the energy storage technology that comprises it. Hybrid ESSs (HESSs), on the other hand, are a viable solution for a practical ESS with currently available technologies, as they have the potential to overcome such limitations by exploiting only the advantages of heterogeneous energy storage technologies while hiding their drawbacks. However, the HESS concept mandates sophisticated design and control to realize these benefits in practice. The HESS architecture should provide controllability of many parts that are often fixed in a homogeneous ESS, and novel management policies should be able to utilize these control features. This paper introduces a complete design practice of a HESS prototype to demonstrate scalability, flexibility, and energy efficiency. It is composed of three heterogeneous energy storage elements: lead-acid batteries, lithium-ion batteries, and supercapacitors. We demonstrate a novel system control methodology and enhanced energy efficiency through this design practice.
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, and ease of use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB LMI toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with respect to network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
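The paper's full LMI conditions are solved in MATLAB. As a minimal illustration of the underlying Lyapunov machinery only (not the paper's actual distributed conditions), one can certify stability of a single node's dynamics by solving a Lyapunov equation; the node matrix below is a hypothetical example, and the SciPy call is one standard way to solve such equations:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable node dynamics x' = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Solve the Lyapunov equation A^T P + P A = -Q with Q = I.
# solve_continuous_lyapunov(a, q) solves a x + x a^H = q, so pass a = A^T, q = -I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# A positive-definite P certifies stability of this node's dynamics.
assert np.all(np.linalg.eigvalsh(P) > 0)
```

In the distributed setting of the paper, such node-level certificates are replaced by coupled LMI conditions whose solution can itself be spread over the networked processors.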
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie
2008-10-01
During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, rapid development of planning business, and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system. In response to the status and problems of urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, scalability of the DLL-based architecture, flexibility across GIS and database platforms, and scalability of data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, meeting the requirements of all levels of administrative divisions and adapting to ever-growing changes in the urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.
Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren
2018-04-16
Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 µs control plane latency for data-center and high performance computing platforms.
Development of a scalable generic platform for adaptive optics real time control
NASA Astrophysics Data System (ADS)
Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar
2015-06-01
The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the Deformable Mirror (DM) control voltages are usually generated from the Wavefront Sensor (WFS) measurements by multiplying the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure, such as adaptive optics control. The target of the current project is to generate a signal for real-time wavefront correction from the signals coming from a wavefront sensor, with the system flexible enough to accommodate all current wavefront sensing techniques as well as the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (such as Ethernet, USB, and IEEE 1394) for transmitting data to and from the FPGA device, thus providing a more flexible platform for adaptive optics control. Preliminary simulation results for the formulation of the platform and a design of a fully scalable slope computer are presented.
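The slope-to-voltage step described above is a single matrix-vector multiply, which is exactly the structure FPGAs parallelize well. A minimal NumPy sketch of that step (dimensions and the random reconstructor matrix are purely illustrative, not from the project):

```python
import numpy as np

# Illustrative dimensions: a Shack-Hartmann-style sensor producing 128 slope
# measurements (x and y per subaperture) driving a 97-actuator deformable mirror.
n_slopes, n_act = 128, 97

rng = np.random.default_rng(0)
R = rng.standard_normal((n_act, n_slopes)) * 1e-2  # stand-in for the precomputed reconstructor
slopes = rng.standard_normal(n_slopes)             # one WFS frame of slope measurements

# Core real-time operation: DM voltage vector = reconstructor matrix x slope vector.
voltages = R @ slopes
assert voltages.shape == (n_act,)
```

On an FPGA this product is unrolled across hundreds of hard multipliers, one row (or partial row) of R per multiplier bank, so every frame completes in a fixed, short number of clock cycles.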
Scalable quantum memory in the ultrastrong coupling regime.
Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C
2015-03-02
Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performance.
Unequal error control scheme for dimmable visible light communication systems
NASA Astrophysics Data System (ADS)
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery compared to the equal error control (EEC) scheme for different dimming target values, and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
A TCP/IP framework for ethernet-based measurement, control and experiment data distribution
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Minny, J.
2010-11-01
A complete, modular, and scalable TCP/IP-based scientific instrument control and data distribution system has been designed and realized. The system features an IEEE 802.3 compliant 10 Mbps Medium Access Controller (MAC) and Physical Layer Device suitable for full-duplex monitoring and control of various physically widespread measurement transducers in the presence of a local network infrastructure. The cumbersomeness of exchanging and synchronizing data between the various transducer units using physical storage media led to the choice of TCP/IP as a logical alternative. The system and methods developed are scalable for broader usage over the Internet. The system comprises PIC18F2620- and ENC28J60-based hardware and software components written in the C, Java/JavaScript, and Visual Basic .NET programming languages for event-level monitoring and browser user interfaces, respectively. The system exchanges data with the host network through IPv4 packets requested and received on an HTTP page. It also responds to ICMP echo, UDP, and ARP requests through a user-selectable integrated DHCP and static IPv4 address allocation scheme. The round-trip time, throughput, and polling frequency are estimated and reported. A typical application to temperature monitoring and logging is also presented.
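The host side of the HTTP-page data exchange described above amounts to issuing a GET request and parsing the returned sample. A hedged sketch of that exchange (the URI and the plain-number payload format are hypothetical; the actual firmware serves its own page layout), with a canned response standing in for the device:

```python
def build_poll_request(host: str, uri: str = "/") -> bytes:
    """Build a minimal HTTP/1.0 GET request such a microcontroller could serve."""
    return f"GET {uri} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii")

def parse_temperature(response: bytes) -> float:
    """Extract one temperature sample from a plain-text HTTP response body."""
    header, _, body = response.partition(b"\r\n\r\n")
    assert header.startswith(b"HTTP/1.0 200"), "device did not return 200 OK"
    return float(body.strip())

# Canned response standing in for the ENC28J60-served page; payload format is assumed.
canned = b"HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n23.5\n"
print(parse_temperature(canned))  # 23.5
```

In a live setup the request bytes would go over a plain TCP socket to the device's IPv4 address, and the polling frequency reported in the paper bounds how often this round trip can be repeated.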
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana
2010-09-21
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-06-25
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-12-03
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Zhou, Weizheng; Tong, Gangsheng; Wang, Dali; Zhu, Bangshang; Ren, Yu; Butler, Michael; Pelan, Eddie; Yan, Deyue; Zhu, Xinyuan; Stoyanov, Simeon D
2016-04-06
Hierarchical porous structures are ubiquitous in biological organisms and inorganic systems. Although such structures have been replicated, designed, and fabricated, they are often inferior to naturally occurring analogues. Apart from the complexity and multiple functionalities developed by the biological systems, the controllable and scalable production of hierarchically porous structures and building blocks remains a technological challenge. Herein, a facile and scalable approach is developed to fabricate hierarchical hollow spheres with integrated micro-, meso-, and macropores ranging from 1 nm to 100 μm (spanning five orders of magnitude). (Macro)molecules, micro-rods (which play a key role for the creation of robust capsules), and emulsion droplets have been successfully employed as multiple length scale templates, allowing the creation of hierarchical porous macrospheres. Thanks to their specific mechanical strength, these hierarchical porous spheres could be incorporated and assembled as higher level building blocks in various novel materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
FPGA implementation of a configurable neuromorphic CPG-based locomotion controller.
Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar
2013-09-01
Neuromorphic engineering is a discipline devoted to the design and development of computational hardware that mimics the characteristics and capabilities of neuro-biological systems. In recent years, neuromorphic hardware systems have been implemented using a hybrid approach incorporating digital hardware so as to provide flexibility and scalability at the cost of power efficiency and some biological realism. This paper proposes an FPGA-based neuromorphic-like embedded system on a chip to generate locomotion patterns of periodic rhythmic movements inspired by Central Pattern Generators (CPGs). The proposed implementation follows a top-down approach where modularity and hierarchy are two desirable features. The locomotion controller is based on CPG models to produce rhythmic locomotion patterns or gaits for legged robots such as quadrupeds and hexapods. The architecture is configurable and scalable for robots with either different morphologies or different degrees of freedom (DOFs). Experiments performed on a real robot are presented and discussed. The obtained results demonstrate that the CPG-based controller provides the necessary flexibility to generate different rhythmic patterns at run-time suitable for adaptable locomotion. Copyright © 2013 Elsevier Ltd. All rights reserved.
The Simulation of Real-time Scalable Coherent Interface
NASA Technical Reports Server (NTRS)
Li, Qiang; Grant, Terry; Grover, Radhika S.
1997-01-01
Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) is a high-performance interconnect for shared-memory multiprocessor systems. In this project we investigate an SCI real-time protocol (RTSCI1) using directed flow-control symbols. We studied the issues of efficient generation of control symbols and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both the SCI and flow-control protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections elaborate on the workload, results, and conclusions. Appended to the report is a description of SES/Workbench, the tool used in our simulation, and internal details of our implementation of the protocol.
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase the scalability and optimize the cost/performance footprint of some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded to Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
Technology for On-Chip Qubit Control with Microfabricated Surface Ion Traps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Highstrete, Clark; Scott, Sean Michael; Nordquist, Christopher D.
2013-11-01
Trapped atomic ions are a leading physical system for quantum information processing. However, scalability and operational fidelity remain limiting technical issues, often associated with optical qubit control. One promising approach is to develop on-chip microwave electronic control of ion qubits based on the atomic hyperfine interaction. This project developed expertise and capabilities at Sandia toward on-chip electronic qubit control in a scalable architecture. The project developed a foundation of laboratory capabilities, including trapping the 171Yb+ hyperfine ion qubit and developing an experimental microwave coherent control capability. Additionally, the project investigated the integration of microwave device elements with surface ion traps utilizing Sandia's state-of-the-art MEMS microfabrication processing. This effort culminated in a device design for a multi-purpose ion trap experimental platform for investigating on-chip microwave qubit control, laying the groundwork for further funded R&D to develop on-chip microwave qubit control in an architecture that is suitable for engineering development.
NASA Astrophysics Data System (ADS)
Oswiecinska, A.; Hibbs, J.; Zajic, I.; Burnham, K. J.
2015-11-01
This paper presents a conceptual control solution for reliable and energy-efficient operation of heating, ventilation and air conditioning (HVAC) systems used in large-volume building applications, e.g. warehouse facilities or exhibition centres. An advanced two-level scalable control solution, designed to extend the capabilities of existing low-level control strategies via a remote internet connection, is presented. The high-level supervisory controller is based on the Model Predictive Control (MPC) architecture, which is the state of the art for indoor climate control systems. The innovative approach benefits from using passive heating and cooling control strategies to reduce the HVAC system's operational costs, while ensuring that the required environmental conditions are met.
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" here signifies applications, such as real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with the analog instrumentation and controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
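The offset-based synchronization scheme described above can be sketched in a few lines. The clock values below are invented for illustration, and the sketch deliberately ignores broadcast propagation delay, which a real system would have to account for.

```python
class Processor:
    """A node with a free-running local clock and a master-clock offset."""
    def __init__(self, local_time):
        self.local_time = local_time   # current local clock reading
        self.offset = 0.0              # (master - local), updated per broadcast

    def on_master_broadcast(self, master_time):
        # Record the offset between the broadcast master time and our clock.
        self.offset = master_time - self.local_time

    def synchronized_time(self):
        return self.local_time + self.offset

procs = [Processor(100.0), Processor(250.5), Processor(-3.2)]
for p in procs:
    p.on_master_broadcast(master_time=1000.0)   # same broadcast reaches all

print([p.synchronized_time() for p in procs])   # all now read the master time
```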
2004-12-01
handling using the X10 home automation protocol. Each 3D graphics client renders its scene according to an assigned virtual camera position. By having...control protocol. DMX is a versatile and robust framework which overcomes limitations of the X10 home automation protocol which we are currently using
Two-dimensional beam steering using a thermo-optic silicon photonic optical phased array
NASA Astrophysics Data System (ADS)
Rabinovich, William S.; Goetz, Peter G.; Pruessner, Marcel W.; Mahon, Rita; Ferraro, Mike S.; Park, Doe; Fleet, Erin; DePrenger, Michael J.
2016-11-01
Many components for free-space optical (FSO) communication systems have shrunken in size over the last decade. However, the steering systems have remained large and power hungry. Nonmechanical beam steering offers a path to reducing the size of these systems. Optical phased arrays can allow integrated beam steering elements. One of the most important aspects of an optical phased array technology is its scalability to a large number of elements. Silicon photonics can potentially offer this scalability using CMOS foundry techniques. A phased array that can steer in two dimensions using the thermo-optic effect is demonstrated. No wavelength tuning of the input laser is needed and the design allows a simple control system with only two inputs. A benchtop FSO link with the phased array in both transmit and receive mode is demonstrated.
Real-Time Optimization in Complex Stochastic Environment
2015-06-24
simpler ones, thus addressing scalability and the limited resources of networked wireless devices. This, however, comes at the expense of increased...Maximization of Wireless Sensor Networks with Non-ideal Batteries”, IEEE Trans. on Control of Network Systems, Vol. 1, 1, pp. 86-98, 2014. [27...C.G., “Optimal Energy-Efficient Downlink Transmission Scheduling for Real-Time Wireless Networks ”, subm. to IEEE Trans. on Control of Network Systems
A New Design Method of Automotive Electronic Real-time Control System
NASA Astrophysics Data System (ADS)
Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo
The structure and functionality of automotive electronic control systems are becoming more and more complex, and the traditional manual-programming development mode can no longer satisfy development needs. To meet the demand for diverse and rapid development of real-time control systems, this paper proposes a new design method for automotive electronic control systems based on Simulink/RTW, combining the model-based design approach with automatic code generation technology. First, algorithms are designed and a control system model is built in Matlab/Simulink; embedded code is then generated automatically by RTW, and the automotive real-time control system is developed in an OSEK/VDX operating system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems, improve the program's portability, reusability and scalability, and has practical value for the development of real-time control systems.
1998 IEEE Aerospace Conference. Proceedings.
NASA Astrophysics Data System (ADS)
The following topics were covered: science frontiers and aerospace; flight systems technologies; spacecraft attitude determination and control; space power systems; smart structures and dynamics; military avionics; electronic packaging; MEMS; hyperspectral remote sensing for GVP; space laser technology; pointing, control, tracking and stabilization technologies; payload support technologies; protection technologies; 21st century space mission management and design; aircraft flight testing; aerospace test and evaluation; small satellites and enabling technologies; systems design optimisation; advanced launch vehicles; GPS applications and technologies; antennas and radar; software and systems engineering; scalable systems; communications; target tracking applications; remote sensing; advanced sensors; and optoelectronics.
Rezaeibagha, Fatemeh; Win, Khin Than; Susilo, Willy
Even though many safeguards and policies for electronic health record (EHR) security have been implemented, barriers to the privacy and security protection of EHR systems persist. This article presents the results of a systematic literature review regarding frequently adopted security and privacy technical features of EHR systems. Our inclusion criteria were full articles that dealt with the security and privacy of technical implementations of EHR systems published in English in peer-reviewed journals and conference proceedings between 1998 and 2013; 55 selected studies were reviewed in detail. We analysed the review results using two International Organization for Standardization (ISO) standards (29100 and 27002) in order to consolidate the study findings. Using this process, we identified 13 features that are essential to security and privacy in EHRs. These included system and application access control, compliance with security requirements, interoperability, integration and sharing, consent and choice mechanism, policies and regulation, applicability and scalability, and cryptography techniques. This review highlights the importance to EHR security and privacy of technical features such as mandated access control policies and consent mechanisms to capture patients' consent, scalability through proper architectures and frameworks, and interoperability of health information systems.
Providing scalable system software for high-end simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.
1997-12-31
Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.
Shen, Yiwen; Hattink, Maarten; Samadi, Payman; ...
2018-04-13
Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. Here, we present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 microsecond control plane latency for data-center and high performance computing platforms.
A scalable and continuous-upgradable optical wireless and wired convergent access network.
Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L
2014-06-02
In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to > 100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and > 100 GHz wireless services in the future. During the upgrade, users only need to upgrade their optical networking unit (ONU). A programmable waveshaper is used to select the suitable optical tones with a wavelength separation equal to the desired mm-wave frequency, while the CO remains intact. The centralized characteristics of the proposed system make it easy to add any new service and end-user, and the centralized control of the wavelengths makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).
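The tone-selection step can be illustrated with a toy calculation: given a comb with a fixed line spacing, pick two lines whose separation equals the desired mm-wave beat frequency. The comb frequencies and spacing below are assumptions for illustration, not values from the paper.

```python
spacing_ghz = 20.0                                       # assumed comb line spacing
comb = [193100.0 + k * spacing_ghz for k in range(12)]   # line frequencies, GHz

def select_tones(comb, target_ghz, spacing):
    """Pick two comb lines whose separation equals the target beat frequency."""
    k = round(target_ghz / spacing)                      # how many lines apart
    if k == 0 or k >= len(comb) or abs(k * spacing - target_ghz) > 1e-9:
        raise ValueError("target frequency not reachable with this comb")
    return comb[0], comb[k]

f1, f2 = select_tones(comb, 100.0, spacing_ghz)          # e.g. a W-band service
print(f2 - f1)                                           # beat note in GHz
```

Upgrading a user from 60 GHz to > 100 GHz service then amounts to reprogramming which pair of tones the waveshaper passes, which is why the CO can remain intact.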
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance and interoperability.
Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems
Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban
2017-01-01
An important way to limit malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to restrict access more tightly, but approaches to do so have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms, and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes simply to display the set of files accessible to a user on a moderately sized system. In our approach, we take these cubic algorithms and make them linear. We do this by reformulating the set-theoretic approach of the NGAC standard into a graph-theoretic approach and then applying standard graph algorithms. We thus can answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear-time graph algorithms. We also provide a default linear-time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but in reality is an automatically generated structure abstracted from the underlying access control graph that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that provides a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach.
This may help transition from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
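A minimal sketch of the graph-theoretic idea: model users, attributes and objects as nodes, and answer a privilege-review query ("which files can this user reach?") with a traversal that visits each node and edge once, hence linear time. The toy policy below is invented, and a real NGAC policy graph also carries operation sets on associations and prohibitions, which this sketch omits.

```python
from collections import deque

# Invented toy policy graph: assignment edges go from a node to its parent
# attributes; `associations` grant an attribute reach to an object attribute;
# `contains` maps each object attribute to the objects assigned under it.
edges = {
    "alice":     ["engineers"],
    "engineers": ["staff"],
}
associations = {
    "engineers": ["project-docs"],
    "staff":     ["handbook"],
}
contains = {
    "project-docs": ["design.md", "specs.md"],
    "handbook":     ["policies.pdf"],
}

def accessible_objects(user):
    """Linear-time privilege review: BFS visits each node and edge once."""
    seen, q, objs = {user}, deque([user]), set()
    while q:
        node = q.popleft()
        for oa in associations.get(node, []):
            objs.update(contains.get(oa, []))
        for parent in edges.get(node, []):
            if parent not in seen:
                seen.add(parent)
                q.append(parent)
    return sorted(objs)

print(accessible_objects("alice"))   # every file alice can reach
```

The cubic-to-linear improvement described above comes from answering such queries with one traversal of the policy graph instead of materializing set products.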
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, which is a significant attribute of a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and the computation complexity of the routing table update procedure through a simulation study.
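An illustrative sketch of a hottest-request-first RWA heuristic: sort requests by demand intensity, route each on a shortest path, then assign the first wavelength that is free on every hop (first-fit). The ring topology and requests are invented, and the paper's actual algorithms and tie-breaking rules differ in detail.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra with unit edge weights; returns the list of edges on the path."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for v in graph[u]:
            if v not in dist or d + 1 < dist[v]:
                dist[v], prev[v] = d + 1, u
                heapq.heappush(pq, (d + 1, v))
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return path[::-1]

def rwa(graph, requests, n_wavelengths=4):
    """requests: (intensity, src, dst) tuples; returns {(src, dst): wavelength}.
    Requests that cannot be served within n_wavelengths are silently dropped."""
    used, assignment = {}, {}
    for intensity, src, dst in sorted(requests, reverse=True):  # hottest first
        path = shortest_path(graph, src, dst)
        for w in range(n_wavelengths):                          # first-fit
            if all((u, v, w) not in used for u, v in path):
                for u, v in path:
                    used[(u, v, w)] = True
                assignment[(src, dst)] = w
                break
    return assignment

ring = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
print(rwa(ring, [(5, "A", "C"), (3, "A", "B"), (1, "B", "C")]))
```

Processing hot requests first lets them claim the lowest wavelengths on the shortest paths, which is the intuition behind the throughput advantage reported above.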
Scalable Deployment of Advanced Building Energy Management Systems
2013-06-01
Building Automation and Control Network BDAS Building Data Acquisition System BEM building energy model BIM building information modeling BMS...A prototype toolkit to seamlessly and automatically transfer a Building Information Model (BIM) to a Building Energy Model (BEM) has been...circumvent the need to manually construct and maintain a detailed building energy simulation model. This detailed
Multiple Flow Loop SCADA System Implemented on the Production Prototype Loop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baily, Scott A.; Dalmas, Dale Allen; Wheat, Robert Mitchell
2015-11-16
The following report covers FY15 activities to develop a supervisory control and data acquisition (SCADA) system for the Northstar Moly99 production prototype gas flow loop. The goal of this effort is to expand the existing system to include a second flow loop with a larger production-sized blower. Besides testing the larger blower, this system will demonstrate the scalability of our solution to multiple flow loops.
Freezable Radiator Coupon Testing and Full Scale Radiator Design
NASA Technical Reports Server (NTRS)
Lillibridge, Sean T.; Guinn, John; Cognata, Thomas; Navarro, Moses
2009-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale its total heat rejection capability as a function of the thermal environment and the flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and of different thermal loads during different mission phases. However, freezing and thawing (recovering) a radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. This paper summarizes tests on three test articles that were performed to further empirically quantify the behavior of a simple freezable radiator, and the culmination of those tests into a full-scale design. Each test article explored the bounds of freezing and recovery behavior, as well as providing thermo-physical data on the working fluid, a 50-50 mixture of DowFrost HD and water. These results were then used to develop a correlated thermal model in Thermal Desktop, which could be used to model the behavior of a full-scale thermal control system for a lunar mission. The final design of a thermal control system for a lunar mission is also documented in this paper.
BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing
NASA Astrophysics Data System (ADS)
Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie
Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth in users, services, education contents and resources, E-Learning systems are facing challenges in optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework solves the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
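The basic domain decomposition idea can be seen in miniature with an alternating Schwarz iteration on a deterministic 1D Poisson problem, -u'' = 1 on (0, 1) with u(0) = u(1) = 0, split into two overlapping subdomains solved in turn. The grid size and overlap are arbitrary choices for illustration; the paper's solvers operate on stochastic systems with preconditioned Krylov methods, which this sketch does not attempt to model.

```python
def solve_poisson(n, h, left, right):
    """Direct tridiagonal solve (Thomas algorithm) of -u'' = 1 on n interior
    points with Dirichlet boundary values `left` and `right`."""
    b = [2.0] * n                     # main diagonal; off-diagonals are -1
    d = [h * h] * n                   # right-hand side for f = 1
    d[0] += left
    d[-1] += right
    for i in range(1, n):             # forward elimination
        m = -1.0 / b[i - 1]           # multiplier a[i] / b[i-1]
        b[i] += m                     # b[i] -= m * c[i-1], with c[i-1] = -1
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        u[i] = (d[i] + u[i + 1]) / b[i]
    return u

N, h = 9, 0.1                         # interior points of the global domain
u = [0.0] * N
for _ in range(30):                   # alternating Schwarz sweeps
    u[0:6] = solve_poisson(6, h, 0.0, u[6])    # left subdomain, overlap BC
    u[3:9] = solve_poisson(6, h, u[2], 0.0)    # right subdomain, overlap BC
print(round(u[4], 6))                 # x = 0.5; exact solution x(1-x)/2 = 0.125
```

Each subdomain solve uses the other subdomain's current values at its artificial boundary; with a healthy overlap the iteration contracts quickly to the global discrete solution, which is what makes the approach attractive to parallelize.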
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
NASA Astrophysics Data System (ADS)
Wang, Shuguang; Zhou, Tong; Li, Dehui; Zhong, Zhenyang
2016-06-01
Scalable arrays of ordered nano-pillars with precisely controllable quantum nanostructures (QNs) are ideal candidates for exploring the fundamental features of cavity quantum electrodynamics. They also have great potential for applications in innovative nano-optoelectronic devices for future quantum communication and integrated photonic circuits. Here, we present a synthesis of such a hybrid system combining nanosphere lithography with self-assembly during heteroepitaxy. The precise positioning and controllable evolution of self-assembled Ge QNs, including the quantum dot necklace (QDN), QD molecule (QDM) and quantum ring (QR), on Si nano-pillars are readily achieved. Considering the strain relaxation and the non-uniform Ge growth due to the thickness-dependent and anisotropic surface diffusion of adatoms on the pillars, a comprehensive scenario of Ge growth on Si pillars is discovered. It clarifies the inherent mechanism underlying the controllable growth of the QNs on the pillar. Moreover, it inspires a deliberate two-step growth procedure to engineer controllable QNs on the pillar. Our results pave a promising avenue to the achievement of the desired nano-pillar-QN system that facilitates strong light-matter interaction due to both spectral and spatial coupling between the QNs and the cavity modes of a single pillar and the periodic pillars.
Quantitative Biofractal Feedback Part II ’Devices, Scalability & Robust Control’
2008-05-01
in the modelling of proton exchange membrane fuel cells ( PEMFC ) may work as a powerful tool in the development and widespread testing of alternative...energy sources in the next decade [9], where biofractal controllers will be used to control these complex systems. The dynamic model of PEMFC , is...dynamic response of the PEMFC . In the Iftukhar model, the fuel cell is represented by an equivalent circuit, whose components are identified with
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quintana, John P.
This paper reports on progress toward creating semi-autonomous motion control platforms for beamline applications using the iRobot Create® platform. The goal is to create beamline research instrumentation whose motion paths are based on the local environment rather than positions commanded from a control system, that has low integration costs, and that is also scalable and easily maintainable.
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
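Generic full-search block matching with the sum of absolute differences (SAD) is the kind of workload a motion estimation PE array parallelizes: every candidate displacement can be evaluated by an independent processing element. The frames, block size and search range below are toy values, and the paper's architecture details (PE rings, memory interleaving) are not modeled.

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(bs) for i in range(bs))

def best_motion_vector(cur, ref, bx, by, bs=2, search=1):
    """Full search: evaluate every in-bounds displacement, keep the best."""
    candidates = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if (0 <= bx + dx <= len(ref[0]) - bs
                    and 0 <= by + dy <= len(ref) - bs):
                candidates.append((sad(cur, ref, bx, by, dx, dy, bs), (dx, dy)))
    return min(candidates)[1]

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
cur = [[9, 8, 0, 0],          # the 2x2 block moved up-left by one pixel
       [7, 6, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(best_motion_vector(cur, ref, bx=0, by=0))
```

Because each (dx, dy) candidate is independent, the number of PEs evaluating them in parallel can scale with the search range, which is the scalability knob the abstract describes.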
Scalable Quantum Networks for Distributed Computing and Sensing
2016-04-01
probabilistic measurement , so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single...Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements , which are inherently probabilistic. To meet...AFRL-AFOSR-UK-TR-2016-0007 Scalable Quantum Networks for Distributed Computing and Sensing Ian Walmsley THE UNIVERSITY OF OXFORD Final Report 04/01
MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.
Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui
A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as an experiment on real robots, validate the effectiveness of this work.
A federated capability-based access control mechanism for internet of things (IoTs)
NASA Astrophysics Data System (ADS)
Xu, Ronghua; Chen, Yu; Blasch, Erik; Chen, Genshe
2018-05-01
The prevalence of the Internet of Things (IoT) allows heterogeneous embedded smart devices to collaboratively provide intelligent services with or without human intervention. While enabling large-scale IoT-based applications like Smart Grid and Smart Cities, IoT also raises concerns about privacy and security. Among the top security challenges that IoTs face, access authorization is critical in resource and information protection. Traditional access control approaches, like Access Control Lists (ACL), Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC), are not able to provide scalable, manageable and efficient mechanisms that meet the requirements of IoT systems. The extraordinarily large number of nodes, their heterogeneity and their dynamicity necessitate more fine-grained, lightweight mechanisms for IoT devices. In this paper, a federated capability-based access control (FedCAC) framework is proposed to enable effective access control processes for devices, services and information in large-scale IoT systems. The federated capability delegation mechanism, based on a propagation tree, is illustrated for access permission propagation. An identity-based capability token management strategy is presented, which involves registering, propagating and revoking access authorization. By delegating the centralized authorization decision-making policy to a local domain delegator, the access authorization process is conducted locally on the service provider, integrating situational awareness (SAW) and customized contextual conditions. Implemented and tested on both resource-constrained devices, like smart sensors and the Raspberry Pi, and non-resource-constrained devices, like laptops and smartphones, our experimental results demonstrate the feasibility of the proposed FedCAC approach in offering a scalable, lightweight and fine-grained access control solution to IoT systems connected to a system network.
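A hedged sketch of capability-token checking with a delegation chain: a token names a device, a set of permitted actions and the chain of delegators that issued it; the service provider validates the chain against its local delegation tree and a contextual condition before granting access. The token fields, names and toy policy below are all invented for illustration and are not the paper's actual data model.

```python
delegation_tree = {               # delegator -> parties it may delegate to
    "root-authority": {"domain-delegator"},
    "domain-delegator": {"sensor-gw"},
}

def chain_valid(chain):
    """Every hop of the delegation chain must follow an edge of the tree."""
    return all(child in delegation_tree.get(parent, set())
               for parent, child in zip(chain, chain[1:]))

def check_access(token, action, context):
    # Chain validity, permitted action, and a contextual condition must all hold.
    return (chain_valid(token["chain"])
            and action in token["actions"]
            and context.get("hour", 0) in token["allowed_hours"])

token = {
    "device": "thermo-01",
    "actions": {"read"},
    "chain": ["root-authority", "domain-delegator", "sensor-gw"],
    "allowed_hours": range(8, 18),   # contextual condition: office hours only
}
print(check_access(token, "read", {"hour": 10}))   # permitted
print(check_access(token, "write", {"hour": 10}))  # action not in capability
```

Because the delegation tree and the contextual check live on the local domain, the decision is made at the service provider without a round trip to a central authority, which is the locality the abstract emphasizes.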
Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.
Lu, Yang; Lee, Jong Ho; Chen, I-Wei
2017-08-31
Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to rely on when designing scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro- and nanodevices in both filamentary RRAM and nanometallic RRAM; the latter switches uniformly and does not require a forming process. As a result, area scalability can be achieved under a device-area-proportional current compliance for the low-resistance state of the filamentary RRAM, and for both the low- and high-resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.
Distributed controller clustering in software defined networks
Gani, Abdullah; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces average latency from 8.1% to 1.6% and packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on the HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. The proposed method also shows reasonable CPU utilization. Furthermore, it makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when a controller fails. The paper is a potential contribution toward addressing the issues of reliability, scalability, fault tolerance, and interoperability. PMID:28384312
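The failover behavior claimed above (continuous operation when a controller in the cluster fails) can be illustrated with a toy model: switches are assigned to the least-loaded live controller, and a failed controller's switches migrate to surviving peers. This is a generic sketch of the idea, not the paper's mechanism; the names are illustrative.

```python
class ControllerCluster:
    """Toy clustered-controller model: each switch has a primary
    controller; on a controller failure, its switches are re-homed
    onto the remaining live controllers."""
    def __init__(self, controllers):
        self.live = set(controllers)
        self.assignment = {}   # switch -> controller

    def attach(self, switch):
        # Balance by current load: the controller with the fewest
        # attached switches wins.
        load = {c: 0 for c in self.live}
        for c in self.assignment.values():
            if c in load:
                load[c] += 1
        target = min(self.live, key=lambda c: load[c])
        self.assignment[switch] = target
        return target

    def fail(self, controller):
        self.live.discard(controller)
        orphans = [s for s, c in self.assignment.items() if c == controller]
        for s in orphans:
            self.attach(s)   # re-home orphaned switches: no downtime
        return orphans
```

Real controller clusters (e.g. ONOS) additionally replicate network state between members, which this sketch omits.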
Stable scalable control of soliton propagation in broadband nonlinear optical waveguides
NASA Astrophysics Data System (ADS)
Peleg, Avner; Nguyen, Quan M.; Huynh, Toan T.
2017-02-01
We develop a method for achieving scalable transmission stabilization and switching of N colliding soliton sequences in optical waveguides with broadband delayed Raman response and narrowband nonlinear gain-loss. We show that dynamics of soliton amplitudes in N-sequence transmission is described by a generalized N-dimensional predator-prey model. Stability and bifurcation analysis for the predator-prey model are used to obtain simple conditions on the physical parameters for robust transmission stabilization as well as on-off and off-on switching of M out of N soliton sequences. Numerical simulations for single-waveguide transmission with a system of N coupled nonlinear Schrödinger equations with 2 ≤ N ≤ 4 show excellent agreement with the predator-prey model's predictions and stable propagation over significantly larger distances compared with other broadband nonlinear single-waveguide systems. Moreover, stable on-off and off-on switching of multiple soliton sequences and stable multiple transmission switching events are demonstrated by the simulations. We discuss the reasons for the robustness and scalability of transmission stabilization and switching in waveguides with broadband delayed Raman response and narrowband nonlinear gain-loss, and explain their advantages compared with other broadband nonlinear waveguides.
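The amplitude dynamics invoked above can be made concrete with a generic N-dimensional Lotka-Volterra (predator-prey) system integrated by forward Euler. The specific coefficients governing the soliton amplitudes are problem-dependent; the form below is an assumed generic stand-in, not the paper's exact model.

```python
import numpy as np

def predator_prey_step(eta, a, B, dt):
    """One forward-Euler step of a generalized N-dimensional
    Lotka-Volterra model: d(eta_j)/dt = eta_j * (a_j - sum_k B[j,k] eta_k)."""
    return eta + dt * eta * (a - B @ eta)

def evolve(eta0, a, B, dt=1e-3, steps=20000):
    """Integrate the model and return the final amplitude vector."""
    eta = np.array(eta0, float)
    a, B = np.asarray(a, float), np.asarray(B, float)
    for _ in range(steps):
        eta = predator_prey_step(eta, a, B, dt)
    return eta
```

For a symmetric competitive choice of B the interior equilibrium (the solution of B eta* = a) is stable, mirroring the kind of stabilized multi-sequence transmission discussed in the abstract.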
Embedded parallel processing based ground control systems for small satellite telemetry
NASA Technical Reports Server (NTRS)
Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.
1994-01-01
The use of networked terminals that employ embedded processing techniques results in totally integrated, flexible, high-speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOCs). The synergy of these terminals, coupled with the capability of each terminal to receive incoming data, allows any defined display to be viewed on any terminal from the start of data acquisition. There is no single point of failure (other than the network input), as exists in configurations where all input data pass through a single front-end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies discussed here, resulting in a multimission configuration of low-cost, scalable hardware and software that can be run by one flight operations team with low risk.
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.
Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani
2017-04-01
Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power-efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.
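The selection step can be illustrated geometrically: given a pointing ray and candidate device positions, pick the device most nearly aligned with the ray. This is a simplified stand-in for SeleCon's actual UWB-ranging-based classifier; the function and the example device layout are invented for illustration.

```python
import math

def select_device(origin, direction, devices):
    """Choose the device best aligned with the pointing ray.
    devices maps id -> (x, y) position; alignment is scored by the
    cosine between the ray and the vector to each device."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    best, best_cos = None, -2.0
    for dev, (x, y) in devices.items():
        vx, vy = x - origin[0], y - origin[1]
        vnorm = math.hypot(vx, vy) or 1e-9
        cos = (vx * dx + vy * dy) / vnorm
        if cos > best_cos:
            best, best_cos = dev, cos
    return best
```

In the real system the pointing direction is inferred from the smartwatch's inertial sensors and device distances from UWB ranging, but the decision rule reduces to a nearest-to-the-ray choice of this kind.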
SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures
Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani
2018-01-01
Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power-efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary. PMID:29683151
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high-resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high-speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd-generation Intel Core 4.1 GHz processor, 32 GB of memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high-resolution still and moving graphics in a variety of formats on tiled display walls of any size, providing a common environment, or framework, that enables users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd-generation Intel Core 4.1 GHz processor, 32 GB of memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high-resolution still and moving graphics in a variety of formats on tiled display walls of any size, providing a common environment, or framework, that enables users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
A molecular quantum spin network controlled by a single qubit.
Schlipf, Lukas; Oeckinghaus, Thomas; Xu, Kebiao; Dasari, Durga Bhaktavatsala Rao; Zappe, Andrea; de Oliveira, Felipe Fávaro; Kern, Bastian; Azarkh, Mykhailo; Drescher, Malte; Ternes, Markus; Kern, Klaus; Wrachtrup, Jörg; Finkler, Amit
2017-08-01
Scalable quantum technologies require an unprecedented combination of precision and complexity for designing stable structures of well-controllable quantum systems on the nanoscale. It is a challenging task to find a suitable elementary building block from which a quantum network can be composed in a scalable way. We present the working principle of such a basic unit, engineered using molecular chemistry, whose collective control and readout are executed using a nitrogen vacancy (NV) center in diamond. The basic unit we investigate is a synthetic polyproline with electron spins localized on attached molecular side groups separated by a few nanometers. We demonstrate the collective readout and coherent manipulation of very few (≤ 6) of these S = 1/2 electronic spin systems and access their direct dipolar coupling tensor. Our results show that it is feasible to use spin-labeled peptides as a resource for a molecular qubit-based network, while at the same time providing simple optical readout of single quantum states through NV magnetometry. This work lays the foundation for building arbitrary quantum networks using well-established chemistry methods, which has many applications ranging from mapping distances in single molecules to quantum information processing.
Conceptual Architecture for Obtaining Cyber Situational Awareness
2014-06-01
• … E. Understanding command and control. Washington, D.C.: CCRP Publication Series, 2006. 255 p. ISBN 1-893723-17-8.
• [10] SKYBOX SECURITY. Developer's Guide. Skybox View. Manual, Version 11. 2010.
• [11] SCALABLE Network. EXata communications simulation platform. Available: <http://www.scalable
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1,094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
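The caching strategy described above can be sketched in a few lines: checkpoints land on fast node-local storage, are flushed to the shared parallel file system only periodically, and restart prefers the local copy. This is an illustration of the pattern in Python, not the SCR library's actual C API; the class and directory names are invented.

```python
import os, shutil, tempfile

class LocalCheckpointer:
    """Sketch of SCR-style checkpointing: write to node-local cache,
    flush to the (slower, shared) parallel file system occasionally."""
    def __init__(self, cache_dir, pfs_dir, flush_every=10):
        self.cache_dir, self.pfs_dir = cache_dir, pfs_dir
        self.flush_every = flush_every
        self.count = 0
        os.makedirs(cache_dir, exist_ok=True)
        os.makedirs(pfs_dir, exist_ok=True)

    def checkpoint(self, name, data: bytes):
        """Write a checkpoint locally; every flush_every-th one is
        also copied to the shared file system."""
        self.count += 1
        path = os.path.join(self.cache_dir, name)
        with open(path, "wb") as f:
            f.write(data)
        if self.count % self.flush_every == 0:
            shutil.copy(path, os.path.join(self.pfs_dir, name))
        return path

    def restart(self, name) -> bytes:
        # Prefer the fast local cache; fall back to the shared store.
        for d in (self.cache_dir, self.pfs_dir):
            path = os.path.join(d, name)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return f.read()
        raise FileNotFoundError(name)
```

The aggregate-bandwidth win comes from every node writing to its own disk in parallel instead of all nodes contending for one file system; the periodic flush bounds how much work is lost if a node's local storage dies. (`tempfile` is imported only for the usage example.)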
A scalable multi-DLP pico-projector system for virtual reality
NASA Astrophysics Data System (ADS)
Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.
2014-03-01
Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without significantly degrading the visual experience. Additionally, non-planar screen shapes require special optics such as lenses and mirrors, further increasing costs. We propose a low-cost, scalable, flexible and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical screens. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density on arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few overlapping pixels between them. We present results with eight pico-projectors, with 7 lumens (LED) and a DLP 0.17 HVGA chipset.
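Camera-based geometric calibration of the kind FastFusion performs typically reduces to estimating a homography per projector from observed point correspondences. The textbook direct linear transform (DLT) solver below shows that core step; FastFusion's internals are not published here, so treat this as a generic sketch.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst point pairs via
    the DLT: stack two linear constraints per correspondence and take
    the null vector of the resulting system (smallest singular value)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp(H, pt):
    """Apply a homography to one 2D point (homogeneous divide)."""
    x, y, w = H @ [pt[0], pt[1], 1.0]
    return x / w, y / w
```

With four detected correspondences per projector (more in practice, for robustness), each projector's output can be pre-warped so the overlapping images align into one continuous display.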
Layered Architectures for Quantum Computers and Quantum Repeaters
NASA Astrophysics Data System (ADS)
Jones, Nathan C.
This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.
Complete quantum control of exciton qubits bound to isoelectronic centres.
Éthier-Majcher, G; St-Jean, P; Boso, G; Tosi, A; Klem, J F; Francoeur, S
2014-05-30
In recent years, impressive demonstrations related to quantum information processing have been realized. The scalability of quantum interactions between arbitrary qubits within an array remains however a significant hurdle to the practical realization of a quantum computer. Among the proposed ideas to achieve fully scalable quantum processing, the use of photons is appealing because they can mediate long-range quantum interactions and could serve as buses to build quantum networks. Quantum dots or nitrogen-vacancy centres in diamond can be coupled to light, but the former system lacks optical homogeneity while the latter suffers from a low dipole moment, rendering their large-scale interconnection challenging. Here, through the complete quantum control of exciton qubits, we demonstrate that nitrogen isoelectronic centres in GaAs combine both the uniformity and predictability of atomic defects and the dipole moment of semiconductor quantum dots. This establishes isoelectronic centres as a promising platform for quantum information processing.
Design of modular control system for grain dryers
NASA Astrophysics Data System (ADS)
He, Gaoqing; Liu, Yanhua; Zu, Yuan
To effectively control the temperature of the grain drying bin, the grain, and the air outlet, as well as the grain moisture, we designed the MCU-based control system of the 5HCY-35 dryer, which adapts to the drying conditions of all grains, offers high drying efficiency and a long service life, and requires little manual operation. The system includes: a module for constant-temperature and temperature-difference control in the drying bin, constant-temperature control of the heating furnace, on-line moisture testing, variable grain-circulation speed control, and a human-computer interaction interface. Spatial curve simulation, which takes moisture as the control objective, regulates the constant temperature and the temperature difference in the drying bin according to parameters preset by the user or selected from a list, reducing grain cracking and ensuring the seed germination percentage. The system realizes intelligent control of efficient, versatile drying, with good scalability and high quality.
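The constant-temperature module described above can be realized at its simplest as bang-bang (on/off) control with a dead band, which is common on MCU-class dryer controllers. The function below is a generic sketch of that control law, not the 5HCY-35's actual firmware; the setpoint and band values are illustrative.

```python
def hysteresis_control(temp, setpoint, band, heater_on):
    """Bang-bang temperature control with a dead band: switch the
    heater on below (setpoint - band), off above (setpoint + band),
    and hold the current state inside the band to avoid chattering."""
    if temp < setpoint - band:
        return True      # too cold: turn heater on
    if temp > setpoint + band:
        return False     # too hot: turn heater off
    return heater_on     # inside the dead band: keep current state
```

Called once per sampling period with the latest bin temperature, this holds the temperature near the preset value while the moisture-driven supervisory logic adjusts the setpoint itself.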
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
Systems 2020: Strategic Initiative
2010-08-29
…research areas that enable agile, assured, efficient, and scalable systems engineering approaches to support the development of these systems. … To increase development efficiency and ensure flexible solutions in the field, systems engineers need powerful, agile, interoperable, and scalable … design and development will be transformed as a result of Systems 2020, along with complementary enabling acquisition practice improvements initiated in …
Superlinearly scalable noise robustness of redundant coupled dynamical systems.
Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L
2016-03-01
We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if certain conditions are met, and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us understand the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
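The basic effect (coupling redundant noisy units suppresses the noise-induced spread that uncoupled chaotic units accumulate) is easy to reproduce with diffusively coupled logistic maps. The map, coupling scheme, and parameter values below are illustrative choices, not those used in the cited paper.

```python
import random

def coupled_logistic(n, steps=200, r=3.6, eps=0.5, noise=1e-3, seed=1):
    """Evolve n logistic maps, each step first pulling every unit
    toward the ensemble mean (diffusive coupling, strength eps) and
    then applying the noisy chaotic map. eps=0 gives uncoupled units."""
    rng = random.Random(seed)
    xs = [0.3] * n
    for _ in range(steps):
        mean = sum(xs) / n
        xs = [(1 - eps) * x + eps * mean for x in xs]               # couple
        xs = [r * x * (1 - x) + rng.gauss(0, noise) for x in xs]    # map + noise
    return xs

def spread(xs):
    """Worst-case deviation across the ensemble."""
    return max(xs) - min(xs)
```

With eps=0 the positive Lyapunov exponent amplifies the per-step noise until the units decorrelate across the chaotic band; with strong coupling the per-step contraction outweighs the chaotic stretching and the ensemble stays synchronized to within a few noise amplitudes.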
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John M.
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st-century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. 
The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the 'Southwest Effect'). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
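Scale-free networks of the kind invoked above arise from preferential attachment: new nodes link to existing nodes with probability proportional to their degree, producing a few highly connected hubs (the structural analogue of hub airports). The sketch below is a generic Barabási-Albert-style construction for illustration, not a model taken from the paper.

```python
import random

def preferential_attachment(n, m=2, seed=7):
    """Grow a scale-free graph: each new node attaches to m distinct
    existing nodes chosen proportionally to degree. The `repeated`
    list holds one entry per incident edge end, so uniform sampling
    from it is degree-proportional sampling."""
    rng = random.Random(seed)
    edges = []
    repeated = list(range(m))   # m seed nodes
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for t in chosen:
            edges.append((new, t))
            repeated += [new, t]
    return edges
```

In such a graph most nodes keep the minimum degree m while early nodes accumulate many links, which is why hub-and-spoke systems are efficient but fragile at the hubs, and why point-to-point growth changes the scaling behavior.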
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st-century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. 
The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the Southwest Effect). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
Currently, an important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps of NPTool include reuse of control-flow patterns and support for data-flow management.
Validation of a Scalable Solar Sailcraft
NASA Technical Reports Server (NTRS)
Murphy, D. M.
2006-01-01
The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
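The migration policies described above weigh queue availability, machine performance, inter-site bandwidth, and per-job data volume. A minimal sketch of such a decision rule follows; the field names, cost model, and numbers are illustrative assumptions, not the paper's actual algorithms:

```python
# Hypothetical grid job-migration sketch: send a job to the site that
# minimizes estimated turnaround, combining queue wait, relative machine
# speed, and the time to move the job's input/output data over the link.

def estimated_turnaround(job, site):
    """Estimated completion time (seconds) of `job` if run at `site`."""
    transfer_time = (job["data_mb"] * 8) / site["bandwidth_mbps"]
    run_time = job["base_runtime_s"] / site["relative_speed"]
    return site["queue_wait_s"] + transfer_time + run_time

def choose_site(job, sites):
    """Pick the site with the lowest estimated turnaround."""
    return min(sites, key=lambda s: estimated_turnaround(job, s))

job = {"data_mb": 1000, "base_runtime_s": 3600}
sites = [
    {"name": "local",  "queue_wait_s": 7200, "bandwidth_mbps": 10000, "relative_speed": 1.0},
    {"name": "remote", "queue_wait_s": 600,  "bandwidth_mbps": 100,   "relative_speed": 1.5},
]
best = choose_site(job, sites)
```

Here the remote site wins despite its slow link, because the local queue wait dominates; this is the kind of trade-off the paper's migration algorithms evaluate on real workloads.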
Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG
Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.
2015-01-01
We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573
Experimental realization of universal geometric quantum gates with solid-state spins.
Zu, C; Wang, W-B; He, L; Zhang, W-G; Dai, C-Y; Wang, F; Duan, L-M
2014-10-02
Experimental realization of a universal set of quantum logic gates is the central requirement for the implementation of a quantum computer. In an 'all-geometric' approach to quantum computation, the quantum gates are implemented using Berry phases and their non-Abelian extensions, holonomies, from geometric transformation of quantum states in the Hilbert space. Apart from its fundamental interest and rich mathematical structure, the geometric approach has some built-in noise-resilience features. On the experimental side, geometric phases and holonomies have been observed in thermal ensembles of liquid molecules using nuclear magnetic resonance; however, such systems are known to be non-scalable for the purposes of quantum computing. There are proposals to implement geometric quantum computation in scalable experimental platforms such as trapped ions, superconducting quantum bits and quantum dots, and a recent experiment has realized geometric single-bit gates in a superconducting system. Here we report the experimental realization of a universal set of geometric quantum gates using the solid-state spins of diamond nitrogen-vacancy centres. These diamond defects provide a scalable experimental platform with the potential for room-temperature quantum computing, which has attracted strong interest in recent years. Our experiment shows that all-geometric and potentially robust quantum computation can be realized with solid-state spin quantum bits, making use of recent advances in the coherent control of this system.
Medusa: A Scalable MR Console Using USB
Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.
2012-01-01
MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200
Final Report: CNC Micromachines LDRD No.10793
DOE Office of Scientific and Technical Information (OSTI.GOV)
JOKIEL JR., BERNHARD; BENAVIDES, GILBERT L.; BIEG, LOTHAR F.
2003-04-01
The three-year LDRD "CNC Micromachines" was successfully completed at the end of FY02. The project had four major breakthroughs in spatial motion control in MEMS: (1) A unified method for designing scalable planar and spatial on-chip motion control systems was developed. The method relies on the use of parallel kinematic mechanisms (PKMs) that, when properly designed, provide different types of motion on-chip without the need for post-fabrication assembly. (2) A new type of actuator was developed, the linear stepping track drive (LSTD), which provides open-loop linear position control that is scalable in displacement, output force, and step size. Several versions of this actuator were designed, fabricated, and successfully tested. (3) Different versions of XYZ translation-only and PTT motion stages were designed, successfully fabricated, and successfully tested, demonstrating conclusively that on-chip spatial motion control systems are not only possible but a reality. (4) Control algorithms, software, and infrastructure based on MATLAB were created and successfully implemented to drive the XYZ and PTT motion platforms in a controlled manner. The control software is capable of reading an M/G-code machine-tool language file, decoding the instructions, and correctly calculating and applying position and velocity trajectories to the motion device's linear drive inputs to position the device platform along the trajectory specified by the input file. A full and detailed account of design methodology, theory, and experimental results (failures and successes) is provided.
An MPI-based MoSST core dynamics model
NASA Astrophysics Data System (ADS)
Jiang, Weiyuan; Kuang, Weijia
2008-09-01
Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
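The communication trade-off between the two node-node architectures can be illustrated by counting communication rounds in a gather: a "master-slave" (star) pattern serializes at the master, while a "divide-and-conquer" (tree) pattern needs only logarithmically many rounds. A back-of-the-envelope sketch (an illustration of the general principle, not the model's actual MPI code):

```python
import math

def star_gather_steps(n):
    """Master-slave gather: the master receives from each of the n-1
    workers in turn, so communication cost grows linearly with n."""
    return n - 1

def tree_gather_steps(n):
    """Divide-and-conquer gather: halves are combined pairwise, so only
    ceil(log2 n) communication rounds are needed."""
    return math.ceil(math.log2(n))

# 128 nodes, matching the upper end of the paper's PC-cluster scaling test
star = star_gather_steps(128)
tree = tree_gather_steps(128)
```

This is why the paper finds the master-slave architecture easy to implement but not scalable in communication, while the divide-and-conquer architecture scales in both computation and communication.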
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath
The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
GEECS (Generalized Equipment and Experiment Control System)
DOE Office of Scientific and Technical Information (OSTI.GOV)
GONSALVES, ANTHONY; DESHMUKH, AALHAD
2017-01-12
GEECS (Generalized Equipment and Experiment Control System) monitors and controls equipment distributed across a network, performs experiments by scanning input variables, and collects and stores various types of data synchronously from devices. Examples of devices include cameras, motors, and pressure gauges. GEECS is based upon LabVIEW graphical object-oriented programming (GOOP), allowing for a modular and scalable framework. Data for an arbitrary number of variables is published for subscription over TCP. A secondary framework allows easy development of graphical user interfaces for combined control of any available devices on the control system without the need for programming knowledge. This allows for rapid integration of GEECS into a wide variety of systems. A database interface provides for device and process configuration while allowing the user to save large quantities of data to local or network drives.
Disparity: scalable anomaly detection for clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, N.; Bradshaw, R.; Lusk, E.
2008-01-01
In this paper, we describe disparity, a tool that does parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
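As a rough illustration of the kind of statistical data reduction disparity performs, the sketch below flags nodes whose metric deviates strongly from the cluster-wide mean. The metric, threshold, and z-score rule are illustrative assumptions, not disparity's actual method:

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Return indices of values that deviate from the mean by more than
    `threshold` population standard deviations."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Per-node load averages gathered by a reduction; node 5 is misbehaving
loads = [0.50, 0.52, 0.49, 0.51, 0.48, 3.90]
anomalies = find_anomalies(loads)
```

In a real deployment the per-node statistics would be computed locally and combined with scalable reduction operations, so only small summaries travel over the network.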
Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors
NASA Astrophysics Data System (ADS)
Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.
2016-12-01
The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programing and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
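Walsh functions are indeed cheap to synthesize in digital logic, which is the property the controller exploits. A minimal sketch of naturally (Hadamard) ordered Walsh functions via the parity of a bitwise AND follows; note that several orderings (natural, sequency, dyadic) exist, and this picks one convention for illustration:

```python
def walsh(k, n_bits):
    """Return Walsh function w_k in natural (Hadamard) ordering, sampled
    at 2**n_bits points, with values in {+1, -1}.

    The sample at time index t is (-1)**popcount(k & t): each set bit of k
    selects a Rademacher square wave, and the product of those waves is
    just a parity check, which is why an FPGA can generate these signals
    in real time with trivial hardware."""
    n = 1 << n_bits
    return [1 if bin(k & t).count("1") % 2 == 0 else -1 for t in range(n)]
```

Distinct Walsh functions are mutually orthogonal over a full period, which is what makes them a usable functional basis for programming control waveforms.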
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high-performance systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exascale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
Scalable Nanostructured Carbon Electrode Arrays for Enhanced Dopamine Detection.
Demuru, Silvia; Nela, Luca; Marchack, Nathan; Holmes, Steven J; Farmer, Damon B; Tulevski, George S; Lin, Qinghuang; Deligianni, Hariklia
2018-04-27
Dopamine is a neurotransmitter that modulates arousal and motivation in humans and animals. It plays a central role in the brain's "reward" system. Its dysregulation is involved in several debilitating disorders such as addiction, depression, Parkinson's disease, and schizophrenia. Dopamine neurotransmission and its reuptake in extracellular space take place with millisecond temporal and nanometer spatial resolution. Novel nanoscale electrodes with superior sensitivity and improved spatial resolution are needed to gain an improved understanding of dopamine dysregulation. We report on a scalable fabrication of dopamine neurochemical probes made of nanostructured glassy carbon, smaller than any existing dopamine sensor, and of arrays of more than 6000 nanorod probes. We also report on the electrochemical dopamine sensing of the glassy carbon nanorod electrode. Compared with a carbon fiber, the nanostructured glassy carbon nanorods provide about 2× higher sensitivity per unit area for dopamine sensing and more than 5× higher signal per unit area at low concentrations of dopamine, with comparable LOD and time response. These glassy carbon nanorods were fabricated by pyrolysis of a lithographically defined polymeric nanostructure with an industry-standard semiconductor fabrication infrastructure. The scalable fabrication strategy offers the potential to integrate these nanoscale carbon rods with an integrated-circuit control system and with other complementary metal oxide semiconductor (CMOS) compatible sensors.
Coordinated Transformation among Community Colleges Lacking a State System
ERIC Educational Resources Information Center
Russell, James Thad
2016-01-01
Community colleges face many challenges in the face of demands for increased student success. Institutions continually seek scalable interventions and initiatives focused on improving student achievement. Effectively implementing sustainable change that moves the needle of student success remains elusive. Facilitating systemic, scalable change…
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II
2011-09-01
The evaluation considered a touch-based interface for gloved-finger interaction, which had to have larger-than-normal touch-screen buttons for commanding the robot, developed to ensure mission success while maximizing survivability and lethality for the Soldier. Related report: Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR.
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and we offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.
NASA Astrophysics Data System (ADS)
Culp, Robert D.; McQuerry, James P.
1991-07-01
The present conference on guidance and control encompasses advances in guidance, navigation, and control, storyboard displays, approaches to space-borne pointing control, international space programs, recent experiences with systems, and issues regarding navigation in the low-earth-orbit space environment. Specific issues addressed include a scalable architecture for an operational spaceborne autonavigation system, the mitigation of multipath error in GPS-based attitude determination, microgravity flight testing of a laboratory robot, and the application of neural networks. Other issues addressed include image navigation with second-generation Meteosat, Magellan star-scanner experiences, high-precision control systems for telescopes and interferometers, gravitational effects on low-earth orbiters, experimental verification of nanometer-level optical pathlengths, and a flight telerobotic servicer prototype simulator. (For individual items see A93-15577 to A93-15613)
Electrical control of a solid-state flying qubit.
Yamamoto, Michihisa; Takada, Shintaro; Bäuerle, Christopher; Watanabe, Kenta; Wieck, Andreas D; Tarucha, Seigo
2012-03-18
Solid-state approaches to quantum information technology are attractive because they are scalable. The coherent transport of quantum information over large distances is a requirement for any practical quantum computer and has been demonstrated by coupling superconducting qubits to photons. Single electrons have also been transferred between distant quantum dots in times shorter than their spin coherence time. However, until now, there have been no demonstrations of scalable 'flying qubit' architectures (systems in which it is possible to perform quantum operations on qubits while they are being coherently transferred) in solid-state systems. These architectures allow for control over qubit separation and for non-local entanglement, which makes them more amenable to integration and scaling than static qubit approaches. Here, we report the transport and manipulation of qubits over distances of 6 µm within 40 ps, in an Aharonov-Bohm ring connected to two-channel wires that have a tunable tunnel coupling between channels. The flying qubit state is defined by the presence of a travelling electron in either channel of the wire, and can be controlled without a magnetic field. Our device has shorter quantum gates (<1 µm), longer coherence lengths (∼86 µm at 70 mK) and higher operating frequencies (∼100 GHz) than other solid-state implementations of flying qubits.
Entangling spin-spin interactions of ions in individually controlled potential wells
NASA Astrophysics Data System (ADS)
Wilson, Andrew; Colombe, Yves; Brown, Kenton; Knill, Emanuel; Leibfried, Dietrich; Wineland, David
2014-03-01
Physical systems that cannot be modeled with classical computers appear in many different branches of science, including condensed-matter physics, statistical mechanics, high-energy physics, atomic physics and quantum chemistry. Despite impressive progress on the control and manipulation of various quantum systems, implementation of scalable devices for quantum simulation remains a formidable challenge. As one approach to scalability in simulation, here we demonstrate an elementary building-block of a configurable quantum simulator based on atomic ions. Two ions are trapped in separate potential wells that can individually be tailored to emulate a number of different spin-spin couplings mediated by the ions' Coulomb interaction together with classical laser and microwave fields. We demonstrate deterministic tuning of this interaction by independent control of the local wells and emulate a particular spin-spin interaction to entangle the internal states of the two ions with 0.81(2) fidelity. Extension of the building-block demonstrated here to a 2D-network, which ion-trap micro-fabrication processes enable, may provide a new quantum simulator architecture with broad flexibility in designing and scaling the arrangement of ions and their mutual interactions. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), ONR, and the NIST Quantum Information Program.
Scalable and Manageable Storage Systems
2000-12-01
The dissertation develops techniques that enable storage systems to be more cost-effectively scalable. Furthermore, it proposes an approach to ensure automatic load... and addresses three key technical challenges to making storage systems more cost-effectively scalable and manageable.
Temporally Scalable Visual SLAM using a Reduced Pose Graph
2012-05-25
MIT-CSAIL-TR-2012-013, May 25, 2012. Cambridge, MA: www.csail.mit.edu. We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation, unlike previous visual SLAM approaches that use...
The deployment of routing protocols in distributed control plane of SDN.
Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu
2014-01-01
Software-defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to the traditional routing strategies.
Low-cost scalable quartz crystal microbalance array for environmental sensing
NASA Astrophysics Data System (ADS)
Muckley, Eric S.; Anazagasty, Cristain; Jacobs, Christopher B.; Hianik, Tibor; Ivanov, Ilia N.
2016-09-01
Proliferation of environmental sensors for Internet of Things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits performance comparable to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
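The ~ng resolution of a QCM comes from the Sauerbrey relation, which converts the crystal's resonant-frequency shift into deposited mass. A sketch using standard AT-cut quartz constants (the constants and example numbers are textbook values, not figures from the paper):

```python
def sauerbrey_mass(delta_f_hz, f0_hz, area_cm2,
                   rho_q=2.648, mu_q=2.947e11):
    """Mass change (grams) inferred from a frequency shift delta_f (Hz)
    for an AT-cut quartz crystal with fundamental frequency f0 (Hz) and
    active area (cm^2).  rho_q: quartz density (g/cm^3); mu_q: quartz
    shear modulus (g/(cm*s^2)).  A negative frequency shift means mass
    was added."""
    sensitivity = (2 * f0_hz ** 2) / ((rho_q * mu_q) ** 0.5)  # Hz*cm^2/g
    return -delta_f_hz * area_cm2 / sensitivity

# A -1 Hz shift on a 5 MHz, 1 cm^2 crystal corresponds to roughly 17.7 ng
added_mass_ng = sauerbrey_mass(-1.0, 5e6, 1.0) * 1e9
```

The quadratic dependence on f0 is why higher-frequency crystals (or thinner films) give finer mass resolution for the same frequency-counting hardware.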
Scale-Free Networks and Commercial Air Carrier Transportation in the United States
NASA Technical Reports Server (NTRS)
Conway, Sheila R.
2004-01-01
Network science, or the art of describing system structure, may be useful for the analysis and control of large, complex systems. For example, networks exhibiting scale-free structure have been found to be particularly well suited to deal with environmental uncertainty and large demand growth. The National Airspace System may be, at least in part, a scalable network. In fact, the hub-and-spoke structure of the commercial segment of the NAS is an often-cited example of an existing scale-free network. After reviewing the nature and attributes of scale-free networks, this assertion is put to the test: is commercial air carrier transportation in the United States well explained by this model? If so, are the positive attributes of these networks, e.g., those of efficiency, flexibility, and robustness, fully realized, or could we effect substantial improvement? This paper first outlines attributes of various network types, then looks more closely at the common carrier air transportation network from the perspectives of the traveler, the airlines, and Air Traffic Control (ATC). Network models are applied within each paradigm, including discussion of the implied strengths and weaknesses of each model. Finally, known limitations of scalable networks are discussed, with an eye towards utilizing the strengths and avoiding the weaknesses of scale-free networks in NAS operations.
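Scale-free degree distributions of the kind invoked above are commonly modeled by preferential attachment (the Barabási-Albert model), in which already well-connected hubs attract a disproportionate share of new links. A minimal sketch for illustration (not the paper's analysis of the NAS):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow an n-node graph in which each new node attaches to m distinct
    existing nodes chosen with probability proportional to their degree.
    Returns the edge list."""
    rng = random.Random(seed)
    repeated = []   # node ids, each appearing once per incident edge;
                    # sampling uniformly from this list is degree-weighted
    edges = []
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            pick = rng.choice(repeated) if repeated else rng.randrange(m)
            chosen.add(pick)
        for t in chosen:
            edges.append((new, t))
            repeated += [new, t]
    return edges

edges = barabasi_albert(100, 2)
```

Early nodes accumulate far more edges than latecomers, producing the heavy-tailed, hub-dominated structure that the hub-and-spoke analogy appeals to, along with its robustness-to-random-failure (but fragility-to-hub-loss) properties.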
Ikeda, Kazuhiro; Nagata, Shogo; Okitsu, Teru; Takeuchi, Shoji
2017-06-06
Human pluripotent stem cells are a potentially powerful cellular resource for application in regenerative medicine. Because such applications require large numbers of human pluripotent stem cell-derived cells, a scalable culture system for human pluripotent stem cells needs to be developed. Several suspension culture systems for human pluripotent stem cell expansion exist; however, it is difficult to control the thickness of cell aggregations in these systems, leading to increased cell death likely caused by limited diffusion of gases and nutrients into the aggregations. Here, we describe a scalable culture system using cell fiber technology for the expansion of human induced pluripotent stem (iPS) cells. The cells were encapsulated and cultured within the core region of core-shell hydrogel microfibers, resulting in the formation of rod-shaped or fiber-shaped cell aggregations with sustained thickness and high viability. By encapsulating the cells with type I collagen, we demonstrated long-term culture of the cells by serial passaging at a high expansion rate (14-fold in four days) while retaining their pluripotency. Therefore, our culture system could be used for large-scale expansion of human pluripotent stem cells for use in regenerative medicine.
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges for the software agents were to achieve the maximum degree of autonomy possible and to create processes aware of dynamic conditions in their environment and with the ability to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
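The basic job control described here (start, stop, and status monitoring of supervised components) can be sketched as a toy local agent; the real ATLAS component communicates over CORBA and runs agents on remote processors, which this sketch omits:

```python
import subprocess
import sys

class ProcessAgent:
    """Minimal sketch of a process-manager agent: basic job control
    (start / status / stop) of named components. Local-only stand-in
    for the CORBA-based server-client agents described in the abstract."""
    def __init__(self):
        self.procs = {}

    def start(self, name, argv):
        # Spawn the component and remember its handle under a logical name
        self.procs[name] = subprocess.Popen(argv)

    def status(self, name):
        p = self.procs.get(name)
        if p is None:
            return "unknown"
        # poll() is None while the child is still running
        return "running" if p.poll() is None else "exited"

    def stop(self, name):
        p = self.procs[name]
        if p.poll() is None:
            p.terminate()
            p.wait(timeout=5)

agent = ProcessAgent()
agent.start("worker", [sys.executable, "-c", "import time; time.sleep(30)"])
print(agent.status("worker"))   # running
agent.stop("worker")
print(agent.status("worker"))   # exited
```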
The Quantum Socket: Wiring for Superconducting Qubits - Part 3
NASA Astrophysics Data System (ADS)
Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.
The implementation of a quantum computer requires quantum error correction codes, which allow errors occurring on physical quantum bits (qubits) to be corrected. Ensembles of physical qubits will be grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits. Thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware to superconducting qubit hardware, where both are operated at millikelvin temperatures.
High-speed and high-fidelity system and method for collecting network traffic
Weigle, Eric H [Los Alamos, NM]
2010-08-24
A system is provided for the high-speed and high-fidelity collection of network traffic. The system can collect traffic at gigabit-per-second (Gbps) speeds, scale to terabit-per-second (Tbps) speeds, and support additional functions such as real-time network intrusion detection. The present system uses a dedicated operating system for traffic collection to maximize efficiency, scalability, and performance. A scalable infrastructure and apparatus for the present system is provided by splitting the work performed on one host onto multiple hosts. The present system simultaneously addresses the issues of scalability, performance, cost, and adaptability with respect to network monitoring, collection, and other network tasks. In addition to high-speed and high-fidelity network collection, the present system provides a flexible infrastructure to perform virtually any function at high speeds such as real-time network intrusion detection and wide-area network emulation for research purposes.
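The idea of splitting the work performed on one host onto multiple hosts can be illustrated by hash-sharding traffic flows across capture hosts, so each host sees a stable, roughly equal slice of the traffic. This is an illustrative scheme, not the patented design:

```python
import hashlib

def assign_host(flow_key: str, n_hosts: int) -> int:
    """Pick the collection host for a flow by hashing its key. The same
    flow always lands on the same host, so per-flow state stays local,
    while the aggregate capture load spreads across all hosts."""
    digest = hashlib.sha256(flow_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_hosts

# Illustrative flow keys; a real collector would use the packet 5-tuple
flows = [f"10.0.0.{i % 256}:{5000 + i}->10.1.0.1:80" for i in range(1000)]
counts = [0] * 4
for f in flows:
    counts[assign_host(f, 4)] += 1
print(sum(counts))  # 1000 flows, spread across 4 capture hosts
```

Because the assignment is a pure function of the flow key, hosts need no coordination to agree on ownership, which is what lets the design scale toward Tbps aggregate rates.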
An Extensible Sensing and Control Platform for Building Energy Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Anthony; Berges, Mario; Martin, Christopher
2016-04-03
The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.
Core Flight System (cFS) a Low Cost Solution for SmallSats
NASA Technical Reports Server (NTRS)
McComas, David; Strege, Susanne; Wilmot, Jonathan
2015-01-01
The cFS is a flight software (FSW) product line that uses a layered architecture and compile-time configuration parameters which make it portable and scalable for a wide range of platforms. The software layers that define the application run-time environment are now under a NASA-wide configuration control board, with the goal of sustaining an open-source application ecosystem.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and of the different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested, namely MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
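The claim that rejected heat scales with flow rate follows from the sensible-heat balance Q = ṁ·cp·ΔT over the radiator loop. A sketch with illustrative coolant properties, not data from the test articles:

```python
def heat_rejection_w(mdot_kg_s, cp_j_kg_k, t_in_k, t_out_k):
    """Sensible heat rejected by a radiator loop, in watts:
    Q = m_dot * cp * (T_in - T_out). All values below are
    illustrative placeholders, not measurements from the paper."""
    return mdot_kg_s * cp_j_kg_k * (t_in_k - t_out_k)

# Halving the flow rate halves the rejection at a fixed temperature drop,
# which is the scaling behavior a freezable radiator exploits.
q_full = heat_rejection_w(0.02, 3900.0, 300.0, 280.0)  # ~water-glycol cp
q_half = heat_rejection_w(0.01, 3900.0, 300.0, 280.0)
print(q_full, q_half)  # 1560.0 780.0
```

In a freezable radiator the frozen fraction of the tubes effectively removes area from the loop, so both the flow path and ΔT adjust with the environment; the hard modeling problem the paper addresses is predicting the transient freeze/thaw, not this steady-state balance.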
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lilibridge, Sean T.; Navarro, Moses
2012-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and of the different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Scalable, full-colour and controllable chromotropic plasmonic printing
Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua
2015-01-01
Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803
Scalable, full-colour and controllable chromotropic plasmonic printing.
Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua
2015-11-16
Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data-encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization.
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.
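Analytic scalability models of this kind typically rest on queueing formulas. As a stand-in for the report's model (whose details are not in the abstract), a single-server M/M/1 response-time calculation shows why modest workload growth can dominate response time near saturation; all parameters are illustrative:

```python
def mm1_response_time(arrival_rate, service_time):
    """M/M/1 mean response time R = S / (1 - rho), with utilization
    rho = lambda * S. A toy stand-in for the kind of analytic queueing
    model used in scalability studies; parameters are illustrative."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError("server saturated: utilization >= 1")
    return service_time / (1.0 - rho)

# Doubling the request rate more than triples response time here,
# signalling that added workload requires added capacity.
print(round(mm1_response_time(0.4, 1.0), 2))  # 1.67
print(round(mm1_response_time(0.8, 1.0), 2))  # 5.0
```

A scalability analysis then asks the inverse question: given a projected arrival rate, how many servers (or faster storage/networks) keep utilization, and hence response time, within target.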
A Transparently-Scalable Metadata Service for the Ursa Minor Storage System
2010-06-25
...provide application-level guarantees. For example, many document editing programs implement atomic updates by writing the new document version into a... ...operations that could involve multiple servers, how close existing systems come to transparent scalability, how systems that handle multi-server...
A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing
NASA Astrophysics Data System (ADS)
Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.
2017-12-01
Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for use in an origami-type laser-based custom manufacturing machine employing folding and cutting of sheet material to manufacture 3D objects. We have developed such a system for use in a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for a laser system based on the estimated data and user’s job requests, a robotic arm manipulating the workpiece in the work space, and middleware, named Etherware, supporting the communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation
On-chip detection of non-classical light by scalable integration of single-photon detectors
Najafi, Faraz; Mower, Jacob; Harris, Nicholas C.; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K.; Englund, Dirk
2015-01-01
Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit—required for scalable quantum photonic circuits—has been limited to <0.2%. Here we introduce a micrometer-scale flip-chip process that enables scalable integration of SNSPDs on a range of photonic circuits. Ten low-jitter detectors are integrated on one circuit with 100% device yield. With an average system detection efficiency beyond 10%, and estimated on-chip detection efficiency of 14–52% for four detectors operated simultaneously, we demonstrate, to the best of our knowledge, the first on-chip photon correlation measurements of non-classical light. PMID:25575346
Modeling and Simulation for an 8 kW Three-Phase Grid-Connected Photo-Voltaic Power System
NASA Astrophysics Data System (ADS)
Cen, Zhaohui
2017-09-01
Grid-connected Photo-Voltaic (PV) systems rated at the 5-10 kW level have advantages of scalability and energy saving, so they are very typical for small-scale household solar applications. In this paper, an 8 kW three-phase grid-connected PV system model is proposed and studied. In this high-fidelity model, basic PV system components such as solar panels, DC-DC converters, DC-AC inverters and the three-phase utility grid are mathematically modelled and organized as a complete simulation model. Also, an overall power controller with Maximum Power Point Tracking (MPPT) is proposed to achieve both high efficiency for solar energy harvesting and grid-connection stability. Finally, simulation results demonstrate the effectiveness of the PV system model and the proposed controller, and power quality issues are discussed.
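MPPT controllers of the kind proposed here are commonly built on the perturb-and-observe algorithm. A minimal sketch against a toy single-peak PV curve; the 30 V maximum-power voltage and 8 kW peak are assumed illustrations, not parameters from the paper:

```python
def p_and_o_step(v, p, v_prev, p_prev, dv=0.5):
    """One perturb-and-observe MPPT step: keep perturbing the operating
    voltage in the direction that increased power, otherwise reverse."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + dv   # power rose with voltage: keep climbing
    return v - dv       # power fell: back off

def pv_power(v, v_mpp=30.0, p_max=8000.0):
    """Toy PV power curve with a single maximum at v_mpp (illustrative
    stand-in for the real, irradiance-dependent panel characteristic)."""
    return max(0.0, p_max * (1.0 - ((v - v_mpp) / v_mpp) ** 2))

v_prev, v = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(100):
    p = pv_power(v)
    v, v_prev, p_prev = p_and_o_step(v, p, v_prev, p_prev), v, p
print(abs(v - 30.0) < 1.0)  # True: operating point oscillates around the MPP
```

The characteristic residual oscillation around the maximum power point (here ±0.5 V, set by the perturbation step) is the usual trade-off of perturb-and-observe against tracking speed.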
Yan Wei, Xiao; Kuang, Shuang Yang; Yang Li, Hua; Pan, Caofeng; Zhu, Guang; Wang, Zhong Lin
2015-01-01
A self-powered system that is interface-free is greatly desired for area-scalable applications. Here we report a self-powered electroluminescent system that consists of a triboelectric generator (TEG) and a thin-film electroluminescent (TFEL) lamp. The TEG provides high-voltage alternating electric output, which fits in well with the needs of the TFEL lamp. Induced charges pumped onto the lamp by the TEG generate an electric field that is sufficient to excite luminescence without an electrical interface circuit. Through rational serial connection of multiple TFEL lamps, effective and area-scalable luminescence is realized. It is demonstrated that multiple types of TEGs are applicable to the self-powered system, indicating that the system can make use of diverse mechanical sources and thus has potentially broad applications in illumination, display, entertainment, indication, surveillance and many others. PMID:26338365
Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics
2017-04-19
...research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance cameras as video sources. The architectural considerations of this system are presented. Issues to be reckoned with in implementing a scalable...
Scalable graphene production: perspectives and challenges of plasma applications
NASA Astrophysics Data System (ADS)
Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth
2016-05-01
Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods.
Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.
Scalable graphene production: perspectives and challenges of plasma applications.
Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth
2016-05-19
Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. 
Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.
Shen, Daozhi; Zou, Guisheng; Liu, Lei; Zhao, Wenzheng; Wu, Aiping; Duley, Walter W; Zhou, Y Norman
2018-02-14
Miniaturization of energy storage devices can significantly decrease the overall size of electronic systems. However, this miniaturization is limited by the reduction of electrode dimensions and the reproducible transfer of small electrolyte drops. This paper reports, first, a simple and scalable direct-writing method for the production of ultraminiature microsupercapacitor (MSC) electrodes, based on femtosecond-laser-reduced graphene oxide (fsrGO) interlaced pads. These pads, separated by 2 μm spacing, are 100 μm long and 8 μm wide. A second stage involves the accurate transfer of an electrolyte microdroplet on top of each individual electrode, which avoids any interference of the electrolyte with other electronic components. Abundant in-plane mesopores in fsrGO induced by the fs laser, together with the ultrashort interelectrode spacing, enable the MSCs to exhibit a high specific capacitance (6.3 mF cm⁻² and 105 F cm⁻³) and ~100% retention after 1000 cycles. An all-graphene resistor-capacitor (RC) filter is also constructed by combining the MSC and an fsrGO resistor, which is confirmed to exhibit highly enhanced performance characteristics. This new hybrid technique combining fs laser direct writing and precise microdroplet transfer easily enables scalable production of ultraminiature MSCs, which is believed to be significant for practical application of micro-supercapacitor microelectronic systems.
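The two reported capacitance figures are mutually consistent for a sub-micrometre active film: dividing the areal value by the film thickness gives the volumetric value. A quick unit-conversion check; the 0.6 μm thickness is back-calculated from the two reported numbers, not stated in the abstract:

```python
def areal_to_volumetric(c_areal_mf_cm2, film_thickness_um):
    """Convert areal capacitance (mF/cm^2) to volumetric capacitance
    (F/cm^3) for a film of the given thickness. The thickness used
    below is inferred from the reported figures, not measured."""
    c_f_cm2 = c_areal_mf_cm2 * 1e-3        # mF/cm^2 -> F/cm^2
    thickness_cm = film_thickness_um * 1e-4  # um -> cm
    return c_f_cm2 / thickness_cm

# 6.3 mF/cm^2 over an assumed 0.6 um film recovers the reported 105 F/cm^3
print(round(areal_to_volumetric(6.3, 0.6), 1))  # 105.0
```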
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
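The "purely discrete event style of execution" that makes runs repeatable can be illustrated with a toy sequential simulator: virtual ranks exchange timestamped messages, and events always execute in global virtual-time order, so every rerun produces an identical trace. This is only a sketch of the execution model, not the μπ implementation:

```python
import heapq

def run_virtual_ranks(n_ranks, n_hops):
    """Toy sequential discrete-event simulator: each virtual rank sends a
    timestamped message to its neighbour for n_hops hops; events execute
    strictly in virtual-time order, making reruns exactly repeatable."""
    events = [(0.0, rank, 0) for rank in range(n_ranks)]  # (vtime, rank, hop)
    heapq.heapify(events)
    log = []
    while events:
        vtime, rank, hop = heapq.heappop(events)
        log.append((vtime, rank))
        if hop + 1 < n_hops:
            # message to the next rank arrives after a fixed virtual latency
            heapq.heappush(events, (vtime + 1.0, (rank + 1) % n_ranks, hop + 1))
    return log

log = run_virtual_ranks(4, 3)
print(len(log), log == run_virtual_ranks(4, 3))  # 12 True
```

Because ordering depends only on virtual timestamps (never on wall-clock scheduling), the trace is deterministic; a parallel engine such as μsik preserves the same property while distributing the event queue.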
A flexible architecture for advanced process control solutions
NASA Astrophysics Data System (ADS)
Faron, Kamyar; Iourovitski, Ilia
2005-05-01
Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single-variable solutions to sophisticated and cutting-edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost-conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost-prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions, and most in-house developed solutions, lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned or delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture in satisfying these requirements.
Blue Control Technologies has developed an advanced service-oriented-architecture Run-to-Run Control System which addresses these requirements.
MOEMs, key optical components for future astronomical instrumentation in space
NASA Astrophysics Data System (ADS)
Zamkotsian, Frédéric; Dohlen, Kjetil; Burgarella, Denis; Ferrari, Marc; Buat, Veronique
2017-11-01
Based on micro-electronics fabrication processes, Micro-Opto-Electro-Mechanical Systems (MOEMS) are under study, in order to be integrated in next-generation astronomical instruments and telescopes, especially for space missions. The main advantages of micro-optical components are their compactness, scalability, and specific task customization using elementary building blocks, and they allow remote control. As these systems are easily replicable, the price of the components decreases dramatically as their number increases. The two major applications of MOEMS are Multi-Object Spectroscopy masks and Deformable Mirror systems.
The Deployment of Routing Protocols in Distributed Control Plane of SDN
Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu
2014-01-01
Software defined network (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing inspiration from the RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to traditional routing strategies. PMID:25250395
Flexible Multi agent Algorithm for Distributed Decision Making
2015-01-01
How, J. P. Consensus-Based Auction Approaches for Decentralized Task Assignment. Proceedings of the AIAA Guidance, Navigation, and Control... G.; Kim, Y. Market-Based Decentralized Task Assignment for Cooperative UAV Mission Including Rendezvous. Proceedings of the AIAA Guidance... scalable and adaptable to a variety of specific mission tasks. Additionally, the algorithm could easily be adapted for use on land- or sea-based systems
NASA Astrophysics Data System (ADS)
Bianco, M.; Martoiu, S.; Sidiropoulou, O.; Zibell, A.
2015-12-01
A Micromegas (MM) quadruplet prototype with an active area of 0.5 m2 that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used to generate valid event fragments and transmit the data to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Software, a dedicated Micromegas segment has been implemented, in order to include the detector inside the main ATLAS DAQ partition. A full set of tests, on the hardware and software aspects, is presented.
Simultaneous deterministic control of distant qubits in two semiconductor quantum dots.
Gamouras, A; Mathew, R; Freisem, S; Deppe, D G; Hall, K C
2013-10-09
In optimal quantum control (OQC), a target quantum state of matter is achieved by tailoring the phase and amplitude of the control Hamiltonian through femtosecond pulse-shaping techniques and powerful adaptive feedback algorithms. Motivated by recent applications of OQC in quantum information science as an approach to optimizing quantum gates in atomic and molecular systems, here we report the experimental implementation of OQC in a solid-state system consisting of distinguishable semiconductor quantum dots. We demonstrate simultaneous high-fidelity π and 2π single qubit gates in two different quantum dots using a single engineered infrared femtosecond pulse. These experiments enhance the scalability of semiconductor-based quantum hardware and lay the foundation for applications of pulse shaping to optimize quantum gates in other solid-state systems.
Serving ocean model data on the cloud
Meisinger, Michael; Farcas, Claudiu; Farcas, Emilia; Alexander, Charles; Arrott, Matthew; de La Beaujardiere, Jeff; Hubbard, Paul; Mendelssohn, Roy; Signell, Richard P.
2010-01-01
The NOAA-led Integrated Ocean Observing System (IOOS) and the NSF-funded Ocean Observatories Initiative Cyberinfrastructure Project (OOI-CI) are collaborating on a prototype data delivery system for numerical model output and other gridded data using cloud computing. The strategy is to take an existing distributed system for delivering gridded data and redeploy it on the cloud, modifying the system so it can harness the scalability of the cloud and adding functionality that this scalability affords.
Kuethe, Jeffrey T; Basu, Kallol; Orr, Robert K; Ashley, Eric; Poirier, Marc; Tan, Lushi
2018-02-15
The evolution of a scalable process for the preparation of methylcyclobutanol-pyridyl ether 1 is described. Key aspects of this development including careful control of the stereochemistry, elimination of chromatography, and application to kilogram-scale synthesis are addressed. Copyright © 2017 Elsevier Ltd. All rights reserved.
PLGA-lecithin-PEG core-shell nanoparticles for controlled drug delivery.
Chan, Juliana M; Zhang, Liangfang; Yuet, Kai P; Liao, Grace; Rhee, June-Wha; Langer, Robert; Farokhzad, Omid C
2009-03-01
Current approaches to encapsulate and deliver therapeutic compounds have focused on developing liposomal and biodegradable polymeric nanoparticles (NPs), resulting in clinically approved therapeutics such as Doxil/Caelyx and Genexol-PM, respectively. Our group recently reported the development of biodegradable core-shell NP systems that combined the beneficial properties of liposomal and polymeric NPs for controlled drug delivery. Herein we report the parameters that alter the biological and physicochemical characteristics, stability, drug release properties and cytotoxicity of these core-shell NPs. We further define scalable processes for the formulation of these NPs in a reproducible manner. These core-shell NPs consist of (i) a poly(D,L-lactide-co-glycolide) hydrophobic core, (ii) a soybean lecithin monolayer, and (iii) a poly(ethylene glycol) shell, and were synthesized by a modified nanoprecipitation method combined with self-assembly. Preparation of the NPs showed that various formulation parameters such as the lipid/polymer mass ratio and lipid/lipid-PEG molar ratio controlled NP physical stability and size. We encapsulated a model chemotherapy drug, docetaxel, in the NPs and showed that the amount of lipid coverage affected its drug release kinetics. Next, we demonstrated a potentially scalable process for the formulation, purification, and storage of NPs. Finally, we tested the cytotoxicity using MTT assays on two model human cell lines, HeLa and HepG2, and demonstrated the biocompatibility of these particles in vitro. Our data suggest that the PLGA-lecithin-PEG core-shell NPs may be a useful new controlled release drug delivery system.
Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.
Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann
2015-01-01
Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke with rapidly increasing vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity field, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.
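The core quantity in filament-based methods like the one above is the velocity a vortex filament induces, given by the Biot-Savart law u(x) = Γ/(4π) Σ dl × r / |r|³ over filament segments. The sketch below is illustrative, not the authors' code: it discretizes a circular filament and checks the induced velocity at the ring's center against the known analytic value Γ/(2R).

```python
# Biot-Savart velocity induced by a discretized vortex filament.
# At the center of a circular filament of radius R and circulation
# Gamma, the analytic induced speed is Gamma/(2R).
import math

def induced_velocity(segments, x, gamma=1.0):
    """Sum Biot-Savart contributions of (midpoint, dl) segments at x."""
    u = [0.0, 0.0, 0.0]
    for mid, dl in segments:
        r = [x[i] - mid[i] for i in range(3)]
        r3 = math.dist(x, mid) ** 3
        cross = [dl[1]*r[2] - dl[2]*r[1],
                 dl[2]*r[0] - dl[0]*r[2],
                 dl[0]*r[1] - dl[1]*r[0]]
        for i in range(3):
            u[i] += gamma / (4*math.pi) * cross[i] / r3
    return u

def vortex_ring(radius=1.0, n=720):
    """Discretize a circular filament in the z=0 plane into chords."""
    segs = []
    for k in range(n):
        t0, t1 = 2*math.pi*k/n, 2*math.pi*(k+1)/n
        tm = 0.5*(t0 + t1)
        mid = (radius*math.cos(tm), radius*math.sin(tm), 0.0)
        dl = (radius*(math.cos(t1) - math.cos(t0)),
              radius*(math.sin(t1) - math.sin(t0)), 0.0)
        segs.append((mid, dl))
    return segs

# z-velocity at the ring center; analytic value is 1/(2*1) = 0.5
uz = induced_velocity(vortex_ring(), (0.0, 0.0, 0.0))[2]
```

A grid-bridged scheme of the kind the paper proposes would evaluate such sums onto grid nodes once, then interpolate to smoke particles, instead of summing per particle.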
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps diagnose and shut down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message-passing system.
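The heartbeat part of such a framework reduces to a simple rule: each node periodically reports, and the monitor flags a node as faulty when its last report is older than a timeout. A minimal, hypothetical sketch (node names and timeout are invented; the paper's framework additionally covers checkpointing and model-based redundancy, not shown):

```python
# Minimal heartbeat-based fault detector: nodes report timestamps,
# and any node silent for longer than `timeout` becomes a suspect
# for shutdown before the system becomes unsafe.

class HeartbeatMonitor:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}          # node id -> time of last heartbeat

    def heartbeat(self, node, now):
        self.last_seen[node] = now

    def faulty_nodes(self, now):
        """Nodes whose heartbeat is overdue; candidates for shutdown."""
        return sorted(n for n, t in self.last_seen.items()
                      if now - t > self.timeout)

mon = HeartbeatMonitor(timeout=2.0)
mon.heartbeat("brake-ecu", now=0.0)
mon.heartbeat("steer-ecu", now=0.0)
mon.heartbeat("steer-ecu", now=1.5)   # steer-ecu keeps reporting
suspects = mon.faulty_nodes(now=3.0)  # brake-ecu has been silent 3.0 s
```

In a real deployment the timeout must exceed the worst-case message latency, or healthy nodes will be falsely accused.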
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Effect of asynchrony on numerical simulations of fluid flow phenomena
NASA Astrophysics Data System (ADS)
Konduri, Aditya; Mahoney, Bryan; Donzis, Diego
2015-11-01
Designing scalable CFD codes on massively parallel computers is a challenge. This is mainly due to the large number of communications between processing elements (PEs) and their synchronization, leading to idling of PEs. Indeed, communication will likely be the bottleneck in the scalability of codes on Exascale machines. Our recent work on asynchronous computing for PDEs based on finite differences has shown that it is possible to relax synchronization between PEs at a mathematical level. Computations then proceed regardless of the status of communication, reducing the idle time of PEs and improving the scalability. However, the accuracy of the schemes is greatly affected. We have proposed asynchrony-tolerant (AT) schemes to address this issue. In this work, we study the effect of asynchrony on the solution of fluid flow problems using standard and AT schemes. We show that asynchrony creates additional scales with low energy content. The specific wavenumbers affected can be attributed to two distinct effects: the randomness in the arrival of messages and the corresponding switching between schemes. Understanding these errors allows us to effectively control them, demonstrating the method's feasibility for solving turbulent flows at realistic conditions on future computing systems.
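The kind of relaxed synchronization studied here can be mimicked in a toy experiment. The sketch below is illustrative only (it is not the authors' AT scheme): two "PEs" each advance half of a 1D heat equation with an explicit central-difference step, and with some probability a PE reads a one-step-stale ghost value from its neighbor instead of the current one, imitating a late message.

```python
# Toy asynchrony experiment: 1D heat equation u_t = u_xx, explicit
# central differences (r = dt/dx^2 = 0.25, stable), two subdomains.
# Ghost values exchanged at the interface may be one step stale.
import random

def step(u, left_ghost, right_ghost, r=0.25):
    """One explicit step on a subdomain with supplied ghost values."""
    ext = [left_ghost] + u + [right_ghost]
    return [ext[i] + r*(ext[i-1] - 2*ext[i] + ext[i+1])
            for i in range(1, len(ext) - 1)]

random.seed(0)
n = 32
left  = [1.0 if i == n-1 else 0.0 for i in range(n)]  # spike at interface
right = [1.0 if i == 0   else 0.0 for i in range(n)]
old_r0, old_lend = right[0], left[-1]                 # previous edge values
for _ in range(200):
    # with probability 0.3 a PE uses its neighbor's *previous* edge value
    g_for_left  = old_r0   if random.random() < 0.3 else right[0]
    g_for_right = old_lend if random.random() < 0.3 else left[-1]
    old_r0, old_lend = right[0], left[-1]
    left  = step(left,  0.0, g_for_left)   # Dirichlet 0 at outer ends
    right = step(right, g_for_right, 0.0)

peak = max(left + right)   # diffusion should smear the initial spike
```

Even with stale interface data the diffusive solve remains stable here; what asynchrony changes, as the abstract notes, is the fine-scale error content near the interface, which AT schemes are designed to suppress.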
Level-2 Milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futral, W. Scott; Gyllenhaal, John C.; Hedges, Richard M.
2010-07-02
This report documents LLNL SAP project activities in anticipation of the ASC Sequoia system, ASC L2 milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn), due June 30, 2010.
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.
Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, M.; Cafagna, F.; Fiergolski, A.; Radicioni, E.
2013-11-01
The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. At LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition Systems (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, data is retransmitted to the VME interface and to another mezzanine card plugged in the FED module. The VME bus maximum bandwidth limits the maximum first level trigger (L1A) rate to 1 kHz. In order to get rid of the VME bottleneck and improve scalability and the overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increasing data filtering, implementing a second-level trigger event selection based on hardware pattern recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The obtained results and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.
An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication
NASA Astrophysics Data System (ADS)
Li, Deng; Liu, Hui; Vasilakos, Athanasios
The term Autonomic Communication (AC) refers to self-managing systems which are capable of supporting self-configuration, self-healing and self-optimization. However, information reflection and collection, lack of centralized control, non-cooperation and so on are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protection) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, this has opened the door to migrating research techniques from P2P systems. P2P's meaning can be better understood through a set of key characteristics similar to AC: decentralized organization, self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed more specifically than structured systems for the heterogeneous Internet environment, where the nodes' persistence and availability are not guaranteed. Motivated by the challenges in AC and based on a comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like, P2P overlays, the ESR is a whole new three-dimensional structure that improves the efficiency of routing, while information exchange takes place among immediate neighbors with local information to make the system scalable and fault-tolerant. Furthermore, rather than a complex game theory or incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure which can guarantee the continuity of each node's record in order to discourage negative behavior in an autonomous environment such as AC.
Generic, scalable and decentralized fault detection for robot swarms.
Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon
2017-01-01
Robot swarms are large-scale multirobot systems with decentralized control which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.
Processing Diabetes Mellitus Composite Events in MAGPIE.
Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael
2016-02-01
The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions happening among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS, doctors can personalize for each patient monitoring rules that can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a defined set of monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
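A run-time-modifiable monitoring rule of the kind described can be represented as plain data interpreted by a small matcher, so a clinician-facing graphical tool can edit it without redeploying the agent. The sketch below is a hedged illustration, not MAGPIE's actual rule language; the rule shape, field names, and threshold values are invented.

```python
# A temporal-pattern monitoring rule as editable data: fire when the
# last `n` physiological readings all exceed `threshold`.

def matches(rule, readings):
    """True if `readings` ends with `n` consecutive values > threshold."""
    n, thr = rule["n"], rule["threshold"]
    return len(readings) >= n and all(v > thr for v in readings[-n:])

# Illustrative rule: sustained high glucose (values in mg/dL).
hyperglycemia = {"name": "sustained high glucose",
                 "threshold": 180, "n": 3}

alert = matches(hyperglycemia, [120, 190, 200, 210])   # fires
calm  = matches(hyperglycemia, [190, 200, 120])        # does not fire
```

Because the rule is data, a doctor could change `n` or `threshold` per patient at run time, which is the modifiability the evaluation section emphasizes.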
Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...
2015-01-01
We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
Designing of smart home automation system based on Raspberry Pi
NASA Astrophysics Data System (ADS)
Saini, Ravi Prakash; Singh, Bhanu Pratap; Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn
2016-03-01
Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages, and they are well suited to academic research. This paper proposes a method for implementing a Raspberry Pi-based home automation system with an Android phone access interface. The power consumption profile across the connected load is measured accurately through programming. Users can access the graph of total power consumption with respect to time worldwide using their Dropbox account. An Android application has been developed to channelize the monitoring and controlling operation of home appliances remotely. This application facilitates controlling the operating pins of the Raspberry Pi by pressing the corresponding key for turning any desired appliance "on" or "off". Systems can range from simple room lighting control to smart microcontroller-based hybrid systems incorporating several other additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through a cloud-based data storage protocol, reliability, energy efficiency, etc.
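The pin-per-appliance control described above can be sketched with a pluggable output backend, so the logic runs off-device. This is a hypothetical illustration (appliance names and pin numbers are invented): on an actual Raspberry Pi the backend would call `RPi.GPIO.output(pin, level)` after the usual `setmode`/`setup` initialization, which is omitted here.

```python
# Appliance switching logic with a pluggable `write(pin, level)`
# backend standing in for real GPIO output.

class HomeController:
    def __init__(self, pin_map, write):
        self.pin_map = pin_map      # appliance name -> BCM pin number
        self.write = write          # output backend, e.g. GPIO.output
        self.state = {name: False for name in pin_map}

    def switch(self, name, on):
        """Handle a key press from the phone app: set the appliance pin."""
        self.state[name] = on
        self.write(self.pin_map[name], 1 if on else 0)

written = {}                        # records pin writes for inspection
ctrl = HomeController({"lamp": 17, "fan": 27},
                      write=lambda pin, lvl: written.__setitem__(pin, lvl))
ctrl.switch("lamp", True)           # app key press -> lamp "on"
ctrl.switch("fan", False)           # app key press -> fan "off"
```

Separating the switching logic from the GPIO backend is also what makes the system testable and extensible toward the hybrid configurations the abstract mentions.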
AEGIS: a robust and scalable real-time public health surveillance system.
Reis, Ben Y; Kirby, Chaim; Hadden, Lucy E; Olson, Karen; McMurry, Andrew J; Daniel, James B; Mandl, Kenneth D
2007-01-01
In this report, we describe the Automated Epidemiological Geotemporal Integrated Surveillance system (AEGIS), developed for real-time population health monitoring in the state of Massachusetts. AEGIS provides public health personnel with automated near-real-time situational awareness of utilization patterns at participating healthcare institutions, supporting surveillance of bioterrorism and naturally occurring outbreaks. As real-time public health surveillance systems become integrated into regional and national surveillance initiatives, the challenges of scalability, robustness, and data security become increasingly prominent. A modular and fault tolerant design helps AEGIS achieve scalability and robustness, while a distributed storage model with local autonomy helps to minimize risk of unauthorized disclosure. The report includes a description of the evolution of the design over time in response to the challenges of a regional and national integration environment.
Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications
NASA Astrophysics Data System (ADS)
Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.
2015-12-01
A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized solar cell (DSC) as a low-cost and "scalable" optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by sizing the cell area. Second, because harvested environmental input power is unstable, we evaluated the dependence of the scalable input-power characteristics on the input light intensity, spectral distribution, and light inlet direction angle. The TiO2 fabrication relied on nanoimprint technology designed for optical optimization, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.
Revisiting control establishments for emerging energy hubs
NASA Astrophysics Data System (ADS)
Nasirian, Vahidreza
Emerging small-scale energy systems, i.e., microgrids and smart grids, rely on centralized controllers for voltage regulation, load sharing, and economic dispatch. However, the central controller is a single point of failure in such a design, as failure of either the controller or the attached communication links can render the entire system inoperable. This work seeks alternative distributed control structures to improve system reliability and scalability. A cooperative distributed controller is proposed that uses a noise-resilient voltage estimator and handles global voltage regulation and load sharing across a DC microgrid. Distributed adaptive droop control is also investigated as an alternative solution. A droop-free distributed control is offered to handle voltage/frequency regulation and load sharing in AC systems. This solution does not require frequency measurement and thus features fast frequency regulation. Distributed economic dispatch is also studied, where a distributed protocol is designed that controls generation units to merge their incremental costs into a consensus and thus push the entire system to generate at minimum cost. Experimental verifications and Hardware-in-the-Loop (HIL) simulations are used to study the efficacy of the proposed control protocols.
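The incremental-cost consensus idea can be illustrated with assumed dynamics (this is a textbook-style sketch, not the dissertation's exact protocol, and all cost coefficients are invented). Each unit i with cost a_i P_i² + b_i P_i has incremental cost λ_i = 2 a_i P_i + b_i; units average λ with their neighbors and nudge it by the power mismatch, so all λ_i converge to the common value at which total generation meets demand. The global mismatch is used directly here for brevity; fully distributed variants estimate it by consensus as well.

```python
# Consensus-based economic dispatch: equal incremental cost criterion.
a = [0.10, 0.125, 0.15]          # quadratic cost coefficients (assumed)
b = [2.0, 3.0, 2.5]              # linear cost coefficients (assumed)
demand = 100.0
W = [[0.50, 0.25, 0.25],         # doubly stochastic averaging weights
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
lam = [5.0, 6.0, 7.0]            # initial incremental-cost estimates
eps = 0.01                       # mismatch feedback gain

for _ in range(1500):
    # each unit sets its output from its own incremental-cost estimate
    P = [(lam[i] - b[i]) / (2*a[i]) for i in range(3)]
    mismatch = demand - sum(P)
    # average with neighbors, then correct toward supply = demand
    lam = [sum(W[i][j]*lam[j] for j in range(3)) + eps*mismatch
           for i in range(3)]

P = [(lam[i] - b[i]) / (2*a[i]) for i in range(3)]
```

At convergence all λ_i agree (the optimality condition for quadratic costs) and total generation matches the demand, with no central dispatcher in the loop.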
Scalability enhancement of AODV using local link repairing
NASA Astrophysics Data System (ADS)
Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.
2014-09-01
Dynamic changes in the topology of an ad hoc network make it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research work on ad hoc networks focuses on routing and medium access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node is proposed in case of link failure. These protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modification. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. From the simulation results, it is clear that the scalability performance of the routing protocol is improved by the link repairing method. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
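The local repair idea can be sketched as a graph search: when the link to the next hop breaks, the upstream node looks for a short alternate path to the *next-to-next* node instead of re-running full route discovery from the source. The illustration below is hedged: it uses a plain BFS over a toy topology, whereas the real protocol performs this search with scoped route-request packets.

```python
# Local link repair as a search for a detour to the next-to-next node,
# avoiding the broken link.
from collections import deque

def local_repair(adj, broken, target):
    """BFS from the upstream node to `target`, skipping the broken link."""
    src, dead = broken
    q, seen = deque([[src]]), {src}
    while q:
        path = q.popleft()
        for nxt in adj[path[-1]]:
            if (path[-1], nxt) == (src, dead) or nxt in seen:
                continue
            if nxt == target:
                return path + [nxt]
            seen.add(nxt)
            q.append(path + [nxt])
    return None    # no local detour: fall back to full AODV discovery

# route was A -> B -> C -> D; the A-B link fails, so A repairs toward C
adj = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D", "E"],
       "D": ["C"], "E": ["A", "C"]}
detour = local_repair(adj, broken=("A", "B"), target="C")
```

Repairing locally keeps control traffic bounded near the failure, which is where the scalability gain over source-initiated rediscovery comes from.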
NASA Technical Reports Server (NTRS)
Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.
2003-01-01
This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most file system software is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copying and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
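The work-stealing pattern pcircle builds on can be sketched in a single process; the real suite distributes this over MPI ranks. The worker count and the steal-half policy below are assumptions, not pcircle's actual scheduling.

```python
import os
from collections import deque

def walk_work_stealing(root, n_workers=4):
    """Round-robin simulation of workers that steal traversal tasks from peers.

    Each queued task is a path: directories expand into more tasks,
    files are collected. Idle workers steal from the busiest peer, so the
    tree is traversed in parallel without a central queue.
    """
    queues = [deque() for _ in range(n_workers)]
    queues[0].append(root)
    files, active = [], True
    while active:
        active = False
        for w in range(n_workers):
            if not queues[w]:
                # idle worker: steal half of the largest peer queue
                victim = max(range(n_workers), key=lambda v: len(queues[v]))
                for _ in range(len(queues[victim]) // 2):
                    queues[w].append(queues[victim].popleft())
            if queues[w]:
                active = True
                path = queues[w].pop()
                if os.path.isdir(path):
                    for entry in os.listdir(path):
                        queues[w].append(os.path.join(path, entry))
                else:
                    files.append(path)
    return files
```

A parallel copy or checksum tool would process each collected file in place of the `files.append` step.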
Simulating advanced life support systems to test integrated control approaches
NASA Astrophysics Data System (ADS)
Kortenkamp, D.; Bell, S.
Simulations allow for testing of life support control approaches before hardware is designed and built. Simulations also allow for the safe exploration of alternative control strategies during life support operation. As such, they are an important component of any life support research program and testbed. This paper describes a specific advanced life support simulation being created at NASA Johnson Space Center. It is a discrete-event simulation that is dynamic and stochastic. It simulates all major components of an advanced life support system, including crew (with variable ages, weights and genders), biomass production (with scalable plantings of ten different crops), water recovery, air revitalization, food processing, solid waste recycling and energy production. Each component is modeled as a producer of certain resources and a consumer of certain resources. The control system must monitor (via sensors) and control (via actuators) the flow of resources throughout the system to provide life support functionality. The simulation is written in an object-oriented paradigm that makes it portable, extensible and reconfigurable.
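The producer/consumer resource model described above can be sketched as follows; the component names, resources, and rates are illustrative only, not taken from the JSC simulation.

```python
class Component:
    """A life-support component that consumes and produces resources per tick."""

    def __init__(self, name, consumes, produces):
        self.name, self.consumes, self.produces = name, consumes, produces

    def step(self, stores):
        # only run if every input resource is available this tick
        if all(stores.get(r, 0) >= amt for r, amt in self.consumes.items()):
            for r, amt in self.consumes.items():
                stores[r] -= amt
            for r, amt in self.produces.items():
                stores[r] = stores.get(r, 0) + amt
            return True
        return False

# two toy components closing a partial loop: crew exhales CO2, plants fix it
crew = Component('crew', {'o2': 1, 'food': 1}, {'co2': 1})
plants = Component('biomass', {'co2': 1, 'water': 1}, {'o2': 1, 'food': 1})

stores = {'o2': 5, 'food': 5, 'co2': 0, 'water': 100}
for tick in range(10):
    for c in (crew, plants):
        c.step(stores)
```

In this toy loop the O2/food/CO2 stores reach a steady state while water is drawn down, which is the kind of flow a control system would monitor via sensors and correct via actuators.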
An open-source, extensible system for laboratory timing and control
NASA Astrophysics Data System (ADS)
Gaskell, Peter E.; Thorn, Jeremy J.; Alba, Sequoia; Steck, Daniel A.
2009-11-01
We describe a simple system for timing and control, which provides control of analog, digital, and radio-frequency signals. Our system differs from most common laboratory setups in that it is open source, built from off-the-shelf components, synchronized to a common and accurate clock, and connected over an Ethernet network. A simple bus architecture facilitates creating new and specialized devices with only moderate experience in circuit design. Each device operates independently, requiring only an Ethernet network connection to the controlling computer, a clock signal, and a trigger signal. This makes the system highly robust and scalable. The devices can all be connected to a single external clock, allowing synchronous operation of a large number of devices for situations requiring precise timing of many parallel control and acquisition channels. Provided an accurate enough clock, these devices are capable of triggering events separated by one day with near-microsecond precision. We have achieved precisions of ~0.1 ppb (parts per 10^9) over 16 s.
A Laboratory for Characterizing the Efficacy of Moving Target Defense
2016-10-25
We at the College of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization, emulation, and mutable network..., balancing this goal with the resource constraints of a small number of servers, and making virtual nodes "real enough" from the view of attackers.
Photoignition Torch Applied to Cryogenic H2/O2 Coaxial Jet
2016-12-06
This ignition system is suitable for certain thrusters and liquid rocket engines, and is scalable for applications in different combustion chambers such as gas turbines, gas generators, liquid rocket engines, and multi-grain solid rocket motors. Keywords: photoignition, fuel spray ignition, high pressure ignition.
Toward cost-effective solar energy use.
Lewis, Nathan S
2007-02-09
At present, solar energy conversion technologies face cost and scalability hurdles in the technologies required for a complete energy system. To provide a truly widespread primary energy source, solar energy must be captured, converted, and stored in a cost-effective fashion. New developments in nanotechnology, biotechnology, and the materials and physical sciences may enable step-change approaches to cost-effective, globally scalable systems for solar energy use.
Arrays of individually controlled ions suitable for two-dimensional quantum simulations
Mielenz, Manuel; Kalis, Henning; Wittemer, Matthias; Hakelberg, Frederick; Warring, Ulrich; Schmied, Roman; Blain, Matthew; Maunz, Peter; Moehring, David L.; Leibfried, Dietrich; Schaetz, Tobias
2016-01-01
A precisely controlled quantum system may reveal a fundamental understanding of another, less accessible system of interest. A universal quantum computer is currently out of reach, but an analogue quantum simulator that makes relevant observables, interactions and states of a quantum model accessible could permit insight into complex dynamics. Several platforms have been suggested and proof-of-principle experiments have been conducted. Here, we operate two-dimensional arrays of three trapped ions in individually controlled harmonic wells forming equilateral triangles with side lengths 40 and 80 μm. In our approach, which is scalable to arbitrary two-dimensional lattices, we demonstrate individual control of the electronic and motional degrees of freedom, preparation of a fiducial initial state with ion motion close to the ground state, as well as a tuning of couplings between ions within experimental sequences. Our work paves the way towards a quantum simulator of two-dimensional systems designed at will. PMID:27291425
A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.
Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel
2016-03-09
In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability for monitoring the voltage in a PEFC at cell level.
Launch Control Systems: Moving Towards a Scalable, Universal Platform for Future Space Endeavors
NASA Technical Reports Server (NTRS)
Sun, Jonathan
2011-01-01
The redirection of NASA away from the Constellation program calls for heavy reliance on commercial launch vehicles in the near future in order to reduce costs and shift focus to research and long-term space exploration. To support them, NASA will renovate Kennedy Space Center's launch facilities and make them available for commercial use. However, NASA's current launch software is deeply tied to the now-retired Space Shuttle and is largely incompatible with other vehicles. Therefore, a new Launch Control System must be designed that is adaptable to a variety of launch protocols and vehicles. This paper presents some of the features and advantages of the new system from the perspectives of both the software developers and the launch engineers.
Optimized autonomous space in-situ sensor web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.
2010-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ component was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.
Motivation and Design of the Sirocco Storage System Version 1.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Matthew Leon; Ward, H. Lee; Danielson, Geoffrey Charles
Sirocco is a massively parallel, high-performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill the need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco.
Design of an H.264/SVC resilient watermarking scheme
NASA Astrophysics Data System (ADS)
Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter
2010-01-01
The rapid dissemination of media technologies has led to an increase of unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.
NASA Astrophysics Data System (ADS)
Wang, Shengtao
The ability to precisely and coherently control atomic systems has improved dramatically in the last two decades, driving remarkable advances in quantum computation and simulation. In recent years, atomic and atom-like systems have also served as a platform to study topological phases of matter and non-equilibrium many-body physics. Together with rapid theoretical progress, the employment of these systems is expanding our understanding of a range of physical phenomena. In this dissertation, I draw on state-of-the-art experimental technology to develop several new ideas for controlling and applying atomic systems. In the first part of this dissertation, we propose several novel schemes to realize, detect, and probe topological phases in atomic and atom-like systems. We first theoretically study the intriguing properties of Hopf insulators, a peculiar type of topological insulator beyond the standard classification paradigm of topological phases. Using a solid-state quantum simulator, we report the first experimental observation of Hopf insulators. We demonstrate the Hopf fibration with fascinating topological links in the experiment, showing clear signals of topological phase transitions for the underlying Hamiltonian. Next, we propose a feasible experimental scheme to realize the chiral topological insulator in three dimensions. These are a type of topological insulator protected by the chiral symmetry that has thus far remained unobserved in experiment. We then introduce a method to directly measure topological invariants in cold-atom experiments. This detection scheme is general and applicable to probing different topological insulators in any spatial dimension. In another study, we theoretically discover a new type of topological gapless ring, dubbed a Weyl exceptional ring, in three-dimensional dissipative cold atomic systems.
In the second part of this dissertation, we focus on the application of atomic systems in quantum computation and simulation. Trapped atomic ions are one of the leading platforms for building a scalable, universal quantum computer. The common one-dimensional setup, however, greatly limits the system's scalability. By solving the critical problem of micromotion, we propose a two-dimensional architecture for scalable trapped-ion quantum computation. Hamiltonian tomography for many-body quantum systems is essential for benchmarking quantum computation and simulation. By employing dynamical decoupling, we propose a scalable scheme for full Hamiltonian tomography. The required number of measurements increases only polynomially with the system size, in contrast to the exponential scaling of common methods. Finally, we work toward the goal of demonstrating quantum supremacy. A number of sampling tasks, such as the boson sampling problem, have been proposed to be classically intractable under mild assumptions. An intermediate quantum computer can efficiently solve the sampling problem, but the correct operation of the device is not known to be classically verifiable. Toward practical verification, we present an experimentally friendly scheme to extract useful and robust information from quantum boson samplers based on coarse-grained measurements. In a separate study, we introduce a new model built from translation-invariant Ising-interacting spins. This model possesses several advantageous properties, catalyzing the ultimate experimental demonstration of quantum supremacy.
3D-printed components for quantum devices.
Saint, R; Evans, W; Zhou, Y; Barrett, T; Fromhold, T M; Saleh, E; Maskery, I; Tuck, C; Wildman, R; Oručević, F; Krüger, P
2018-05-30
Recent advances in the preparation, control and measurement of atomic gases have led to new insights into the quantum world and unprecedented metrological sensitivities, e.g. in measuring gravitational forces and magnetic fields. The full potential of applying such capabilities to areas as diverse as biomedical imaging, non-invasive underground mapping, and GPS-free navigation can only be realised with the scalable production of efficient, robust and portable devices. We introduce additive manufacturing as a production technique of quantum device components with unrivalled design freedom and rapid prototyping. This provides a step change in efficiency, compactness and facilitates systems integration. As a demonstrator we present an ultrahigh vacuum compatible ultracold atom source dissipating less than ten milliwatts of electrical power during field generation to produce large samples of cold rubidium gases. This disruptive technology opens the door to drastically improved integrated structures, which will further reduce size and assembly complexity in scalable series manufacture of bespoke portable quantum devices.
Molecular nanomagnets with switchable coupling for quantum simulation
Chiesa, Alessandro; Whitehead, George F. S.; Carretta, Stefano; ...
2014-12-11
Molecular nanomagnets are attractive candidate qubits because of their wide inter- and intra-molecular tunability. Uniform magnetic pulses could be exploited to implement one- and two-qubit gates in the presence of a properly engineered pattern of interactions, but the synthesis of suitable and potentially scalable supramolecular complexes has proven a very hard task. Indeed, no quantum algorithms have ever been implemented, not even a proof-of-principle two-qubit gate. In this paper we show that the magnetic couplings in two supramolecular {Cr7Ni}-Ni-{Cr7Ni} assemblies can be chemically engineered to fit the above requisites for conditional gates with no need of local control. Microscopic parameters are determined by a recently developed many-body ab-initio approach and used to simulate quantum gates. We find that these systems are optimal for proof-of-principle two-qubit experiments and can be exploited as building blocks of scalable architectures for quantum simulation.
Multi-Center Traffic Management Advisor Operational Field Test Results
NASA Technical Reports Server (NTRS)
Farley, Todd; Landry, Steven J.; Hoang, Ty; Nickelson, Monicarol; Levin, Kerry M.; Rowe, Dennis W.
2005-01-01
The Multi-Center Traffic Management Advisor (McTMA) is a research prototype system which seeks to bring time-based metering into the mainstream of air traffic control (ATC) operations. Time-based metering is an efficient alternative to traditional air traffic management techniques such as distance-based spacing (miles-in-trail spacing) and managed arrival reservoirs (airborne holding). While time-based metering has demonstrated significant benefit in terms of arrival throughput and arrival delay, its use to date has been limited to arrival operations at just nine airports nationally. Wide-scale adoption of time-based metering has been hampered, in part, by the limited scalability of metering automation. In order to realize the full spectrum of efficiency benefits possible with time-based metering, a much more modular, scalable time-based metering capability is required. With its distributed metering architecture, multi-center TMA offers such a capability.
FPGA cluster for high-performance AO real-time control system
NASA Astrophysics Data System (ADS)
Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.
2006-06-01
Whilst the high-throughput and low-latency requirements of the next generation of AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution with high performance on throughput and excellent predictability on latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.
Master-slave control scheme in electric vehicle smart charging infrastructure.
Chung, Ching-Yen; Chynoweth, Joshua; Chu, Chi-Cheng; Gadh, Rajit
2014-01-01
WINSmartEV is a software-based plug-in electric vehicle (PEV) monitoring, control, and management system. It not only incorporates intelligence at every level so that charge scheduling can avoid grid bottlenecks, but it also multiplies the number of PEVs that can be plugged into a single circuit. This paper proposes, designs, and executes many upgrades to WINSmartEV. These upgrades include new hardware that makes the level 1 and level 2 chargers faster, more robust, and more scalable. They also include algorithms that provide improved charge scheduling for the level 2 charger (EVSE) and an enhanced vehicle monitoring/identification module (VMM) that can automatically identify PEVs and authorize charging.
Monitoring service for the Gran Telescopio Canarias control system
NASA Astrophysics Data System (ADS)
Huertas, Manuel; Molgo, Jordi; Macías, Rosa; Ramos, Francisco
2016-07-01
The Monitoring Service collects, persists and propagates the telescope and instrument telemetry for the Gran Telescopio CANARIAS (GTC), an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). A new version of the Monitoring Service has been developed in order to improve performance, provide high availability, and guarantee fault tolerance and scalability to cope with a high volume of data. The architecture is based on a distributed in-memory data store with a Producer/Consumer design pattern. The producer generates the data samples. The consumers either persist the samples to a database for further analysis or propagate them to the consoles in the control room to monitor the state of the whole system.
Evanescent-field-modulated two-qubit entanglement in an emitters-plasmon coupled system.
Zhang, Fan; Ren, Juanjuan; Duan, Xueke; Zhao, Chen; Gong, Qihuang; Gu, Ying
2018-06-13
Scalable integrated quantum information networks call for controllable entanglement modulation at the subwavelength scale. To reduce laser disturbance among adjacent nanostructures, here we theoretically demonstrate two-qubit entanglement modulated by the evanescent field of a dielectric nanowire in an emitter-AgNP coupled system. This coupled system is considered as a nano-cavity system embedded in an evanescent vacuum. By varying the amplitude of the evanescent field, the concurrence of the steady-state entanglement can be modified from 0 to 0.75. Because the interaction between the emitters and the nanowire is much weaker than that inside the coupled system, the range of modulation for two-qubit entanglement is insensitive to their distance. The evanescent-field-controlled entangled-state engineering provides the possibility to avoid optical crosstalk for on-chip steady-state entanglement. © 2018 IOP Publishing Ltd.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... best demonstrate that they have the managerial and operational capacity, including significant and demonstrable scalability in their management, finances, systems, and infrastructure, to assume the...--Scalability in operations and management to perform timely, accurate, and comprehensive lender claims review...
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the process of movement is rather fast, as in cases of vehicle movement, sport biomechanics, and animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for obtaining high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four machine vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates of up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
Distributed PACS using distributed file system with hierarchical meta data servers.
Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato
2012-01-01
In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. With this mechanism, since operations do not always access the entire file, operations such as finding files and changing titles can be performed at high speed. At the same time, as a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct an integrated system. The system's file access also scales with the number and size of files. On the other hand, because the metadata is centralized, the metadata server is the weak point of this system. To solve this defect, hierarchical metadata servers are introduced. With this mechanism, not only is fault tolerance increased, but the scalability of file access is also increased. To evaluate the proposed system, a prototype using Gfarm was implemented, and its file search times were compared with those of Gfarm and NFS.
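The metadata/image split can be sketched with two in-memory stores standing in for the metadata servers and the distributed file system; all names and fields below are hypothetical, not the paper's schema.

```python
meta_store = {}    # stands in for the (hierarchical) metadata servers
image_store = {}   # stands in for the distributed file system (e.g. Gfarm)

def put(file_id, meta, image_bytes):
    """Store a 'DICOM file' as a small metadata record plus bulk image bytes."""
    meta_store[file_id] = dict(meta)
    image_store[file_id] = image_bytes

def find(**query):
    # metadata-only search: fast because the image bytes are never read
    return [fid for fid, m in meta_store.items()
            if all(m.get(k) == v for k, v in query.items())]

def rename(file_id, new_title):
    meta_store[file_id]['title'] = new_title   # image data is untouched

put('f1', {'patient': 'P01', 'title': 'chest CT'}, b'\x00' * 16)
put('f2', {'patient': 'P02', 'title': 'head MR'}, b'\x00' * 16)
```

Because searches and renames operate only on the small metadata records, integrating several PACSs reduces to merging the metadata stores, while image data stays where it is.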
NASA Astrophysics Data System (ADS)
Grasso, J. R.; Bachèlery, P.
Self-organized systems are often used to describe natural phenomena where power laws and scale invariant geometry are observed. The Piton de la Fournaise volcano shows power-law behavior in many aspects. These include the temporal distribution of eruptions, the frequency-size distributions of induced earthquakes, dikes, fissures, lava flows and interflow periods, all evidence of self-similarity over a finite scale range. We show that the bounds to scale-invariance can be used to derive geomechanical constraints on both the volcano structure and the volcano mechanics. We ascertain that the present magma bodies are multi-lens reservoirs in a quasi-eruptive condition, i.e. a marginally critical state. The scaling organization of dynamic fluid-induced observables on the volcano, such as fluid induced earthquakes, dikes and surface fissures, appears to be controlled by underlying static hierarchical structure (geology) similar to that proposed for fluid circulations in human physiology. The emergence of saturation lengths for the scalable volcanic observable argues for the finite scalability of complex naturally self-organized critical systems, including volcano dynamics.
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
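The abstract's key idea is that the search needs only simulator controls (reset, seeded step) and a likelihood signal, never the hidden state. A toy sketch of MCTS in that spirit follows; the simulator, reward shaping, and "event" condition are invented here for illustration and are not the paper's actual formulation.

```python
# Toy sketch of Monte Carlo Tree Search over a seeded simulator, in the
# spirit of the adaptive stress testing idea above. The simulator, reward,
# and "event" condition are invented; only reset/step controls and a
# likelihood signal are assumed, never the hidden state.
import math, random

class ToySim:
    """Hidden-state simulator: we may only reset, step with a seed, and
    ask whether the target event occurred."""
    def reset(self):
        self._x = 0
    def step(self, seed):
        rng = random.Random(seed)
        d = rng.choice([-1, 1, 2])        # stochastic disturbance
        self._x += d
        return math.log(1.0 / 3.0)       # log-likelihood of the disturbance
    def event(self):
        return self._x >= 4              # the "failure" we search for

def rollout(sim, prefix, horizon, seeds):
    sim.reset()
    ll = sum(sim.step(s) for s in prefix)
    path = list(prefix)
    while len(path) < horizon:
        s = random.choice(seeds)
        ll += sim.step(s)
        path.append(s)
    # reward: trajectory likelihood, plus a bonus if the event occurred
    return (ll + (100.0 if sim.event() else 0.0)), path

def mcts(sim, horizon=6, seeds=(0, 1, 2), iters=400):
    best_reward, best_path = -1e18, None
    counts, values, prefix = {}, {}, []
    for depth in range(horizon):
        for _ in range(iters // horizon):
            # UCB1 selection over the next seed to append to the prefix
            total = sum(counts.get((depth, s), 0) for s in seeds) + 1
            s = max(seeds, key=lambda a: values.get((depth, a), 0.0)
                    + 2.0 * math.sqrt(math.log(total) / (counts.get((depth, a), 0) + 1)))
            r, path = rollout(sim, prefix + [s], horizon, list(seeds))
            n = counts.get((depth, s), 0)
            values[(depth, s)] = (values.get((depth, s), 0.0) * n + r) / (n + 1)
            counts[(depth, s)] = n + 1
            if r > best_reward:
                best_reward, best_path = r, path
        prefix.append(max(seeds, key=lambda a: values.get((depth, a), -1e18)))
    return best_reward, best_path

random.seed(0)
reward, path = mcts(ToySim())
print(reward > 50)  # True when a likely event-reaching path was found
```

Controlling stochasticity through seeds is what lets the search treat disturbances as actions, which is the mechanism the abstract describes for systems with hidden state.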
Py4Syn: Python for synchrotrons.
Slepicka, H H; Canova, H F; Beniz, D B; Piton, J R
2015-09-01
In this report, Py4Syn, an open-source Python-based library for data acquisition, device manipulation, scan routines and other helper functions, is presented. Driven by ease-of-use and scalability ideals, Py4Syn offers a control-system-agnostic solution and a high level of customization for scans and data output, covering distinct techniques and facilities. Here, most of the library's functionalities are described, examples of use are shown and ideas for future implementations are presented.
2007-09-01
behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that...Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots...based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic
Control and design of multiple unmanned air vehicles for persistent surveillance
NASA Astrophysics Data System (ADS)
Nigam, Nikhil
Control of multiple autonomous aircraft for search and exploration is a topic of current research interest for applications such as weather monitoring, geographical surveys, search and rescue, tactical reconnaissance, and extra-terrestrial exploration, and the need to distribute sensing is driven by considerations of efficiency, reliability, cost and scalability. Hence, this problem has been extensively studied in the fields of controls and artificial intelligence. The task of persistent surveillance is different from a coverage/exploration problem, in that all areas need to be continuously searched, minimizing the time between visitations to each region in the target space. This distinction does not allow a straightforward application of most exploration techniques to the problem, although ideas from these methods can still be used. The use of aerial vehicles is motivated by their ability to cover larger spaces and their relative insensitivity to terrain. However, the dynamics of Unmanned Air Vehicles (UAVs) add complexity to the control problem. Most of the work in the literature decouples the vehicle dynamics and control policies, but their interaction is particularly interesting for a surveillance mission. Stochastic environments and UAV failures further enrich the problem by requiring the control policies to be robust, and this aspect is particularly important for hardware implementations. For a persistent mission, it becomes imperative to consider the range/endurance constraints of the vehicles. The coupling of the control policy with the endurance constraints of the vehicles is an aspect that has not been sufficiently explored. Design of UAVs for desirable mission performance is also an issue of considerable significance. The use of a single monolithic optimization for such a problem has practical limitations, and decomposition-based design is a potential alternative.
In this research, high-level control policies are devised that are scalable, reliable, efficient, and robust to changes in the environment. Most of the existing techniques that carry performance guarantees are not scalable or robust to changes. The scalable techniques are often heuristic in nature, resulting in a lack of reliability and performance. Our policies are tested in a multi-UAV simulation environment developed for this problem, and shown to be near-optimal in spite of being completely reactive in nature. We explicitly account for the coupling between aircraft dynamics and control policies as well, and suggest modifications to improve performance under dynamic constraints. A smart refueling policy is also developed to account for limited endurance, and large performance benefits are observed. The method is based on the solution of a linear program that can be efficiently solved online in a distributed setting, unlike previous work. The Vehicle Swarm Technology Laboratory (VSTL), a hardware testbed developed at Boeing Research and Technology for evaluating swarms of UAVs, is described next and used to test the control strategy in a real-world scenario. The simplicity and robustness of the strategy allows easy implementation and near replication of the performance observed in simulation. Finally, an architecture for system-of-systems design based on Collaborative Optimization (CO) is presented. Earlier work coupling operations and design has used frameworks that make certain assumptions not valid for this problem. The efficacy of our approach is illustrated through preliminary design results, and extension to more realistic settings is also demonstrated.
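The dissertation's actual linear program is not specified in the abstract. As a stand-in for the refueling-scheduling idea, a tiny brute-force search over schedules on an invented three-UAV toy problem makes the trade-off concrete: maximize time on station while no UAV ever runs out of fuel.

```python
# Illustrative stand-in for the refueling-policy idea above. The real
# method solves a linear program online; here a brute-force search over
# refueling schedules plays the same role on an invented 3-UAV toy.
import itertools

N_UAVS, HORIZON, TANK = 3, 6, 4

def evaluate(schedule):
    """schedule[t] = index of the UAV refueling at time t (or None)."""
    fuel = [TANK] * N_UAVS
    on_station = 0
    for t in range(HORIZON):
        for i in range(N_UAVS):
            if schedule[t] == i:
                fuel[i] = TANK          # refueling: off station this step
            else:
                fuel[i] -= 1            # surveilling burns one fuel unit
                if fuel[i] < 0:
                    return -1           # infeasible: a UAV ran dry
                on_station += 1
    return on_station

best = max(itertools.product(list(range(N_UAVS)) + [None], repeat=HORIZON),
           key=evaluate)
print(evaluate(best))  # 15: each UAV refuels exactly once out of 18 UAV-steps
```

Each UAV must refuel at least once within the horizon (it burns 6 units but carries 4), and only one can refuel per step, so the optimum loses exactly 3 of the 18 possible on-station UAV-steps. An LP formulation scales this reasoning to many vehicles where enumeration cannot.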
Scalability study of solid xenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, J.; Cease, H.; Jaskierny, W. F.
2015-04-01
We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman's technique reproduces a large-scale optically transparent solid xenon.
Scalability Assessments for the Malicious Activity Simulation Tool (MAST)
2012-09-01
the scalability characteristics of MAST. Specifically, we show that an exponential increase in clients using the MAST software does not impact network and system resources significantly. Additionally, we...
Schulz, Thomas C.; Young, Holly Y.; Agulnick, Alan D.; Babin, M. Josephine; Baetge, Emmanuel E.; Bang, Anne G.; Bhoumik, Anindita; Cepa, Igor; Cesario, Rosemary M.; Haakmeester, Carl; Kadoya, Kuniko; Kelly, Jonathan R.; Kerr, Justin; Martinson, Laura A.; McLean, Amanda B.; Moorman, Mark A.; Payne, Janice K.; Richardson, Mike; Ross, Kelly G.; Sherrer, Eric S.; Song, Xuehong; Wilson, Alistair Z.; Brandon, Eugene P.; Green, Chad E.; Kroon, Evert J.; Kelly, Olivia G.; D’Amour, Kevin A.; Robins, Allan J.
2012-01-01
Development of a human embryonic stem cell (hESC)-based therapy for type 1 diabetes will require the translation of proof-of-principle concepts into a scalable, controlled, and regulated cell manufacturing process. We have previously demonstrated that hESC can be directed to differentiate into pancreatic progenitors that mature into functional glucose-responsive, insulin-secreting cells in vivo. In this study we describe hESC expansion and banking methods and a suspension-based differentiation system, which together underpin an integrated scalable manufacturing process for producing pancreatic progenitors. This system has been optimized for the CyT49 cell line. Accordingly, qualified large-scale single-cell master and working cGMP cell banks of CyT49 have been generated to provide a virtually unlimited starting resource for manufacturing. Upon thaw from these banks, we expanded CyT49 for two weeks in an adherent culture format that achieves 50–100 fold expansion per week. Undifferentiated CyT49 were then aggregated into clusters in dynamic rotational suspension culture, followed by differentiation en masse for two weeks with a four-stage protocol. Numerous scaled differentiation runs generated reproducible and defined population compositions highly enriched for pancreatic cell lineages, as shown by examining mRNA expression at each stage of differentiation and flow cytometry of the final population. Islet-like tissue containing glucose-responsive, insulin-secreting cells was generated upon implantation into mice. By four- to five-months post-engraftment, mature neo-pancreatic tissue was sufficient to protect against streptozotocin (STZ)-induced hyperglycemia. In summary, we have developed a tractable manufacturing process for the generation of functional pancreatic progenitors from hESC on a scale amenable to clinical entry. PMID:22623968
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation.
Huang, Yin; Zheng, Ning; Cheng, Zhiqiang; Chen, Ying; Lu, Bingwei; Xie, Tao; Feng, Xue
2016-12-28
Flexible and stretchable electronics offer a wide range of unprecedented opportunities beyond conventional rigid electronics. Despite their vast promise, a significant bottleneck lies in the availability of a transfer printing technique to manufacture such devices in a highly controllable and scalable manner. Current technologies usually rely on manual stick-and-place and do not offer feasible mechanisms for precise and quantitative process control, especially when scalability is taken into account. Here, we demonstrate a spatioselective and programmable transfer strategy to print electronic microelements onto a soft substrate. The method takes advantage of automated direct laser writing to trigger localized heating of a micropatterned shape memory polymer adhesive stamp, allowing highly controlled and spatioselective switching of the interfacial adhesion. This, coupled to the proper tuning of the stamp properties, enables printing with perfect yield. The wide range adhesion switchability further allows printing of hybrid electronic elements, which is otherwise challenging given the complex interfacial manipulation involved. Our temperature-controlled transfer printing technique shows its critical importance and obvious advantages in the potential scale-up of device manufacturing. Our strategy opens a route to manufacturing flexible electronics with exceptional versatility and potential scalability.
Autocorrel I: A Neural Network Based Network Event Correlation Approach
2005-05-01
which concern any component of the network. 2.1.1 Existing Intrusion Detection Systems EMERALD [8] is a distributed, scalable, hierarchical, customizable...writing this paper, the updaters of this system had not released their correlation unit to the public. EMERALD explicitly divides statistical analysis... EMERALD, NetSTAT is scalable and composable. QuidSCOR [12] is an open-source IDS, though it requires a subscription from its publisher, Qualys Inc
Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM
2009-03-17
A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.
ExSTraCS 2.0: Description and Evaluation of a Scalable Learning Classifier System.
Urbanowicz, Ryan J; Moore, Jason H
2015-09-01
Algorithmic scalability is a major concern for any machine learning strategy in this age of 'big data'. A large number of potentially predictive attributes is emblematic of problems in bioinformatics, genetic epidemiology, and many other fields. Previously, ExSTraCS was introduced as an extended Michigan-style supervised learning classifier system that combined a set of powerful heuristics to successfully tackle the challenges of classification, prediction, and knowledge discovery in complex, noisy, and heterogeneous problem domains. While Michigan-style learning classifier systems are powerful and flexible learners, they are not considered to be particularly scalable. For the first time, this paper presents a complete description of the ExSTraCS algorithm and introduces an effective strategy to dramatically improve learning classifier system scalability. ExSTraCS 2.0 addresses scalability with (1) a rule specificity limit, (2) new approaches to expert knowledge guided covering and mutation mechanisms, and (3) the implementation and utilization of the TuRF algorithm for improving the quality of expert knowledge discovery in larger datasets. Performance over a complex spectrum of simulated genetic datasets demonstrated that these new mechanisms dramatically improve nearly every performance metric on datasets with 20 attributes and made it possible for ExSTraCS to reliably scale up to perform on related 200 and 2000-attribute datasets. ExSTraCS 2.0 was also able to reliably solve the 6, 11, 20, 37, 70, and 135 multiplexer problems, and did so in similar or fewer learning iterations than previously reported, with smaller finite training sets, and without using building blocks discovered from simpler multiplexer problems. Furthermore, ExSTraCS usability was made simpler through the elimination of previously critical run parameters.
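Mechanism (1), the rule specificity limit, can be sketched generically for a Michigan-style learning classifier system: when covering creates a new rule from a training instance, only a bounded number of attributes may be specified, and the rest become wildcards. The rule representation and limit value below are illustrative assumptions, not ExSTraCS's actual implementation.

```python
# Hedged sketch of a rule-specificity limit during covering, the generic
# idea behind mechanism (1) above. Representation and limit value are
# illustrative, not ExSTraCS's actual code.
import random

SPEC_LIMIT = 3   # at most 3 attributes may be specified in a new rule

def cover(instance, action, spec_limit=SPEC_LIMIT):
    """Create a rule matching `instance`, specifying only a bounded,
    randomly chosen subset of its attributes ('#' = wildcard)."""
    n = len(instance)
    k = min(spec_limit, n)
    specified = set(random.sample(range(n), random.randint(1, k)))
    condition = [instance[i] if i in specified else "#" for i in range(n)]
    return {"condition": condition, "action": action}

def matches(rule, instance):
    return all(c == "#" or c == v for c, v in zip(rule["condition"], instance))

random.seed(1)
inst = [1, 0, 1, 1, 0, 0, 1, 0]   # one row of a many-attribute dataset
rule = cover(inst, action=1)
print(matches(rule, inst))        # True by construction
```

Bounding specificity keeps rule generality high regardless of how many attributes the dataset has, which is one way such a limit helps the system scale to the 200 and 2000-attribute problems mentioned above.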
Scalable analysis of nonlinear systems using convex optimization
NASA Astrophysics Data System (ADS)
Papachristodoulou, Antonis
In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction on the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial or non-polynomial vector fields and switching systems, as well as estimation of the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST, a recently developed network congestion control scheme, so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations.
We show that axially constant perturbations of the Navier-Stokes equations for Hagen-Poiseuille flow are globally stable, even though the background noise is amplified as R^3, where R is the Reynolds number, giving a 'robust yet fragile' interpretation. We also propose a sum of squares methodology for the analysis of systems described by parabolic PDEs. We conclude this work with an account of future research.
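The Lyapunov-certificate idea underlying the thesis can be illustrated on a one-dimensional toy system; the thesis itself uses sum-of-squares programming, whereas this is only a sampled check on an invented example.

```python
# Toy numerical illustration of the Lyapunov-certificate idea above.
# System: dx/dt = -x**3.  Candidate certificate: V(x) = x**2.
# Analytically, dV/dt = 2*x*(-x**3) = -2*x**4 <= 0, so V certifies
# stability; below we merely confirm the sign condition on a grid.

def f(x):
    return -x**3

def V(x):
    return x**2

def Vdot(x):
    return 2 * x * f(x)        # chain rule: dV/dt along trajectories

samples = [i / 100.0 for i in range(-500, 501)]
print(all(Vdot(x) <= 0 for x in samples))        # True
print(all(V(x) > 0 for x in samples if x != 0))  # True: V positive definite
```

Sum of squares techniques replace this pointwise sampling with an algebraic proof: instead of checking signs on a grid, they search for a decomposition showing -dV/dt is a sum of squared polynomials, which a semidefinite program can find automatically.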
Real-Time Wavefront Control for the PALM-3000 High Order Adaptive Optics System
NASA Technical Reports Server (NTRS)
Truong, Tuan N.; Bouchez, Antonin H.; Dekany, Richard G.; Guiwits, Stephen R.; Roberts, Jennifer E.; Troy, Mitchell
2008-01-01
We present a cost-effective scalable real-time wavefront control architecture based on off-the-shelf graphics processing units hosted in an ultra-low latency, high-bandwidth interconnect PC cluster environment composed of modules written in the component-oriented language of nesC. The architecture enables full-matrix reconstruction of the wavefront at up to 2 kHz with latency under 250 μs for the PALM-3000 adaptive optics system, a state-of-the-art upgrade to the 5.1 meter Hale Telescope that consists of a 64 x 64 subaperture Shack-Hartmann wavefront sensor and a 3368 active actuator high order deformable mirror in series with a 241 active actuator tweeter DM. The architecture can easily scale up to support much larger AO systems at higher rates and lower latency.
Developing a scalable artificial photosynthesis technology through nanomaterials by design
NASA Astrophysics Data System (ADS)
Lewis, Nathan S.
2016-12-01
An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
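The abstract says data are distributed by partitioning a spatial index but does not show the scheme. A common way to realize this, sketched here under the assumption of a Morton (Z-order) encoding and a fixed node count (both illustrative, not the cluster's actual design), keeps spatially nearby tiles on the same node:

```python
# Sketch of partitioning data by a spatial index, the distribution idea
# described above. The Morton (Z-order) encoding and node count are
# illustrative assumptions, not the cluster's actual scheme.

def morton2(x, y, bits=16):
    """Interleave the bits of (x, y) into a Z-order key that keeps
    spatially nearby tiles near each other in key space."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def node_for_tile(x, y, n_nodes=8, bits=16):
    # Partition the Z-order key space evenly across cluster nodes.
    return morton2(x, y, bits) * n_nodes // (1 << (2 * bits))

# Adjacent tiles tend to land on the same node; far-apart ones need not.
print(node_for_tile(0, 0), node_for_tile(1, 1), node_for_tile(65535, 65535))
# prints: 0 0 7
```

Because spatial locality in the image translates into locality in key space, a vision algorithm reading a contiguous region mostly hits one node, which is the property that makes index-based partitioning attractive for connectome workloads.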
SP-100 - The national space reactor power system program in response to future needs
NASA Astrophysics Data System (ADS)
Armijo, J. S.; Josloff, A. T.; Bailey, H. S.; Matteo, D. N.
The SP-100 system has been designed to meet comprehensive and demanding NASA/DOD/DOE requirements. The key requirements include: nuclear safety for all mission phases, scalability from 10's to 100's of kWe, reliable performance at full power for seven years or partial power for ten years, survivability in civil or military threat environments, capability to operate autonomously for up to six months, capability to protect payloads from excessive radiation, and compatibility with shuttle and expendable launch vehicles. The authors address major progress in terms of design, flexibility/scalability, survivability, and development. These areas, with the exception of survivability, are discussed in detail. There has been significant improvement in the generic flight system design with substantial mass savings and simplification that enhance performance and reliability. Design activity has confirmed the scalability and flexibility of the system and the ability to efficiently meet NASA, AF, and SDIO needs. SP-100 development continues to make significant progress in all key technology areas.
Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas
NASA Astrophysics Data System (ADS)
Glasser, Alan H.
2008-11-01
Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple wave and for ideal and Hall MHD waves.
Wanted: Scalable Tracers for Diffusion Measurements
2015-01-01
Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
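The correlations the review mentions relate a globular biomolecule's diffusion coefficient to its molecular mass. A sketch of the standard Stokes-Einstein route follows; the compact-sphere assumption and the empirical radius prefactor are illustrative, and real correlations fit empirical exponents rather than exactly M^(1/3).

```python
# Sketch of predicting a mean diffusion coefficient from molecular mass
# via the Stokes-Einstein relation, the kind of correlation reviewed
# above. Treating the tracer as a compact sphere with radius ~ M^(1/3)
# is an illustrative assumption.
import math

KB = 1.380649e-23          # Boltzmann constant, J/K

def stokes_einstein_D(mass_kda, temp_k=298.0, eta_pa_s=1.0e-3):
    """D = kT / (6 * pi * eta * r), with hydrodynamic radius r scaling
    as M^(1/3); the 0.066 nm/Da^(1/3) prefactor is a rough
    globular-protein value (assumed here)."""
    r_m = 0.066e-9 * (mass_kda * 1000.0) ** (1.0 / 3.0)
    return KB * temp_k / (6 * math.pi * eta_pa_s * r_m)

# For a shape-preserving (scalable) tracer series, 8x the mass means
# 2x the radius, so D halves: the size dependence alone is isolated.
d1 = stokes_einstein_D(10.0)
d8 = stokes_einstein_D(80.0)
print(abs(d1 / d8 - 2.0) < 1e-9)   # True: D scales as M^(-1/3)
```

This is exactly the property the review defines: within a scalable series, only size varies, so the measured D-versus-M curve reflects the diffusion mechanism rather than differences in shape, branching, or surface chemistry.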
Requirements for an Integrated UAS CNS Architecture
NASA Technical Reports Server (NTRS)
Templin, Fred L.; Jain, Raj; Sheffield, Greg; Taboso-Ballesteros, Pedro; Ponchak, Denise
2017-01-01
Communications, Navigation and Surveillance (CNS) requirements must be developed in order to establish a CNS architecture supporting Unmanned Air Systems integration in the National Air Space (UAS in the NAS). These requirements must address cybersecurity, future communications, satellite-based navigation and APNT, and scalable surveillance and situational awareness. CNS integration, consolidation and miniaturization requirements are also important to support the explosive growth in small UAS deployment. Air Traffic Management (ATM) must also be accommodated to support critical Command and Control (C2) for Air Traffic Controllers (ATC). This document therefore presents UAS CNS requirements that will guide the architecture.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.
Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
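The driver layer with a common driver interface that the abstract describes is a standard pattern; a hedged sketch follows. The class and method names are illustrative assumptions, not PyEHR's actual API, and a trivial in-memory backend stands in for MongoDB or Elasticsearch.

```python
# Sketch of the common driver interface pattern described above. Names
# are illustrative assumptions, not PyEHR's actual API.
from abc import ABC, abstractmethod

class RecordDriver(ABC):
    """One interface; each backend (e.g. MongoDB, Elasticsearch) plugs in
    behind it, so the access layer never depends on a specific DBMS."""
    @abstractmethod
    def put(self, record_id, record): ...
    @abstractmethod
    def get(self, record_id): ...
    @abstractmethod
    def search(self, field, value): ...

class InMemoryDriver(RecordDriver):
    """Trivial backend used here in place of a real NoSQL driver."""
    def __init__(self):
        self._data = {}
    def put(self, record_id, record):
        self._data[record_id] = record
    def get(self, record_id):
        return self._data[record_id]
    def search(self, field, value):
        return [rid for rid, r in self._data.items() if r.get(field) == value]

driver: RecordDriver = InMemoryDriver()
driver.put("r1", {"archetype": "blood_pressure", "systolic": 120})
print(driver.search("archetype", "blood_pressure"))  # ['r1']
```

Decoupling the query layer from the storage engine this way is what allows the same "Constant Load" and "Constant Number of Records" tests to be run unchanged against either database backend.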
NASA Astrophysics Data System (ADS)
Huang, T.; Alarcon, C.; Quach, N. T.
2014-12-01
Capture, curate, and analysis are the typical activities performed at any given Earth Science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality of service requirements, and able to manage the life-cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capturing, data curation, and data analysis activities. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure that handles capturing and distribution of hundreds of thousands of satellite observations each day around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is NASA's Earth Observing System Data and Information System (EOSDIS)'s solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem for GIBS responsible for data capturing and imagery generation automation to support the EOSDIS' 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.
USDA-ARS?s Scientific Manuscript database
A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs ...
Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.
Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel
2015-11-01
This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced electroactive biofilm formation on the SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², seven times higher than on the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and a similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs.
Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware
NASA Technical Reports Server (NTRS)
Hereford, James; Gwaltney, David
2004-01-01
In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L²) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
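The O(L²) scaling follows directly once population size and generation count are each assumed to grow linearly with the programming-string length. A minimal sketch of that relationship (the linear coefficients are illustrative placeholders, not the paper's fitted values):

```python
def evaluations(L, c_pop=1.0, c_gen=1.0):
    """Estimate the number of circuit evaluations for programming-string
    length L, assuming (as a rough sketch) that both population size and
    generation count grow linearly with L, so total evaluations grow as
    O(L**2)."""
    pop = c_pop * L    # hypothetical population-size model
    gens = c_gen * L   # hypothetical generation-count model
    return pop * gens  # evaluations ~ c_pop * c_gen * L**2

# Under this model, doubling the string length quadruples the work:
ratio = evaluations(200) / evaluations(100)
```

This is why the abstract argues that faster processors alone cannot fix scalability: the evaluation count itself grows quadratically with L.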
Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation
NASA Astrophysics Data System (ADS)
Quiroz, Gregory
Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbach, Carsten; Frings, Wolfgang
2013-02-22
This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) so that it can be applied to peta-scale systems. PTP is an integrated development environment for parallel applications; it comprises code analysis, performance tuning, parallel debugging, and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work over high-latency connections. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer: applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating the outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner: users need to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS, and LoadLeveler, which have to be supported by both the monitoring and the job-controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.
NASA Astrophysics Data System (ADS)
MacDonald, B.; Finot, M.; Heiken, B.; Trowbridge, T.; Ackler, H.; Leonard, L.; Johnson, E.; Chang, B.; Keating, T.
2009-08-01
Skyline Solar Inc. has developed a novel silicon-based PV system to simultaneously reduce energy cost and improve scalability of solar energy. The system achieves high gain through a combination of high capacity factor and optical concentration. The design approach drives innovation not only into the details of the system hardware, but also into manufacturing and deployment-related costs and bottlenecks. The result of this philosophy is a modular PV system whose manufacturing strategy relies only on currently existing silicon solar cell, module, reflector and aluminum parts supply chains, as well as turnkey PV module production lines and metal fabrication industries that already exist at enormous scale. Furthermore, with a high gain system design, the generating capacity of all components is multiplied, leading to a rapidly scalable system. The product design and commercialization strategy cooperate synergistically to promise dramatically lower LCOE with substantially lower risk relative to materials-intensive innovations. In this paper, we will present the key design aspects of Skyline's system, including aspects of the optical, mechanical and thermal components, revealing the ease of scalability, low cost and high performance. Additionally, we will present performance and reliability results on modules and the system, using ASTM and UL/IEC methodologies.
Printed polymer photonic devices for optical interconnect systems
NASA Astrophysics Data System (ADS)
Subbaraman, Harish; Pan, Zeyu; Zhang, Cheng; Li, Qiaochu; Guo, L. J.; Chen, Ray T.
2016-03-01
Polymer photonic device fabrication usually relies on clean-room processes, including photolithography, e-beam lithography, reactive ion etching (RIE), and lift-off methods, which are expensive and limited to areas no larger than a wafer. Utilizing a novel and scalable printing process involving ink-jet printing and imprinting, we have fabricated polymer-based photonic interconnect components, such as electro-optic polymer modulators and ring resonator switches and thermo-optic polymer switch based delay networks, and demonstrated their operation. Specifically, a modulator operating at 15 MHz and a 2-bit delay network providing up to 35.4 ps of delay are presented. In this paper, we also discuss the manufacturing challenges, such as inspection and quality control, registration, and web control, that need to be overcome in order to make roll-to-roll manufacturing of flexible polymer photonic systems practically viable. We have overcome these challenges: using our in-house developed hardware and software tools, <10 μm alignment accuracy at a web speed of 5 m/min is demonstrated. Such a scalable roll-to-roll manufacturing scheme will enable the development of unique optoelectronic devices for a myriad of applications, including communication, sensing, medicine, security, imaging, energy, and lighting.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Remote Energy Monitoring System via Cellular Network
NASA Astrophysics Data System (ADS)
Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi
Recently, improving power saving and cost efficiency by monitoring the operation status of various facilities over a network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system over a cellular network. We have developed the proposed monitoring system, evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to 1/10 of that of best-effort transmission.
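The priority-plus-bandwidth idea can be illustrated with a greedy scheduler that sends the highest-priority sensor messages first within an instantaneous bandwidth budget. This is a hedged sketch, not the authors' actual scheme; the message layout (priority, size, id) and the greedy policy are assumptions:

```python
def schedule(messages, available_bw):
    """Greedy sketch of priority-based transmission under an instantaneous
    bandwidth budget: transmit higher-priority messages first (lower number
    = higher priority) and defer whatever no longer fits until the next
    reporting interval."""
    sent, deferred, used = [], [], 0
    for prio, size, msg_id in sorted(messages):
        if used + size <= available_bw:
            sent.append(msg_id)   # fits within the current budget
            used += size
        else:
            deferred.append(msg_id)  # defer to the next interval
    return sent, deferred

# Budget of 100 units: the alarm and status fit, the bulky log is deferred.
sent, deferred = schedule([(0, 40, "alarm"), (2, 80, "log"), (1, 30, "status")], 100)
```

When the cellular link's available bandwidth drops, low-priority data is deferred rather than delaying urgent sensor status, which is the intuition behind the reported 1/10 delay reduction.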
Integration of multi-interface conversion channel using FPGA for modular photonic network
NASA Astrophysics Data System (ADS)
Janicki, Tomasz; Pozniak, Krzysztof T.; Romaniuk, Ryszard S.
2010-09-01
The article discusses the integration of different types of interfaces with FPGA circuits using a reconfigurable communication platform. The solution has been implemented in practice in a single node of a distributed measurement system. The construction of the communication platform is presented along with selected hardware modules, described in VHDL and implemented in FPGA circuits. A graphical user interface (GUI) that allows a user to control the operation of the system is also described. In the final part of the article, selected practical solutions are introduced. The whole measurement system resides on a multi-gigabit optical network whose construction is highly modular, reconfigurable, and scalable.
NASA Technical Reports Server (NTRS)
Bugby, David C.; Farmer, Jeffery T.; Stouffer, Charles J.
2013-01-01
This paper describes the development and testing of a scalable thermal management architecture for instruments, subsystems, or systems that must operate in severe space environments with wide variations in sink temperature. The architecture involves a serial linkage of one or more hot-side variable conductance heat pipes (VCHPs) to one or more cold-side loop heat pipes (LHPs). The VCHPs provide wide-area heat acquisition, limited-distance thermal transport, modest against-gravity pumping, concentrated LHP startup heating, and high-switching-ratio variable conductance operation. The LHPs provide localized heat acquisition, long-distance thermal transport, significant against-gravity pumping, and high-switching-ratio variable conductance operation. The single-VCHP, single-LHP system described herein was developed to maintain thermal control of a small robotic lunar lander throughout the lunar day-night thermal cycle; it is also applicable to other variable-heat-rejection space missions in severe environments. Operationally, despite a 60-70% gas-blocked VCHP condenser during ON testing, the system was still able to provide 2-4 W/K ON conductance, 0.01 W/K OFF conductance, and an end-to-end switching ratio of 200-400. The paper provides a detailed analysis of VCHP condenser performance, which quantified the gas blockage. Future multi-VCHP/multi-LHP thermal management system concepts that provide power/transport-length scalability are also discussed.
Agent-Based Intelligent Interface for Wheelchair Movement Control
Barriuso, Alberto L.; De Paz, Juan F.
2018-01-01
People who suffer from any kind of motor difficulty face serious complications in moving autonomously in their daily lives. However, a growing number of research projects propose different powered wheelchair control systems. Despite the interest of the research community in the area, there is no platform that allows easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents on computationally limited devices. In order to validate the proper functioning of the proposed system, it has been integrated into a conventional wheelchair, and a set of alternative control interfaces has been developed and deployed, including a portable electroencephalography system, a voice interface, and a specifically designed smartphone application. A set of tests was conducted to assess both the adequacy of the platform and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603
2015-01-01
The scalable chemical vapor deposition of monolayer hexagonal boron nitride (h-BN) single crystals, with lateral dimensions of ∼0.3 mm, and of continuous h-BN monolayer films with large domain sizes (>25 μm) is demonstrated via an admixture of Si to Fe catalyst films. A simple thin-film Fe/SiO2/Si catalyst system is used to show that controlled Si diffusion into the Fe catalyst allows exclusive nucleation of monolayer h-BN with very low nucleation densities upon exposure to undiluted borazine. Our systematic in situ and ex situ characterization of this catalyst system establishes a basis for further rational catalyst design for compound 2D materials. PMID:25664483
Towards a Standard Mixed-Signal Parallel Processing Architecture for Miniature and Microrobotics.
Sadler, Brian M; Hoyos, Sebastian
2014-01-01
The conventional analog-to-digital conversion (ADC) and digital signal processing (DSP) architecture has led to major advances in miniature and micro-systems technology over the past several decades. The outlook for these systems is significantly enhanced by advances in sensing, signal processing, communications and control, and the combination of these technologies enables autonomous robotics on the miniature to micro scales. In this article we look at trends in the combination of analog and digital (mixed-signal) processing, and consider a generalized sampling architecture. Employing a parallel analog basis expansion of the input signal, this scalable approach is adaptable and reconfigurable, and is suitable for a large variety of current and future applications in networking, perception, cognition, and control.
Model-based Executive Control through Reactive Planning for Autonomous Rovers
NASA Technical Reports Server (NTRS)
Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola
2004-01-01
This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit, an RWI ATRV Junior at NASA Ames), presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to the complex coordination of functional modules needed for autonomous navigation and exploration.
Atom-by-atom assembly of defect-free one-dimensional cold atom arrays.
Endres, Manuel; Bernien, Hannes; Keesling, Alexander; Levine, Harry; Anschuetz, Eric R; Krajenbrink, Alexandre; Senko, Crystal; Vuletic, Vladan; Greiner, Markus; Lukin, Mikhail D
2016-11-25
The realization of large-scale fully controllable quantum systems is an exciting frontier in modern physical science. We use atom-by-atom assembly to implement a platform for the deterministic preparation of regular one-dimensional arrays of individually controlled cold atoms. In our approach, a measurement and feedback procedure eliminates the entropy associated with probabilistic trap occupation and results in defect-free arrays of more than 50 atoms in less than 400 milliseconds. The technique is based on fast, real-time control of 100 optical tweezers, which we use to arrange atoms in desired geometric patterns and to maintain these configurations by replacing lost atoms with surplus atoms from a reservoir. This bottom-up approach may enable controlled engineering of scalable many-body systems for quantum information processing, quantum simulations, and precision measurements. Copyright © 2016, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Doty, Matthew F.; Ma, Xiangyu; Zide, Joshua M. O.; Bryant, Garnett W.
2017-09-01
Self-assembled InAs Quantum Dots (QDs) are often called "artificial atoms" and have long been of interest as components of quantum photonic and spintronic devices. Although there has been substantial progress in demonstrating optical control of both single spins confined to a single QD and entanglement between two separated QDs, the path toward scalable quantum photonic devices based on spins remains challenging. Quantum Dot Molecules, which consist of two closely-spaced InAs QDs, have unique properties that can be engineered with the solid state analog of molecular engineering in which the composition, size, and location of both the QDs and the intervening barrier are controlled during growth. Moreover, applied electric, magnetic, and optical fields can be used to modulate, in situ, both the spin and optical properties of the molecular states. We describe how the unique photonic properties of engineered Quantum Dot Molecules can be leveraged to overcome long-standing challenges to the creation of scalable quantum devices that manipulate single spins via photonics.
Arrays of individually controlled ions suitable for two-dimensional quantum simulations
Mielenz, Manuel; Kalis, Henning; Wittemer, Matthias; ...
2016-06-13
A precisely controlled quantum system may reveal a fundamental understanding of another, less accessible system of interest. A universal quantum computer is currently out of reach, but an analogue quantum simulator that makes relevant observables, interactions and states of a quantum model accessible could permit insight into complex dynamics. Several platforms have been suggested and proof-of-principle experiments have been conducted. Here, we operate two-dimensional arrays of three trapped ions in individually controlled harmonic wells forming equilateral triangles with side lengths 40 and 80 μm. In our approach, which is scalable to arbitrary two-dimensional lattices, we demonstrate individual control of the electronic and motional degrees of freedom, preparation of a fiducial initial state with ion motion close to the ground state, as well as a tuning of couplings between ions within experimental sequences. Lastly, our work paves the way towards a quantum simulator of two-dimensional systems designed at will.
Automated software configuration in the MONSOON system
NASA Astrophysics Data System (ADS)
Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.
2004-09-01
MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
Model-Based Self-Tuning Multiscale Method for Combustion Control
NASA Technical Reports Server (NTRS)
Le, Dzu, K.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.
2006-01-01
A multi-scale representation of the combustor dynamics was used to create a self-tuning, scalable controller to suppress multiple instability modes in a liquid-fueled, aero engine-derived combustor operating at engine-like conditions. Its self-tuning features, designed to handle the uncertainties in the combustor dynamics and time delays, are essential for control performance and robustness. The controller was implemented to modulate a high-frequency fuel valve with feedback from dynamic pressure sensors. This scalable algorithm suppressed pressure oscillations of different instability modes by as much as 90 percent without the peak-splitting effect. The self-tuning logic guided the adjustment of controller parameters and converged quickly toward phase-lock for optimal suppression of the instabilities. The forced-response characteristics of the control model compare well with those of the test rig in both the frequency domain and the time domain.
Development of slow control system for the Belle II ARICH counter
NASA Astrophysics Data System (ADS)
Yonenaga, M.; Adachi, I.; Dolenec, R.; Hataya, K.; Iori, S.; Iwata, S.; Kakuno, H.; Kataura, R.; Kawai, H.; Kindo, H.; Kobayashi, T.; Korpar, S.; Križan, P.; Kumita, T.; Mrvar, M.; Nishida, S.; Ogawa, K.; Ogawa, S.; Pestotnik, R.; Šantelj, L.; Sumiyoshi, T.; Tabata, M.; Yusa, Y.
2017-12-01
A slow control system (SCS) for the Aerogel Ring Imaging Cherenkov (ARICH) counter in the Belle II experiment was newly developed and coded within the development frameworks of the Belle II DAQ software. The ARICH is based on 420 Hybrid Avalanche Photo-Detectors (HAPDs). Each HAPD has 144 pixels to be read out and requires 6 power supply (PS) channels, so a total of 2520 PS channels and 60,480 pixels have to be configured and controlled. Graphical User Interfaces (GUIs) with detector-oriented and device-oriented views were also implemented to ease detector operation. The ARICH SCS is in operation for detector construction and cosmic-ray tests. The paper describes the detailed features of the SCS and preliminary results of operating a reduced set of hardware, which confirm the scalability to the full detector.
Interferometer design and controls for pulse stacking in high power fiber lasers
NASA Astrophysics Data System (ADS)
Wilcox, Russell; Yang, Yawei; Dahlen, Dar; Xu, Yilun; Huang, Gang; Qiang, Du; Doolittle, Lawrence; Byrd, John; Leemans, Wim; Ruppe, John; Zhou, Tong; Sheikhsofla, Morteza; Nees, John; Galvanauskas, Almantas; Dawson, Jay; Chen, Diana; Pax, Paul
2017-03-01
In order to develop a design for a laser-plasma accelerator (LPA) driver, we demonstrate key technologies that enable fiber lasers to produce high-energy, ultrafast pulses. These technologies must be scalable and must operate in the presence of thermal drift, acoustic noise, and other perturbations typical of an operating system. We show that coherent pulse stacking (CPS), which requires optical interferometers, can be made robust by image-relaying, multipass optical cavities, and by optical phase control schemes that sense pulse train amplitudes from each cavity. A four-stage pulse stacking system using image-relaying cavities is controlled for 14 hours using a pulse-pattern sensing algorithm. For coherent addition of simultaneous ultrafast pulses, we introduce a new scheme using diffractive optics and show experimentally that four pulses can be added while preserving a pulse width of 128 fs.
Beyond NextGen: AutoMax Overview and Update
NASA Technical Reports Server (NTRS)
Kopardekar, Parimal; Alexandrov, Natalia
2013-01-01
Main Message: National and Global Needs - Develop scalable airspace operations management system to accommodate increased mobility needs, emerging airspace uses, mix, future demand. Be affordable and economically viable. Sense of Urgency. Saturation (delays), emerging airspace uses, proactive development. Autonomy is Needed for Airspace Operations to Meet Future Needs. Costs, time critical decisions, mobility, scalability, limits of cognitive workload. AutoMax to Accommodate National and Global Needs. Auto: Automation, autonomy, autonomicity for airspace operations. Max: Maximizing performance of the National Airspace System. Interesting Challenges and Path Forward.
A comparison of decentralized, distributed, and centralized vibro-acoustic control.
Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo
2010-11-01
Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
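The distinction between decentralized, clustered (distributed), and centralized velocity feedback can be pictured as the sparsity pattern of the gain matrix G in the feedback law u = -G v. The sketch below uses a purely illustrative 4-sensor/4-actuator example with made-up gain values:

```python
import numpy as np

# Sensor velocity signals (illustrative values).
v = np.array([0.1, -0.2, 0.05, 0.3])

# Decentralized: diagonal G, each actuator sees only its collocated sensor.
G_dec = 2.0 * np.eye(4)

# Clustered/distributed: block-diagonal G, sensors shared within clusters.
G_clu = np.kron(np.eye(2), np.full((2, 2), 1.0))

# Centralized: dense G, every actuator uses every sensor signal.
G_cen = np.full((4, 4), 0.5)

u_dec = -G_dec @ v
u_clu = -G_clu @ v
u_cen = -G_cen @ v
```

The three controllers differ only in how much of G is populated, which is why the abstract frames the debate as a trade between the performance of dense (centralized) gains and the scalability and robustness of sparse (decentralized) ones.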
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data
Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191
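PyEHR's driver layer with a common driver interface can be sketched as an abstract base class that the data access layer codes against. The method names here are illustrative, not PyEHR's actual API, and the in-memory backend is a stand-in for the MongoDB and Elasticsearch drivers mentioned in the abstract:

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Common driver interface: the data access layer depends only on this,
    so backends can be swapped without touching application code."""
    @abstractmethod
    def add_record(self, record): ...
    @abstractmethod
    def get_record(self, record_id): ...

class InMemoryDriver(Driver):
    """Toy backend for illustration; a real deployment would provide
    concrete drivers for MongoDB or Elasticsearch behind the same interface."""
    def __init__(self):
        self._store = {}
    def add_record(self, record):
        self._store[record["id"]] = record
        return record["id"]
    def get_record(self, record_id):
        return self._store.get(record_id)

db = InMemoryDriver()
rid = db.add_record({"id": "r1", "archetype": "openEHR-EHR-OBSERVATION"})
```

This decoupling of data descriptions from storage details is the design property the abstract attributes to PyEHR's driver layer.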
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data is important for scientific discoveries in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools and 2) provides almost linear scalability, in terms of either increased workload (data volume) or increased computing nodes, with both spatial decomposition strategies. We believe that the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big Earth science data in a highly scalable environment.
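The tile-based decomposition idea can be sketched as follows, assuming a fixed square tile size (an illustrative choice, not the framework's actual parameter); each tile's point list could then be handed to an independent worker such as a Hadoop map task:

```python
from collections import defaultdict

def tile_key(x, y, tile_size=100.0):
    """Map a point to the integer key of its square tile; this is a
    sketch of a tile-based spatial index, not the paper's implementation."""
    return (int(x // tile_size), int(y // tile_size))

def decompose(points, tile_size=100.0):
    """Group (x, y, z) LiDAR points by tile so tiles can be processed
    in parallel by independent workers."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[tile_key(x, y, tile_size)].append((x, y, z))
    return tiles

# Three points, 100 m tiles: two land in tile (0, 0), one in tile (1, 0).
tiles = decompose([(10, 20, 5), (150, 20, 7), (90, 90, 2)])
```

Because each tile is processed independently, adding workers scales throughput almost linearly, which matches the scalability behavior reported in the abstract.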
Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks
Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok
2016-01-01
Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113
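The member device's buffer-driven download control described above amounts to a simple threshold policy; the threshold values below are illustrative assumptions, not figures from the paper:

```python
def next_action(buffered_s, low=10.0, high=30.0):
    """Sketch of the member device's rate control: stop downloading once
    enough video (in seconds) is buffered, request more when the buffer
    runs low, and otherwise keep playing from the buffer."""
    if buffered_s >= high:
        return "stop"       # sufficiently buffered: pause the download
    if buffered_s <= low:
        return "request"    # running low: request additional video data
    return "idle"           # in between: neither fetch nor stop

# 35 s buffered -> stop; 5 s buffered -> request more data.
actions = (next_action(35.0), next_action(5.0), next_action(20.0))
```

Pausing downloads once the buffer is full is what frees shared WLAN airtime for other member devices, which is how the scheme improves streaming scalability.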
Transforming the NAS: The Next Generation Air Traffic Control System
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2004-01-01
The next-generation air traffic control system must be designed to safely and efficiently accommodate the large growth of traffic expected in the near future. It should be sufficiently scalable to contend with the factor-of-2 or more increase in demand expected by the year 2020. Analysis has shown that the current method of controlling air traffic cannot be scaled up to provide such levels of capacity. Therefore, achieving a large increase in capacity while also giving pilots increased freedom to optimize their flight trajectories requires a fundamental change in the way air traffic is controlled. The key to achieving a factor-of-2 or more increase in airspace capacity is to automate separation monitoring and control and to use an air-ground data link to send trajectories and clearances directly between ground-based and airborne systems. In addition to increasing capacity and offering greater flexibility in the selection of trajectories, this approach also has the potential to increase safety by reducing the controller and pilot errors that occur in routine monitoring and voice communication tasks.
Building Scalable Knowledge Graphs for Earth Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.
2017-12-01
Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain to other entities via relationships. Researchers can mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable knowledge graphs from the existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing knowledge graphs for Earth science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth science uses external resources (including metadata catalogs and controlled vocabularies) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented, as well as lessons learned.
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology, and its size is about 200 K gates.
Development of a Scalable Process Control System for Chemical Soil Washing to Remove Uranyl Oxide
2015-05-01
ICET also has a fully equipped counting laboratory for the evaluation of radioactive samples. Photographs of the 1-meter and 3-meter motorized...the leachate will be monitored using a gamma detector. There are numerous naturally occurring radioactive materials in soil. ICET has developed a...48.6% from 238U and 49.2% from 234U. The 238U in NU also contains daughters that are radioactive. This increases the activity of samples over long
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod
2006-04-15
In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, which enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
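The incremental-checkpoint idea (save only what changed since the last checkpoint) can be illustrated in user space. Hashing whole pages below is a stand-in for the kernel's dirty-page tracking in TICK, and the page size and function names are illustrative:

```python
import hashlib

PAGE = 4096  # illustrative page size in bytes

def page_hashes(mem):
    """Hash each fixed-size page of a memory buffer."""
    return [hashlib.sha1(mem[i:i + PAGE]).digest()
            for i in range(0, len(mem), PAGE)]

def incremental_checkpoint(mem, prev_hashes):
    """Save only pages whose content changed since the last checkpoint.

    Returns (dirty_pages, current_hashes); passing prev_hashes=None
    produces a full checkpoint.
    """
    cur = page_hashes(mem)
    dirty = {i: bytes(mem[i * PAGE:(i + 1) * PAGE])
             for i, h in enumerate(cur)
             if prev_hashes is None or h != prev_hashes[i]}
    return dirty, cur

mem = bytearray(4 * PAGE)                         # 4 pages of zeroed "memory"
full, hashes = incremental_checkpoint(mem, None)  # full checkpoint: all 4 pages
mem[PAGE] = 0xFF                                  # dirty exactly one page (page 1)
inc, _ = incremental_checkpoint(mem, hashes)      # incremental: only page 1
print(len(full), len(inc))  # → 4 1
```

The real kernel module avoids the hashing cost by intercepting page writes, which is what makes checkpoints as cheap as the abstract reports.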
Scalable real space pseudopotential density functional codes for materials in the exascale regime
NASA Astrophysics Data System (ADS)
Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack
Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet there are many systems of notably larger size where quantum mechanical accuracy is desired but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, and mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
Schröder, Tim; Trusheim, Matthew E.; Walsh, Michael; Li, Luozhou; Zheng, Jiabao; Schukraft, Marco; Sipahigil, Alp; Evans, Ruffin E.; Sukachev, Denis D.; Nguyen, Christian T.; Pacheco, Jose L.; Camacho, Ryan M.; Bielejec, Edward S.; Lukin, Mikhail D.; Englund, Dirk
2017-01-01
The controlled creation of defect centre—nanocavity systems is one of the outstanding challenges for efficiently interfacing spin quantum memories with photons for photon-based entanglement operations in a quantum network. Here we demonstrate direct, maskless creation of atom-like single silicon vacancy (SiV) centres in diamond nanostructures via focused ion beam implantation with ∼32 nm lateral precision and <50 nm positioning accuracy relative to a nanocavity. We determine the Si+ ion to SiV centre conversion yield to be ∼2.5% and observe a 10-fold conversion yield increase by additional electron irradiation. Low-temperature spectroscopy reveals inhomogeneously broadened ensemble emission linewidths of ∼51 GHz and close to lifetime-limited single-emitter transition linewidths down to 126±13 MHz corresponding to ∼1.4 times the natural linewidth. This method for the targeted generation of nearly transform-limited quantum emitters should facilitate the development of scalable solid-state quantum information processors. PMID:28548097
Emerging Technologies for Assembly of Microscale Hydrogels
Kavaz, Doga; Demirel, Melik C.; Demirci, Utkan
2013-01-01
Assembly of cell-encapsulating building blocks (i.e., microscale hydrogels) has significant applications in areas including regenerative medicine, tissue engineering, and cell-based in vitro assays for pharmaceutical research and drug discovery. Inspired by the repeating functional units observed in native tissues and biological systems (e.g., the lobule in liver, the nephron in kidney), assembly technologies aim to generate complex tissue structures by organizing microscale building blocks. Novel assembly technologies enable fabrication of engineered tissue constructs with controlled properties, including tunable microarchitectural and predefined compositional features. Recent advances in micro- and nanoscale technologies have enabled engineering of microgel-based three-dimensional (3D) constructs. There is a need for high-throughput and scalable methods to assemble microscale units with a complex 3D microarchitecture. Emerging assembly methods include novel technologies based on microfluidics, acoustic and magnetic fields, nanotextured surfaces, and surface tension. In this review, we survey emerging microscale hydrogel assembly methods offering rapid, scalable microgel assembly in 3D, and provide future perspectives and discuss potential applications. PMID:23184717
Energy-efficient quantum computing
NASA Astrophysics Data System (ADS)
Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko
2017-04-01
In the near future, one of the major challenges in realizing large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction in cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limits on energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error was considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by a qubit driving scheme that reuses and corrects drive pulses. Specifically, our method reduces the average energy consumption per gate operation without increasing the average gate error. Our work thus shows that precise, scalable control of quantum systems can, in principle, be implemented without introducing excessive heat or decoherence.
Generating a Reduced Gravity Environment on Earth
NASA Technical Reports Server (NTRS)
Dungan, Larry K.; Cunningham, Tom; Poncia, Dina
2010-01-01
Since the 1950s several reduced gravity simulators have been designed and utilized in preparing humans for spaceflight and in reduced gravity system development. The Active Response Gravity Offload System (ARGOS) is the newest and most realistic gravity offload simulator. ARGOS provides three degrees of motion within the test area and is scalable for full building deployment. The inertia of the overhead system is eliminated by an active motor and control system. This presentation will discuss what ARGOS is, how it functions, and the unique challenges of interfacing to the human. Test data and video for human and robotic systems will be presented. A major variable in the human machine interaction is the interface of ARGOS to the human. These challenges along with design solutions will be discussed.
Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance.
Vandersypen, L M; Steffen, M; Breyta, G; Yannoni, C S; Sherwood, M H; Chuang, I L
The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes. Quantum computers, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm. Although important for the study of quantum computers, experimental demonstration of this algorithm has proved elusive. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects in our system.
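Shor's reduction from factoring to order finding, which the experiment instantiates for N = 15, can be checked classically. The brute-force `while` loop below stands in for the quantum order-finding step (the only part that needs a quantum computer); the rest is the classical post-processing the algorithm prescribes:

```python
from math import gcd

def factor_via_order(N, a):
    """Factor N by finding the order r of a mod N, then taking
    gcd(a**(r//2) ± 1, N), as in Shor's classical post-processing.
    Order finding here is brute force, which is fine for N = 15.
    """
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a shares a factor with N
    r, x = 1, a % N
    while x != 1:                 # brute-force order finding
        x = (x * a) % N
        r += 1
    if r % 2:
        return None               # odd order: pick another a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p in (1, N) or q in (1, N):
        return None               # trivial factors: pick another a
    return min(p, q), max(p, q)

print(factor_via_order(15, 7))    # → (3, 5)
```

For N = 15 and a = 7 the order is r = 4, so the factors fall out of gcd(7² ± 1, 15), matching the result demonstrated in the NMR experiment.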
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
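The conventional (serial) KMC step that the abstract contrasts with SPKMC works roughly as follows; note that the sampled time increment shrinks as the total rate (and hence system size) grows, which is exactly the scalability problem SPKMC addresses. The rates and seed below are arbitrary:

```python
import math
import random

def kmc_step(rates, rng):
    """One step of the standard serial KMC (Gillespie) algorithm:
    pick an event with probability proportional to its rate, then
    advance time by an exponentially distributed increment.
    """
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, 0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total  # time step scales as 1/total
    return chosen, dt

rng = random.Random(42)
event, dt = kmc_step([1.0, 3.0], rng)
print(event in (0, 1), dt > 0)  # → True True
```

Since `total` grows with the number of possible events, simulated time per step shrinks as the system grows, whereas SPKMC advances many spatial domains synchronously to keep the time scale size-independent.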
AsyncStageOut: Distributed user data management for CMS Analysis
NASA Astrophysics Data System (ADS)
Riahi, H.; Wildish, T.; Ciangottini, D.; Hernández, J. M.; Andreeva, J.; Balcas, J.; Karavakis, E.; Mascheroni, M.; Tanasijczuk, A. J.; Vaandering, E. W.
2015-12-01
AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
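The core loop of an MC photon transport kernel can be sketched serially. The optical coefficients, the weight cutoff, and the infinite homogeneous medium are simplifying assumptions here; the actual platform simulates 3-D heterogeneous media, tracks scattering directions, and runs millions of such photons as parallel OpenCL work-items:

```python
import math
import random

def photon_path_length(mu_a, mu_s, rng, wmin=1e-4):
    """Total path length of one photon in an infinite homogeneous
    medium, using exponential free-path sampling and absorption
    weighting (a minimal sketch of the MC transport kernel).
    """
    mu_t = mu_a + mu_s                        # total interaction coeff.
    weight, path = 1.0, 0.0
    while weight > wmin:
        step = -math.log(rng.random()) / mu_t  # free path ~ Exp(mu_t)
        path += step
        weight *= mu_s / mu_t                  # deposit absorbed fraction
    return path

rng = random.Random(1)
p = photon_path_length(mu_a=0.01, mu_s=10.0, rng=rng)
print(p > 0)  # → True
```

Because every photon history is independent, the algorithm parallelizes almost perfectly across CPU and GPU devices, which is what the heterogeneous OpenCL design exploits.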
Soenksen, L R; Kassis, T; Noh, M; Griffith, L G; Trumper, D L
2018-03-13
Precise fluid height sensing in open-channel microfluidics has long been a desirable feature for a wide range of applications. However, performing accurate measurements of the fluid level in small-scale reservoirs (<1 mL) has proven to be an elusive goal, especially if direct fluid-sensor contact needs to be avoided. In particular, gravity-driven systems used in several microfluidic applications to establish pressure gradients and impose flow remain open-loop and largely unmonitored due to these sensing limitations. Here we present an optimized self-shielded coplanar capacitive sensor design and automated control system to provide submillimeter fluid-height resolution (∼250 μm) and control of small-scale open reservoirs without the need for direct fluid contact. Results from testing and validation of our optimized sensor and system also suggest that accurate fluid height information can be used to robustly characterize, calibrate and dynamically control a range of microfluidic systems with complex pumping mechanisms, even in cell culture conditions. Capacitive sensing technology provides a scalable and cost-effective way to enable continuous monitoring and closed-loop feedback control of fluid volumes in small-scale gravity-dominated wells in a variety of microfluidic applications.
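A coplanar capacitive level sensor ultimately maps a capacitance reading to a fluid height. A minimal linear calibration sketch follows, with wholly hypothetical constants, since the paper's sensor response is characterized experimentally:

```python
def height_from_capacitance(c_pf, c_empty_pf, pf_per_mm):
    """Convert a capacitance reading (pF) to fluid height (mm)
    via a linear calibration: hypothetical constants stand in for
    the experimentally measured sensor response.
    """
    return (c_pf - c_empty_pf) / pf_per_mm

# Example calibration: 2.0 pF with no fluid, 0.5 pF per mm of fluid.
h = height_from_capacitance(c_pf=4.5, c_empty_pf=2.0, pf_per_mm=0.5)
print(h)  # → 5.0
```

In a closed-loop setup, this height estimate would feed a controller that adjusts pumping to hold the reservoir at a target level.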
NASA Technical Reports Server (NTRS)
Aiken, Alexander
2001-01-01
The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analyzing software, in which the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions, these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application, using coarse and fast but still adequate methods at the largest scales and reserving more precise but more expensive methods for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system at smaller scales. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (namely, the discovery of bugs in real programs) was obtained with excellent scalability.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Robot formation control in stealth mode with scalable team size
NASA Astrophysics Data System (ADS)
Yu, Hongjun; Shi, Peng; Lim, Cheng-Chew
2016-11-01
In situations where robots in a formation need to remain electromagnetically silent, communication channels become unavailable. Moreover, because passive displacement sensors are used, limited sensing ranges are inevitable owing to power constraints and limited noise reduction. To address the formation control problem for a scalable team of robots subject to these restrictions, a flexible strategy is necessary. In this paper, under the assumption that data transmission among the robots is unavailable, a novel controller and a protocol are designed that do not rely on communication. As the controller only drives the robots to a partially desired formation, a distributed coordination protocol is proposed to resolve the remaining discrepancies. It is shown that the effectiveness of the controller and the protocol relies on the formation connectivity, and a condition on the sensing range is given. Simulations illustrate the feasibility and advantages of the new design scheme.
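Communication-free, displacement-based formation control can be sketched for the planar case: each robot measures only relative displacements to visible neighbors and steers toward desired offsets. The offsets, gain, and two-robot sensing graph below are illustrative; the paper's controller and coordination protocol additionally handle limited sensing ranges and partial formations:

```python
def formation_step(pos, offsets, neighbors, gain=0.2):
    """One discrete step of displacement-based formation control.

    pos:       current (x, y) of each robot
    offsets:   desired position of each robot in the formation frame
    neighbors: sensing graph (robot i sees robots in neighbors[i])
    Only relative measurements pos[j] - pos[i] are used: no channel.
    """
    new = []
    for i, p in enumerate(pos):
        ux = uy = 0.0
        for j in neighbors[i]:
            # error between sensed displacement and desired displacement
            ux += (pos[j][0] - p[0]) - (offsets[j][0] - offsets[i][0])
            uy += (pos[j][1] - p[1]) - (offsets[j][1] - offsets[i][1])
        new.append((p[0] + gain * ux, p[1] + gain * uy))
    return new

# Two robots that should settle 1.0 apart on the x-axis.
pos = [(0.0, 0.0), (3.0, 0.0)]
offsets = [(0.0, 0.0), (1.0, 0.0)]
neighbors = [[1], [0]]
for _ in range(50):
    pos = formation_step(pos, offsets, neighbors)
separation = pos[1][0] - pos[0][0]
print(round(separation, 3))  # → 1.0
```

Convergence here depends on the sensing graph being connected, mirroring the connectivity condition the paper establishes.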
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
NASA Astrophysics Data System (ADS)
Hosford, Kyle S.
Clean distributed generation power plants can provide a much needed balance to our energy infrastructure in the future. A high-temperature fuel cell and an absorption chiller can be integrated to create an ideal combined cooling, heat, and power system that is efficient, quiet, fuel flexible, scalable, and environmentally friendly. With few real-world installations of this type, research remains to identify the best integration and operating strategy and to evaluate the economic viability and market potential of this system. This thesis informs and documents the design of a high-temperature fuel cell and absorption chiller demonstration system at a generic office building on the University of California, Irvine (UCI) campus. This work details the extension of prior theoretical work to a financially-viable power purchase agreement (PPA) with regard to system design, equipment sizing, and operating strategy. This work also addresses the metering and monitoring for the system showcase and research and details the development of a MATLAB code to evaluate the economics associated with different equipment selections, building loads, and economic parameters. The series configuration of a high-temperature fuel cell, heat recovery unit, and absorption chiller with chiller exhaust recirculation was identified as the optimal system design for the installation in terms of efficiency, controls, ducting, and cost. The initial economic results show that high-temperature fuel cell and absorption chiller systems are already economically competitive with utility-purchased generation, and a brief case study of a southern California hospital shows that the systems are scalable and viable for larger stationary power applications.
Optoelectronic Fibers via Selective Amplification of In-Fiber Capillary Instabilities.
Wei, Lei; Hou, Chong; Levy, Etgar; Lestoquoy, Guillaume; Gumennik, Alexander; Abouraddy, Ayman F; Joannopoulos, John D; Fink, Yoel
2017-01-01
Thermally drawn metal-insulator-semiconductor fibers provide a scalable path to functional fibers. Here, a ladder-like metal-semiconductor-metal photodetecting device is formed inside a single silica fiber in a controllable and scalable manner, achieving a high density of optoelectronic components over the entire fiber length and operating at a bandwidth of 470 kHz, orders of magnitude larger than any other drawn fiber device. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A multiplexed microfluidic system for evaluation of dynamics of immune-tumor interactions.
Moore, N; Doty, D; Zielstorff, M; Kariv, I; Moy, L Y; Gimbel, A; Chevillet, J R; Lowry, N; Santos, J; Mott, V; Kratchman, L; Lau, T; Addona, G; Chen, H; Borenstein, J T
2018-05-25
Recapitulation of the tumor microenvironment is critical for probing mechanisms involved in cancer, and for evaluating the tumor-killing potential of chemotherapeutic agents, targeted therapies and immunotherapies. Microfluidic devices have emerged as valuable tools for both mechanistic studies and for preclinical evaluation of therapeutic agents, due to their ability to precisely control drug concentrations and gradients of oxygen and other species in a scalable and potentially high-throughput manner. Most existing in vitro microfluidic cancer models are composed of cultured cancer cells embedded in a physiologically relevant matrix, collocated with vascular-like structures. However, the recent emergence of immune checkpoint inhibitors (ICI) as a powerful therapeutic modality against many cancers has created a need for preclinical in vitro models that accommodate interactions between tumors and immune cells, particularly for assessment of unprocessed tumor fragments harvested directly from patient biopsies. Here we report on a microfluidic model, termed EVIDENT (ex vivo immuno-oncology dynamic environment for tumor biopsies), that accommodates up to 12 separate tumor biopsy fragments interacting with flowing tumor-infiltrating lymphocytes (TILs) in a dynamic microenvironment. Flow control is achieved with a single pump in a simple and scalable configuration, and the entire system is constructed using low-sorption materials, addressing two principal concerns with existing microfluidic cancer models. The system sustains tumor fragments for multiple days, and permits real-time, high-resolution imaging of the interaction between autologous TILs and tumor fragments, enabling mapping of TIL-mediated tumor killing and testing of various ICI treatments versus tumor response. Custom image analytic algorithms based on machine learning reported here provide automated and quantitative assessment of experimental results.
Initial studies indicate that the system is capable of quantifying temporal levels of TIL infiltration and tumor death, and that the EVIDENT model mimics the known in vivo tumor response to anti-PD-1 ICI treatment of flowing TILs relative to isotype control treatments for syngeneic mouse MC38 tumors.
Citizen science provides a reliable and scalable tool to track disease-carrying mosquitoes.
Palmer, John R B; Oltra, Aitana; Collantes, Francisco; Delgado, Juan Antonio; Lucientes, Javier; Delacour, Sarah; Bengoa, Mikel; Eritja, Roger; Bartumeus, Frederic
2017-10-24
Recent outbreaks of Zika, chikungunya and dengue highlight the importance of better understanding the spread of disease-carrying mosquitoes across multiple spatio-temporal scales. Traditional surveillance tools are limited by jurisdictional boundaries and cost constraints. Here we show how a scalable citizen science system can solve this problem by combining citizen scientists' observations with expert validation and correcting for sampling effort. Our system provides accurate early warning information about the Asian tiger mosquito (Aedes albopictus) invasion in Spain, well beyond that available from traditional methods, and vital for public health services. It also provides estimates of tiger mosquito risk comparable to those from traditional methods but more directly related to the human-mosquito encounters that are relevant for epidemiological modelling and scalable enough to cover the entire country. These results illustrate how powerful public participation in science can be and suggest citizen science is positioned to revolutionize mosquito-borne disease surveillance worldwide.
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted considerable attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size, and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
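The complete subtree method referenced above (from the Naor-Naor-Lotspiech subset-cover framework) can be sketched concretely: the non-revoked users are covered by the minimal set of subtree roots, in a full binary tree of users, whose subtrees contain no revoked leaf. A minimal illustrative sketch in Python, assuming heap-style node numbering (not code from the paper):

```python
def complete_subtree_cover(depth, revoked):
    """Complete Subtree (CS) cover: the minimal set of subtree roots whose
    leaves are exactly the non-revoked users of a full binary tree.
    Nodes are numbered heap-style: root = 1, children of v are 2v and 2v+1;
    leaves (users) occupy [2**depth, 2**(depth+1) - 1]."""
    n_leaves = 1 << depth
    if not revoked:
        return {1}  # no revocation: the root subtree covers everyone
    # Steiner tree: every ancestor of a revoked leaf, including the leaf
    steiner = set()
    for leaf in revoked:
        v = leaf
        while v >= 1:
            steiner.add(v)
            v //= 2
    # Cover = children of Steiner-tree nodes that are not themselves in it
    cover = set()
    for v in steiner:
        for child in (2 * v, 2 * v + 1):
            if child < 2 * n_leaves and child not in steiner:
                cover.add(child)
    return cover
```

For r revoked users out of N, the resulting cover contains at most about r·log2(N/r) nodes, which is what keeps ciphertext and key-update costs scalable in CS-based revocation.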
Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya
2006-01-01
We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified, and an approach to implementing these inferences using an OWL-based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored, and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integrating clinical decision support functionality with the invoking application and the underlying clinical data repository, along with their associated trade-offs, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Perumalla, Kalyan S; Hirshman, Steven Paul
2013-01-01
Simulations that require solutions of block tridiagonal systems of equations rely on fast parallel solvers for runtime efficiency. Leading parallel solvers that are highly effective for general systems of equations, dense or sparse, are limited in scalability when applied to block tridiagonal systems. This paper presents scalability results as well as detailed analyses of two parallel solvers that exploit the special structure of block tridiagonal matrices to deliver superior performance, often by orders of magnitude. A rigorous analysis of their relative parallel runtimes is shown to reveal the existence of a critical block size that separates the parameter space spanned by the number of block rows, the block size and the processor count, into distinct regions that favor one or the other of the two solvers. Dependence of this critical block size on the above parameters as well as on machine-specific constants is established. These formal insights are supported by empirical results on up to 2,048 cores of a Cray XT4 system. To the best of our knowledge, this is the highest reported scalability for parallel block tridiagonal solvers to date.
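As a baseline for the parallel solvers analyzed above, the sequential block variant of the Thomas algorithm illustrates the special structure those solvers exploit. The following NumPy sketch (an illustrative sequential baseline, not the paper's parallel algorithms) performs block forward elimination followed by block back substitution:

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Sequential block-Thomas solve of a block tridiagonal system.
    B[i] are the diagonal blocks, A[i] the sub-diagonal blocks (A[0] unused),
    C[i] the super-diagonal blocks (C[-1] unused); d holds the RHS blocks.
    Forward elimination then back substitution: O(n * b^3) work for
    n block rows of block size b."""
    n = len(B)
    B = [blk.copy() for blk in B]   # avoid mutating caller's blocks
    d = [rhs.copy() for rhs in d]
    for i in range(1, n):           # forward elimination
        m = A[i] @ np.linalg.inv(B[i - 1])
        B[i] = B[i] - m @ C[i - 1]
        d[i] = d[i] - m @ d[i - 1]
    x = [None] * n                  # back substitution
    x[-1] = np.linalg.solve(B[-1], d[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(B[i], d[i] - C[i] @ x[i + 1])
    return np.concatenate(x)
```

The per-row dependency of the elimination sweep is exactly what limits naive parallelization and motivates the structure-exploiting solvers and the critical-block-size analysis the abstract describes.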
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2013-11-01
The proliferation of GPS-enabled devices, and the rapid improvement of scientific instruments, have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.
Chaudhury, Arindam; Kongchan, Natee; Gengler, Jon P.; Mohanty, Vakul; Christiansen, Audrey E.; Fachini, Joseph M.; Martin, James F.; Neilson, Joel R.
2014-01-01
Regulation of messenger ribonucleic acid (mRNA) subcellular localization, stability and translation is a central aspect of gene expression. Much of this control is mediated via recognition of mRNA 3′ untranslated regions (UTRs) by microRNAs (miRNAs) and RNA-binding proteins. The gold standard approach to assess the regulation imparted by a transcript's 3′ UTR is to fuse the UTR to a reporter coding sequence and assess the relative expression of this reporter as compared to a control. Yet, transient transfection approaches or the use of highly active viral promoter elements may overwhelm a cell's post-transcriptional regulatory machinery in this context. To circumvent this issue, we have developed and validated a novel, scalable piggyBac-based vector for analysis of 3′ UTR-mediated regulation in vitro and in vivo. The vector delivers three independent transcription units to the target genome—a selection cassette, a turboGFP control reporter and an experimental reporter expressed under the control of a 3′ UTR of interest. The pBUTR (piggyBac-based 3′ UnTranslated Region reporter) vector performs robustly as a siRNA/miRNA sensor in established in vitro models of post-transcriptional regulation, and in both arrayed and pooled screening approaches. The vector is robustly expressed as a transgene during murine embryogenesis, highlighting its potential usefulness for revealing post-transcriptional regulation in an in vivo setting. PMID:24753411
Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J
2011-01-01
The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Furthermore, its importance increases when the system involves a group of heterogeneous robots that must cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of maintenance tasks in agricultural environments. Important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is supported by the proposed control system. These concepts are realized in the new architecture by integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA). HLA can be considered the most important part, because it not only provides data distribution and implicit communication among the parts of the system but also allows simulated and real entities to operate simultaneously, thus enabling the use of hybrid systems in the development of applications.
Toward Scalable Benchmarks for Mass Storage Systems
NASA Technical Reports Server (NTRS)
Miller, Ethan L.
1996-01-01
This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.
Security middleware infrastructure for DICOM images in health information systems.
Kallepalli, Vijay N V; Ehikioya, Sylvanus A; Camorlinga, Sergio; Rueda, Jose A
2003-12-01
In health care, it is mandatory to maintain the privacy and confidentiality of medical data. To achieve this, a fine-grained access control and an access log for accessing medical images are two important aspects that need to be considered in health care systems. Fine-grained access control provides access to medical data only to authorized persons based on priority, location, and content. A log captures each attempt to access medical data. This article describes an overall middleware infrastructure required for secure access to Digital Imaging and Communication in Medicine (DICOM) images, with an emphasis on access control and log maintenance. We introduce a hybrid access control model that combines the properties of two existing models. A trust relationship between hospitals is used to make the hybrid access control model scalable across hospitals. We also discuss events that have to be logged and where the log has to be maintained. A prototype of security middleware infrastructure is implemented.
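To illustrate the flavor of fine-grained, logged access control of the kind described above, the following Python sketch combines role, location, and content sensitivity and records every attempt. The field names and rules are hypothetical illustrations, not the paper's actual hybrid model or DICOM schema:

```python
def can_access(user, image, access_log):
    """Illustrative fine-grained access check: grant access only when
    role, location, and content sensitivity all permit it, and append
    every attempt (allowed or denied) to the access log, as the
    middleware's log-maintenance requirement demands."""
    allowed = (
        user['role'] in image['permitted_roles']
        and user['location'] in image['permitted_locations']
        and user['clearance'] >= image['sensitivity']
    )
    access_log.append({'user': user['id'], 'image': image['id'],
                       'allowed': allowed})
    return allowed
```

The key design point mirrored here is that denial is also logged: auditability requires capturing attempts, not just grants.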
Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.
Offset Printing Plate Quality Sensor on a Low-Cost Processor
Poljak, Jelena; Botella, Guillermo; García, Carlos; Poljaček, Sanja Mahović; Prieto-Matías, Manuel; Tirado, Francisco
2013-01-01
The aim of this work is to develop a microprocessor-based sensor that measures the quality of the offset printing plate through the introduction of different image analysis applications. The main features of the presented system are the low cost, the low amount of power consumption, its modularity and easy integration with other industrial modules for printing plates, and its robustness against noise environments. For the sake of clarity, a viability analysis of previous software is presented through different strategies, based on dynamic histogram and Hough transform. This paper provides performance and scalability data compared with existing costly commercial devices. Furthermore, a general overview of quality control possibilities for printing plates is presented and could be useful to a system where such controls are regularly conducted. PMID:24284766
Towards a Scalable, Biomimetic, Antibacterial Coating
NASA Astrophysics Data System (ADS)
Dickson, Mary Nora
Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic leeching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meets the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces; 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill, or prevent biofilm formation by, E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device.
The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria, without the use of any chemical antibiotic agents. Such nanotopographic coatings can be applied to implantable polymer medical devices with scalable, commercializable processes, and may deter or delay biofilm formation, potentially improving patient outcomes. This thesis also opens the door for adaptation of antibacterial, nanopillared surfaces for other applications including other medical devices, marine applications and environmental surfaces.
Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.
2015-02-01
In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.
Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.
2016-01-01
In modern networked control applications, confidentiality and integrity are important properties to enforce in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
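The HMAC mechanism whose overhead the paper evaluates can be sketched with Python's standard hmac module. Tag truncation to fit a bandwidth-limited TT frame is shown as an illustrative assumption; the truncation length is not a value from the paper:

```python
import hmac
import hashlib

def hmac_tag(key: bytes, payload: bytes, length: int = 8) -> bytes:
    """Compute a (truncated) HMAC-SHA256 tag for a control message.
    Truncating to a few bytes is common when frame budgets are tight;
    the trade-off is a smaller forgery-resistance margin."""
    return hmac.new(key, payload, hashlib.sha256).digest()[:length]

def verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Receiver side: recompute the tag over the received payload and
    compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(hmac_tag(key, payload, len(tag)), tag)
```

Measuring the wall-clock cost of `hmac_tag` per message, as a function of payload size, is essentially the computational-overhead experiment the abstract describes, here at user level rather than in the kernel.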
2017-02-01
…interconnect technology (MRFI) to enable high scalability and reconfigurability for inter-CPU/memory communications with an increased number of communication channels in frequency… testing in the University of California, Los Angeles (UCLA) Center for High Frequency Electronics, and Dr. Afshin Momtaz at Broadcom Corporation for…
Tradespace and Affordability - Phase 2
2013-12-31
infrastructure capacity. Figure 15 locates the thirteen feasible configurations in survivability-mobility capability space (capability levels are scaled… battery power, or display size decreases. Other quantities may be applicable, such as the number of nodes in a scalable-up mobile network or the… limited size of a scalable-down mobile platform. Versatility involves the range of capabilities provided by a system as it is currently configured. A
U.S. Army Research Laboratory Annual Review 2011
2011-12-01
pioneered a defect reduction process using thermal cycle annealing (TCA) for improving mercury cadmium telluride (MCT) grown on scalable silicon (Si)… substrates. Currently, the use of MCT -- a mainstay material for Army infrared (IR) systems -- is limited due to high levels of dislocations when… grown on scalable substrates such as Si (an inexpensive substrate material). These dislocations increase pixel noise and limit IR focal plane array
Wavefront control with a spatial light modulator containing dual-frequency liquid crystal
NASA Astrophysics Data System (ADS)
Gu, Dong-Feng; Winker, Bruce; Wen, Bing; Taber, Don; Brackley, Andrew; Wirth, Allan; Albanese, Marc; Landers, Frank
2004-10-01
A versatile, scalable wavefront control approach based upon proven liquid crystal (LC) spatial light modulator (SLM) technology was extended for potential use in high-energy near-infrared laser applications. The reflective LC SLM module demonstrated has a two-inch-diameter active aperture with 812 pixels. Using an ultra-low-absorption transparent conductor in the LC SLM, a high laser damage threshold was demonstrated. Novel dual-frequency liquid crystal materials and addressing schemes were implemented to achieve fast switching speed (<1 ms at 1.31 microns). By combining this LC SLM with a novel wavefront sensing method, a closed-loop wavefront controller is being demonstrated. Compared to conventional deformable mirrors, this non-mechanical wavefront control approach offers substantial improvements in speed (bandwidth), resolution, power consumption and system weight/volume.
XPRESS: eXascale PRogramming Environment and System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brightwell, Ron; Sterling, Thomas; Koniges, Alice
The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September, 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to efficient and scalable operation of trans-Petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses critical challenges in computing of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.
2011-01-01
Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303
Polymer Nanosheet Containing Star-Like Copolymers: A Novel Scalable Controlled Release System.
Cao, Peng-Fei; de Leon, Al; Rong, Lihan; Yin, Ke-Zhen; Abenojar, Eric C; Su, Zhe; Tiu, Brylee David B; Exner, Agata A; Baer, Eric; Advincula, Rigoberto C
2018-04-26
Poly(ε-caprolactone) (PCL)-based nanomaterials, such as nanoparticles and liposomes, have exhibited great potential as controlled release systems, but the difficulties of large-scale fabrication limit their practical applications. The various methods being developed to fabricate polymer nanosheets (PNSs) for different applications, such as the Langmuir-Blodgett technique and layer-by-layer assembly, are labor intensive and yield only small quantities of PNSs. In this paper, poly(ε-caprolactone)-based PNSs with adjustable thickness are obtained in large quantity by simple water exposure of multilayer polymer films, which are fabricated via a layer-multiplying coextrusion method. The PNS is also demonstrated as a novel controlled guest-release system, in which the release kinetics are adjustable through the nanosheet thickness, the pH of the media, and the presence of protecting layers. Theoretical analyses, including the Korsmeyer-Peppas model and finite-element analysis, are also employed to discern the observed guest-release mechanisms.
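For reference, the Korsmeyer-Peppas model mentioned above relates the cumulative fraction of guest released at time t to a power law (standard form of the model, not the paper's fitted values):

```latex
\frac{M_t}{M_\infty} = k\, t^{\,n}
```

where M_t/M_inf is the fractional release, k a system-dependent rate constant, and n the release exponent whose fitted value indicates the transport mechanism: for a thin film or slab, n of about 0.5 suggests Fickian diffusion, while 0.5 < n < 1 indicates anomalous (non-Fickian) transport.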
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to timely address the system's perceived need(s). This method improves on initial developments in this area by including often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
FPGA-Based Optical Cavity Phase Stabilization for Coherent Pulse Stacking
Xu, Yilun; Wilcox, Russell; Byrd, John; ...
2017-11-20
Coherent pulse stacking (CPS) is a new time-domain coherent addition technique that stacks several optical pulses into a single output pulse, enabling high pulse energy from fiber lasers. We develop a robust, scalable, and distributed digital control system, with firmware and software integration for algorithms, to support the CPS application. We model CPS as a digital filter in the Z domain and implement a pulse-pattern-based cavity phase detection algorithm on a field-programmable gate array (FPGA). A two-stage (2+1 cavities) 15-pulse stacking system achieves a peak-power enhancement factor of 11.0. Each optical cavity is fed back at 1.5 kHz and stabilized at an individually prescribed round-trip phase, with 0.7° and 2.1° rms phase errors for Stages 1 and 2, respectively. Optical cavity phase control with nanometer accuracy ensures 1.2% intensity stability of the stacked pulse over 12 h. The FPGA-based feedback control system can be scaled to large numbers of optical cavities.
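The abstract describes a per-cavity feedback loop, updating at 1.5 kHz, that drives each round-trip phase toward an individually prescribed setpoint. A minimal sketch of such a loop, assuming a simple proportional correction against a constant slow drift (all gains and numbers are illustrative, not taken from the actual FPGA firmware):

```python
def stabilize_phase(setpoint_deg, initial_deg, gain=0.5, drift_deg=0.02, steps=200):
    """Toy proportional feedback loop for one cavity's round-trip phase.

    Each iteration models one feedback update: measure the phase error,
    apply a correction scaled by `gain`, then add a slow environmental
    drift. Every number here is illustrative.
    """
    phase = initial_deg
    for _ in range(steps):
        error = setpoint_deg - phase   # detected phase error
        phase += gain * error          # actuator correction (e.g. mirror PZT)
        phase += drift_deg             # environmental drift per update
    return phase

final = stabilize_phase(setpoint_deg=30.0, initial_deg=25.0)
```

A purely proportional loop settles at a residual offset of drift/gain above the setpoint (30.04° here), which is why practical lock loops add an integral term to null constant drifts.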
Controlled release of cavity states into propagating modes induced via a single qubit
NASA Astrophysics Data System (ADS)
Pfaff, Wolfgang; Constantin, Marius; Reagor, Matthew; Axline, Christopher; Blumoff, Jacob; Chou, Kevin; Leghtas, Zaki; Touzard, Steven; Heeres, Reinier; Reinhold, Philip; Ofek, Nissim; Sliwa, Katrina; Frunzio, Luigi; Mirrahimi, Mazyar; Lehnert, Konrad; Jiang, Liang; Devoret, Michel; Schoelkopf, Robert
Photonic states stored in long-lived cavities are a promising platform for scalable quantum computing and for the realization of quantum networks. An important aspect in such a cavity-based architecture will be the controlled conversion of stored photonic states into propagating ones. This will allow, for instance, quantum state transfer between remote cavities. We demonstrate the controlled release of quantum states from a microwave resonator with millisecond lifetime in a 3D circuit QED system. Dispersive coupling of the cavity to a transmon qubit allows us to enable a four-wave mixing process that transfers the stored state into a second resonator from which it can leave the system through a transmission line. This permits us to evacuate the cavity on time scales that are orders of magnitude faster than the intrinsic lifetime. This Q-switching process can in principle be fully coherent, making our system highly promising for quantum state transfer between nodes in a quantum network of high-Q cavities.
Realizing IoT service's policy privacy over publish/subscribe-based middleware.
Duan, Li; Zhang, Yang; Chen, Shiping; Wang, Shiyao; Cheng, Bo; Chen, Junliang
2016-01-01
The publish/subscribe paradigm makes IoT service collaborations more scalable and flexible, due to the space, time and control decoupling of event producers and consumers. Thus, the paradigm can be used to establish large-scale IoT service communication infrastructures such as Supervisory Control and Data Acquisition systems. However, preserving an IoT service's policy privacy is difficult in this paradigm, because a classical publisher has little control over its own event after publication, and a subscriber has to accept all the events of the subscribed event type with no choice. Few existing publish/subscribe middleware have built-in mechanisms to address these issues. In this paper, we present a novel access control framework capable of preserving IoT services' policy privacy. In particular, we adopt the publish/subscribe paradigm as the IoT service communication infrastructure to facilitate the protection of IoT services' policy privacy. The key idea in our policy-privacy solution is a two-layer cooperating method that matches bi-directional privacy control requirements: (a) a data layer for protecting IoT events; and (b) an application layer for preserving the privacy of service policy. Furthermore, the anonymous-set-based principle is adopted to realize the functionalities of the framework, including policy embedding, policy encoding, and policy matching. Our security analysis shows that the policy privacy framework is Chosen-Plaintext Attack secure. We extend the open source Apache ActiveMQ broker by building in a policy-based authorization mechanism to enforce the privacy policy. The performance evaluation results indicate that our approach is scalable with reasonable overheads.
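The space/time/control decoupling the abstract relies on can be made concrete with a toy broker: subscribers register a callback per topic, and a per-subscription policy predicate stands in for the paper's policy-based authorization hook. The anonymous-set policy encoding itself is not modeled, and all names here are illustrative:

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker with policy filters.

    Publishers and subscribers never reference each other directly; the
    broker applies each subscription's policy before delivery, which is
    where a real system would enforce encoded privacy policies.
    """
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> [(policy, callback)]

    def subscribe(self, topic, callback, policy=lambda event: True):
        self.subs[topic].append((policy, callback))

    def publish(self, topic, event):
        for policy, callback in self.subs[topic]:
            if policy(event):           # policy-based authorization hook
                callback(event)

received = []
broker = Broker()
broker.subscribe("scada/alarm", received.append,
                 policy=lambda e: e["severity"] >= 2)
broker.publish("scada/alarm", {"severity": 1, "msg": "minor"})
broker.publish("scada/alarm", {"severity": 3, "msg": "major"})
```

Only the high-severity event reaches the subscriber; the low-severity one is filtered at the broker, without the publisher knowing who subscribes.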
Zhang, Jingyuan Linda; Lagoudakis, Konstantinos G.; Tzeng, Yan -Kai; ...
2017-10-23
Arrays of identical and individually addressable qubits lay the foundation for the creation of scalable quantum hardware such as quantum processors and repeaters. Silicon-vacancy (SiV) centers in diamond offer excellent physical properties such as low inhomogeneous broadening, fast photon emission, and a large Debye–Waller factor. The possibility of all-optical ultrafast manipulation, together with techniques to extend the spin coherence times, makes them promising candidates for qubits. Here, we have developed arrays of nanopillars containing single SiV centers with high yield, and we demonstrate ultrafast all-optical complete coherent control of the excited-state population of a single SiV center at the optical transition frequency. The high quality of the chemical-vapor-deposition (CVD) grown SiV centers provides excellent spectral stability, which allows us to coherently manipulate and quasi-resonantly read out the excited-state population of individual SiV centers on picosecond timescales using ultrafast optical pulses. Furthermore, this work opens new opportunities to create a scalable on-chip diamond platform for quantum information processing and scalable nanophotonics applications.
Rydberg blockade in three-atom systems
NASA Astrophysics Data System (ADS)
Barredo, Daniel; Ravets, Sylvain; Labuhn, Henning; Beguin, Lucas; Vernier, Aline; Chicireanu, Radu; Nogrette, Florence; Lahaye, Thierry; Browaeys, Antoine
2014-05-01
The control of individual neutral atoms in arrays of optical tweezers is a promising avenue for quantum science and technology. Here we demonstrate unprecedented control over a system of three Rydberg atoms arranged in linear and triangular configurations. The interaction between Rydberg atoms results in the observation of an almost perfect van der Waals blockade. When the single-atom Rabi frequency for excitation to the Rydberg state is comparable to the interaction energy, we directly observe the anisotropy of the interaction between nD-states. Using the independently measured two-body interaction energy shifts we fully reproduce the dynamics of the three-atom system with a model based on a master equation without any adjustable parameter. Combined with our ability to trap single atoms in arbitrary patterns of 2D arrays of up to 100 traps separated by a few microns, these results are very promising for a scalable implementation of quantum simulation of frustrated quantum magnetism with Rydberg atoms.
Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan
Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures, and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system composed of a synchronous generator, a three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and the machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
Integrated Avionics System (IAS)
NASA Technical Reports Server (NTRS)
Hunter, D. J.
2001-01-01
As spacecraft designs converge toward miniaturization, and with the volumetric and mass constraints placed on avionics, programs will continue to advance the state of the art in spacecraft systems development with new challenges to reduce power, mass, and volume. Although new technologies have improved packaging densities, a total system packaging architecture is required that not only reduces spacecraft volume and mass budgets but also increases integration efficiency and provides the modularity and scalability to accommodate multiple missions. With these challenges in mind, a novel packaging approach incorporates solutions that provide broader environmental applications, more flexible system interconnectivity, scalability, and simplified assembly, test, and integration schemes. This paper describes the fundamental elements of the Integrated Avionics System (IAS) and the Horizontally Mounted Cube (HMC) hardware design, along with system and environmental test results. Additional information is contained in the original extended abstract.
A geometric initial guess for localized electronic orbitals in modular biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P. G.; Fattebert, J. L.; Lau, E. Y.
Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N^3) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems realizes the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system can realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects, including buildings, constructions, vehicles, and pedestrians, in clusters of sequential and parallel damage events.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular, and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
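The PU/Central Unit split described above can be sketched minimally: the Central Unit drives every PU's acquire-then-process cycle in parallel. The internals of acquisition and detection here are stand-ins, not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

class ProcessingUnit:
    """One PU: handles acquisition and processing for a single camera."""
    def __init__(self, camera_id):
        self.camera_id = camera_id

    def acquire(self, frame_index):
        # Stand-in for grabbing a frame from the camera driver.
        return {"camera": self.camera_id, "frame": frame_index}

    def process(self, frame):
        # Stand-in for a 2D object-detection module.
        return (frame["camera"], frame["frame"], "detections")

class CentralUnit:
    """Supervisor: runs every PU's acquire/process cycle concurrently."""
    def __init__(self, num_cameras):
        self.units = [ProcessingUnit(i) for i in range(num_cameras)]

    def step(self, frame_index):
        with ThreadPoolExecutor(max_workers=len(self.units)) as pool:
            return list(pool.map(
                lambda pu: pu.process(pu.acquire(frame_index)), self.units))

central = CentralUnit(num_cameras=4)
results = central.step(frame_index=0)
```

Adding a camera is just adding a PU; the supervisor's dispatch loop does not change, which is the scalability property the abstract claims.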
Marshall, Ryan; Maxwell, Colin S; Collins, Scott P; Jacobsen, Thomas; Luo, Michelle L; Begemann, Matthew B; Gray, Benjamin N; January, Emma; Singer, Anna; He, Yonghua; Beisel, Chase L; Noireaux, Vincent
2018-01-04
CRISPR-Cas systems offer versatile technologies for genome engineering, yet their implementation has been outpaced by ongoing discoveries of new Cas nucleases and anti-CRISPR proteins. Here, we present the use of E. coli cell-free transcription-translation (TXTL) systems to vastly improve the speed and scalability of CRISPR characterization and validation. TXTL can express active CRISPR machinery from added plasmids and linear DNA, and TXTL can output quantitative dynamics of DNA cleavage and gene repression-all without protein purification or live cells. We used TXTL to measure the dynamics of DNA cleavage and gene repression for single- and multi-effector CRISPR nucleases, predict gene repression strength in E. coli, determine the specificities of 24 diverse anti-CRISPR proteins, and develop a fast and scalable screen for protospacer-adjacent motifs that was successfully applied to five uncharacterized Cpf1 nucleases. These examples underscore how TXTL can facilitate the characterization and application of CRISPR technologies across their many uses. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kong, Fande; Cai, Xiao-Chuan
2017-07-01
Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
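The inexact Newton-Krylov idea above, solving the linearized system only approximately at each Newton step, can be illustrated on a tiny 2x2 nonlinear system, with a few Jacobi sweeps standing in for the Krylov inner solver. This is a sketch of the general method under simplified assumptions, not the authors' FSI solver or Schwarz preconditioner:

```python
def F(x):
    """Small nonlinear system with root at x = (1, 2)."""
    return [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]

def J(x):
    """Analytic Jacobian of F."""
    return [[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]]

def jacobi_solve(A, b, iters=25):
    """Approximate (inexact) linear solve: a few Jacobi sweeps."""
    d = [0.0, 0.0]
    for _ in range(iters):
        d = [(b[0] - A[0][1] * d[1]) / A[0][0],
             (b[1] - A[1][0] * d[0]) / A[1][1]]
    return d

def inexact_newton(x, steps=20):
    for _ in range(steps):
        r = F(x)
        # Solve J(x) d = -F(x) only approximately, then take the step.
        d = jacobi_solve(J(x), [-r[0], -r[1]])
        x = [x[0] + d[0], x[1] + d[1]]
    return x

x = inexact_newton([1.5, 1.5])
```

In the paper's setting the inner solver is a preconditioned Krylov method and the outer loop is globalized, but the structure (outer nonlinear iteration, inexact inner linear solve) is the same.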
Sun, Gongchen; Senapati, Satyajyoti; Chang, Hsueh-Chia
2016-04-07
A microfluidic ion exchange membrane hybrid chip is fabricated using polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (>100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, and acquisition: very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL and connected to scalable storage via large-scale storage networking, assuring correct and secure operation. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures
Colli Franzone, Piero; Pavarino, Luca F.; Scacchi, Simone
2018-01-01
We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks. PMID:29674971
Entanglement and Metrology with Singlet-Triplet Qubits
NASA Astrophysics Data System (ADS)
Shulman, Michael Dean
Electron spins confined in semiconductor quantum dots are emerging as a promising system for studying quantum information science and performing sensitive metrology. Their weak interaction with the environment leads to long coherence times and robust storage for quantum information, and the intrinsic tunability of semiconductors allows for controllable operations, initialization, and readout of their quantum state. These spin qubits are also promising candidates for the building block of a scalable quantum information processor due to their prospects for scalability and miniaturization. However, several obstacles limit the performance of quantum information experiments in these systems. For example, the weak coupling to the environment makes inter-qubit operations challenging, and a fluctuating nuclear magnetic field limits the performance of single-qubit operations. The focus of this thesis is a set of experiments that address some of the outstanding problems in semiconductor spin qubits, in particular singlet-triplet (S-T0) qubits. We use these qubits to probe both the electric field and magnetic field noise that limit their performance. The magnetic noise bath is probed with high bandwidth and precision using novel techniques borrowed from the field of Hamiltonian learning, which are effective due to the rapid control and readout available in S-T0 qubits. These findings allow us to effectively undo the undesired effects of the fluctuating nuclear magnetic field by tracking them in real time, and we demonstrate a 30-fold improvement in the coherence time T2*. We probe the voltage noise environment of the qubit using coherent qubit oscillations, which is partially enabled by control of the nuclear magnetic field. We find that the voltage noise bath is frequency-dependent, even at frequencies as high as 1 MHz, and it shows a surprising and, as yet, unexplained temperature dependence.
We leverage this knowledge of the voltage noise environment, the nuclear magnetic field control, and new techniques for calibrated measurement of the density matrix in a singlet-triplet qubit to entangle two adjacent singlet-triplet qubits. We fully characterize the generated entangled states and prove that they are indeed entangled. This work opens new opportunities to use qubits as sensors for improved metrological capabilities, as well as for improved quantum information processing. The singlet-triplet qubit is unique in that it can be used to probe two fundamentally different noise baths, which are important for a large variety of solid-state qubits. More specifically, this work establishes the singlet-triplet qubit as a viable candidate for the building block of a scalable quantum information processor.
A latchable thermally activated phase change actuator for microfluidic systems
NASA Astrophysics Data System (ADS)
Richter, Christiane; Sachsenheimer, Kai; Rapp, Bastian E.
2016-03-01
Complex microfluidic systems often require a large number of individually controllable active components such as valves and pumps. In this paper we present the development and optimization of a latchable, thermally controlled phase change actuator, which uses a solid/liquid phase transition of a phase change medium and the displacement of the liquid phase change medium to switch and stabilize the two states of the actuator. Because the phase change is triggered by heat produced with ohmic resistors, the control signal is electrical. In contrast to pneumatically activated membrane valves, this concept allows the individual control of several dozen actuators with only two external pressure lines. Within this paper we show the general working principle of the actuator and demonstrate its function and the scalability of the concept using an example of four actuators. Additionally, we present the complete results of our studies to optimize the response behavior of the actuator - the influence of the heating power, as well as of the phase change medium used, on melting and solidifying times.
High-fidelity spin entanglement using optimal control.
Dolde, Florian; Bergholm, Ville; Wang, Ya; Jakobi, Ingmar; Naydenov, Boris; Pezzagna, Sébastien; Meijer, Jan; Jelezko, Fedor; Neumann, Philipp; Schulte-Herbrüggen, Thomas; Biamonte, Jacob; Wrachtrup, Jörg
2014-02-28
Precise control of quantum systems is of fundamental importance in quantum information processing, quantum metrology and high-resolution spectroscopy. When scaling up quantum registers, several challenges arise: individual addressing of qubits while suppressing cross-talk, entangling distant nodes and decoupling unwanted interactions. Here we experimentally demonstrate optimal control of a prototype spin qubit system consisting of two proximal nitrogen-vacancy centres in diamond. Using engineered microwave pulses, we demonstrate single electron spin operations with a fidelity F≈0.99. With additional dynamical decoupling techniques, we further realize high-quality, on-demand entangled states between two electron spins with F>0.82, mostly limited by the coherence time and imperfect initialization. Crosstalk in a crowded spectrum and unwanted dipolar couplings are simultaneously eliminated to a high extent. Finally, by high-fidelity entanglement swapping to nuclear spin quantum memory, we demonstrate nuclear spin entanglement over a length scale of 25 nm. This experiment underlines the importance of optimal control for scalable room temperature spin-based quantum information devices.
Scalability of Robotic Controllers: Speech-Based Robotic Controller Evaluation
2009-06-01
MOEMs devices for future astronomical instrumentation in space
NASA Astrophysics Data System (ADS)
Zamkotsian, Frédéric; Liotard, Arnaud; Lanzoni, Patrick; ElHadi, Kacem; Waldis, Severin; Noell, Wilfried; de Rooij, Nico; Conedera, Veronique; Fabre, Norbert; Muratet, Sylvaine; Camon, Henri
2017-11-01
Based on micro-electronics fabrication processes, Micro-Opto-Electro-Mechanical Systems (MOEMS) are under study for integration into next-generation astronomical instruments for ground-based and space telescopes. Their main advantages are their compactness, scalability, specific task customization using elementary building blocks, and remote control. At the Laboratoire d'Astrophysique de Marseille, we have been engaged for several years in the design, realization, and characterization of programmable slit masks for multi-object spectroscopy and micro-deformable mirrors for wavefront correction. First prototypes have been developed and show results matching the requirements.
Thermodynamic effects of single-qubit operations in silicon-based quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lougovski, Pavel; Peters, Nicholas A.
Silicon-based quantum logic is a promising technology to implement universal quantum computing. It is widely believed that a millikelvin cryogenic environment will be necessary to accommodate silicon-based qubits. This prompts a question of the ultimate scalability of the technology due to finite cooling capacity of refrigeration systems. In this work, we answer this question by studying energy dissipation due to interactions between nuclear spin impurities and qubit control pulses. Furthermore, we demonstrate that this interaction constrains the sustainable number of single-qubit operations per second for a given cooling capacity.
Biotechnological synthesis of functional nanomaterials.
Lloyd, Jonathan R; Byrne, James M; Coker, Victoria S
2011-08-01
Biological systems, especially those using microorganisms, have the potential to offer cheap, scalable and highly tunable green synthetic routes for the production of the latest generation of nanomaterials. Recent advances in the biotechnological synthesis of functional nano-scale materials are described. These nanomaterials range from catalysts to novel inorganic antimicrobials, nanomagnets, remediation agents and quantum dots for electronic and optical devices. Where possible, the roles of key biological macromolecules in controlling production of the nanomaterials are highlighted, and also technological limitations that must be addressed for widespread implementation are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Nanotechnology Presentation Agenda
NASA Technical Reports Server (NTRS)
2005-01-01
Working at the atomic, molecular and supra-molecular levels, in the length scale of approximately the 1-100 nm range, in order to understand, create and use materials, devices and systems with fundamentally new properties and functions because of their small structure. The NNI definition encourages new contributions that were not possible before: novel phenomena, properties and functions at the nanoscale, which are not scalable outside of the nm domain; the ability to measure, control and manipulate matter at the nanoscale in order to change those properties and functions; and integration along length scales and fields of application.
Fully decentralized estimation and control for a modular wheeled mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutambara, A.G.O.; Durrant-Whyte, H.F.
2000-06-01
In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model by using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control is thus obtained locally using reduced order models. When communication of information between nodes is carried out after every measurement (full rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
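A convenient property of the information-filter form used in such decentralized schemes is that fusing measurements from multiple nodes reduces to adding information contributions. The following sketch illustrates that additive fusion step only; the scalar state, measurement models and numbers are illustrative assumptions, not the paper's WMR kinematics:

```python
import numpy as np

def local_update(y, Y, z, H, R):
    """One node's information-filter update: add the local measurement's
    information contribution to the information vector y and matrix Y."""
    Rinv = np.linalg.inv(R)
    return y + H.T @ Rinv @ z, Y + H.T @ Rinv @ H

def fuse(y, Y, contributions):
    """Assimilate contributions (i_k, I_k) communicated by neighboring
    nodes; in the information form, fusion is simple addition."""
    for i_k, I_k in contributions:
        y, Y = y + i_k, Y + I_k
    return y, Y

# Toy 1D example: two nodes observe the same scalar state x = 2.0.
y, Y = np.zeros(1), np.eye(1) * 1e-6          # near-uninformative prior
H, R = np.eye(1), np.eye(1) * 0.5             # linear model, noise var 0.5
y, Y = local_update(y, Y, np.array([2.1]), H, R)
i2 = H.T @ np.linalg.inv(R) @ np.array([1.9]) # neighbor's contribution
I2 = H.T @ np.linalg.inv(R) @ H
y, Y = fuse(y, Y, [(i2, I2)])
x_hat = np.linalg.solve(Y, y)                 # fused estimate, near 2.0
```

With full-rate communication of such contributions, each node's fused (y, Y) matches what a centralized filter would compute, which is the equivalence the abstract claims.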
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next generation high performance computing (HPC) resources will lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
High-performance metadata indexing and search in petascale data storage systems
NASA Astrophysics Data System (ADS)
Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.
2008-07-01
Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
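The abstract does not detail Spyglass's internal index layout, only that it exploits storage-system properties. As a rough, invented illustration of the kind of query metadata search enables, a flat inverted index over file metadata attributes could look like this (paths and attribute names are hypothetical):

```python
def build_index(files):
    """Minimal inverted index over file metadata: attribute -> value -> set
    of file paths. Production systems like the one described partition and
    layer such indexes; this sketch keeps one flat index for clarity."""
    index = {}
    for path, meta in files.items():
        for attr, val in meta.items():
            index.setdefault(attr, {}).setdefault(val, set()).add(path)
    return index

idx = build_index({
    '/exp/run1.dat': {'owner': 'alice', 'ext': 'dat'},
    '/exp/run2.dat': {'owner': 'bob',   'ext': 'dat'},
    '/src/main.c':   {'owner': 'alice', 'ext': 'c'},
})
# idx['owner']['alice'] -> {'/exp/run1.dat', '/src/main.c'}
```

A scientist's "all my .dat files" query becomes two dictionary lookups and a set intersection instead of a full filesystem walk, which is the performance gap the paper quantifies against database solutions.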
Scalable Integrated Multi-Mission Support System Simulator Release 3.0
NASA Technical Reports Server (NTRS)
Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis
2012-01-01
The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.
Object-oriented integrated approach for the design of scalable ECG systems.
Boskovic, Dusanka; Besic, Ingmar; Avdagic, Zikrija
2009-01-01
The paper presents the implementation of Object-Oriented (OO) integrated approaches to the design of scalable Electro-Cardio-Graph (ECG) Systems. The purpose of this methodology is to preserve real-world structure and relations with the aim to minimize the information loss during the process of modeling, especially for Real-Time (RT) systems. We report on a case study of the design that uses the integration of OO and RT methods and the Unified Modeling Language (UML) standard notation. OO methods identify objects in the real-world domain and use them as fundamental building blocks for the software system. The gained experience based on the strongly defined semantics of the object model is discussed and related problems are analyzed.
A scalable and practical one-pass clustering algorithm for recommender system
NASA Astrophysics Data System (ADS)
Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali
2015-12-01
KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
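The abstract does not spell out the One-Pass algorithm's details. A common single-pass "leader" clustering scheme, which likewise supports incremental updates without offline retraining, can be sketched as follows (the threshold parameter and 1D data are illustrative assumptions):

```python
def one_pass_cluster(points, threshold):
    """Single-pass 'leader' clustering sketch: each point joins the nearest
    existing cluster if it lies within `threshold` of its centroid,
    otherwise it seeds a new cluster. Centroids update incrementally,
    so new data can be absorbed online at any time."""
    centroids, counts, labels = [], [], []
    for p in points:
        best, best_d = None, threshold
        for idx, c in enumerate(centroids):
            d = abs(p - c)
            if d < best_d:
                best, best_d = idx, d
        if best is None:
            centroids.append(float(p))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[best] += 1
            centroids[best] += (p - centroids[best]) / counts[best]
            labels.append(best)
    return centroids, labels

cents, labs = one_pass_cluster([1.0, 1.2, 8.0, 0.9, 8.3], threshold=2.0)
# two clusters emerge: one near 1.0 and one near 8.15
```

Each point is touched exactly once, which is what gives such schemes their training-time advantage over iterative K-Means.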
Data-driven gradient algorithm for high-precision quantum control
NASA Astrophysics Data System (ADS)
Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel
2018-04-01
In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., grape) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that grape can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-grape) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-grape algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-not gate.
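As a toy stand-in for the model-based part of such gradient optimization (not the actual grape or d-grape algorithms, which optimize many piecewise-constant control amplitudes with analytic gradients and, for d-grape, tomography data), one can run gradient ascent on the fidelity of a one-parameter single-qubit rotation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
U_target = X                                   # target gate: a NOT (X) gate

def U(theta):
    # single-qubit rotation exp(-1j * theta * X / 2)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def fidelity(theta):
    # phase-insensitive overlap |tr(U_target^dag U)| / 2, maximal at theta = pi
    return abs(np.trace(U_target.conj().T @ U(theta))) / 2

theta, lr, eps = 2.0, 0.5, 1e-6
for _ in range(200):                           # simple gradient ascent
    grad = (fidelity(theta + eps) - fidelity(theta - eps)) / (2 * eps)
    theta += lr * grad
# theta converges toward pi, where U(theta) equals X up to a global phase
```

In the data-driven variant described above, the fidelity being climbed would come from process tomography of the real device rather than the design model, which is how deterministic model errors get corrected.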
Design and implementation of workflow engine for service-oriented architecture
NASA Astrophysics Data System (ADS)
Peng, Shuqing; Duan, Huining; Chen, Deyun
2009-04-01
As computer networks have developed rapidly and distribution has become a common feature of enterprise applications, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, little reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer architecture for a workflow engine based on SOA is put forward according to the XPDL standard of the Workflow Management Coalition; the route control mechanism in the control model is accomplished; the scheduling strategy for cyclic and acyclic routing is designed; and the workflow engine is implemented using technologies such as XML, JSP and EJB.
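Route control in such an engine decides which activity runs next based on transition conditions. The following is a deliberately tiny, hypothetical sketch of XPDL-style conditional routing (activity names, conditions and the `run_workflow` helper are invented for illustration, not taken from the paper):

```python
def run_workflow(activities, transitions, context):
    """Tiny routing sketch for a workflow engine: activities are callables
    that transform the context; transitions map an activity name to a list
    of (condition, next_activity) pairs evaluated in order."""
    current, trace = 'start', []
    while current is not None:
        trace.append(current)
        context = activities[current](context)
        nxt = None
        for cond, target in transitions.get(current, []):
            if cond(context):
                nxt = target
                break
        current = nxt
    return trace, context

acts = {'start':   lambda c: dict(c),
        'approve': lambda c: {**c, 'status': 'approved'},
        'auto':    lambda c: {**c, 'status': 'auto-ok'}}
trans = {'start': [(lambda c: c['amount'] > 100, 'approve'),
                   (lambda c: True, 'auto')]}
trace, ctx = run_workflow(acts, trans, {'amount': 250})
# trace == ['start', 'approve']; large amounts route to manual approval
```

A real SOA engine would resolve each activity to a service invocation rather than a local callable, but the routing decision itself has this shape.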
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2016-01-01
The proliferation of GPS-enabled devices, and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data and space based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing. PMID:27617325
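The space-based partitioning mentioned above is what lets spatial queries parallelize implicitly: objects are bucketed by tile so that each tile can be handled by an independent map task. A minimal sketch of fixed-grid partitioning, with invented tile size and extent (Hadoop-GIS's actual partitioner is more sophisticated), assuming 2D points:

```python
def grid_partition(points, tile, extent):
    """Assign 2D points to fixed-size square grid tiles (a simple
    space-based partitioning scheme): each tile's bucket can then be
    scanned by an independent map task, and spatial predicates only
    need to compare objects that share a tile."""
    xmin, ymin = extent[0], extent[1]
    buckets = {}
    for (x, y) in points:
        key = (int((x - xmin) // tile), int((y - ymin) // tile))
        buckets.setdefault(key, []).append((x, y))
    return buckets

b = grid_partition([(0.5, 0.5), (0.6, 0.7), (3.2, 0.1)], tile=1.0,
                   extent=(0.0, 0.0, 4.0, 4.0))
# the first two points share tile (0, 0); the third lands in tile (3, 0)
```

Real systems must also handle objects spanning tile boundaries and skewed data (hot tiles), which is where partitioning strategy starts to matter for performance.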
NASA Astrophysics Data System (ADS)
Baynes, K.; Gilman, J.; Pilone, D.; Mitchell, A. E.
2015-12-01
The NASA EOSDIS (Earth Observing System Data and Information System) Common Metadata Repository (CMR) is a continuously evolving metadata system that merges all existing capabilities and metadata from the EOS ClearingHouse (ECHO) and the Global Change Master Directory (GCMD) systems. This flagship catalog has been developed with several key requirements: fast search and ingest performance; the ability to integrate heterogeneous external inputs and outputs; high availability and resiliency; scalability; and evolvability and expandability. This talk will focus on the advantages and potential challenges of tackling these requirements using a microservices architecture, which decomposes system functionality into smaller, loosely-coupled, individually-scalable elements that communicate via well-defined APIs. In addition, time will be spent examining specific elements of the CMR architecture and identifying opportunities for future integrations.
A comparison of different database technologies for the CMS AsyncStageOut transfer database
NASA Astrophysics Data System (ADS)
Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.
2017-10-01
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages user transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers has constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system's scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.
Innovation for integrated command environments
NASA Astrophysics Data System (ADS)
Perry, Amie A.; McKneely, Jennifer A.
2000-11-01
Command environments have rarely been able to easily accommodate rapid changes in technology and mission. Yet command personnel, by their selection criteria, experience, and very nature, tend to be extremely adaptive and flexible, and able to learn new missions and address new challenges fairly easily. Instead, it is the hardware and software components of the systems that do not provide the needed flexibility and scalability for command personnel. How do we solve this problem? In order to even dream of keeping pace with a rapidly changing world, we must begin to think differently about the command environment and its systems. What is the correct definition of the integrated command environment system? What types of tasks must be performed in this environment, and how might they change in the next five to twenty-five years? How should the command environment be developed, maintained, and evolved to provide needed flexibility and scalability? This paper considers the issues and concepts involved as new Integrated Command/Control Environments (ICEs) are designed following a human-centered process. A futuristic model, the Dream Integrated Command Environment (DICE), will be described which demonstrates specific ICE innovations. The major paradigm shift required to be able to think differently about this problem is to center the DICE around the command personnel from its inception. Conference participants may not agree with every concept or idea presented, but will hopefully come away with a clear understanding that to radically improve future systems, designers must focus on the end users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-Jie, E-mail: yingjiezhang@qfnu.edu.cn; Han, Wei; Xia, Yun-Jie, E-mail: yjxia@qfnu.edu.cn
We propose a scheme for controlling the entanglement dynamics of a quantum system by applying an external classical driving field to two atoms separately located in a single-mode photon cavity. It is shown that, with a judicious choice of the classical-driving strength and the atom–photon detuning, the effective atom–photon interaction Hamiltonian can be switched from the Jaynes–Cummings model to the anti-Jaynes–Cummings model. By tuning the controllable atom–photon interaction induced by the classical field, we illustrate that the evolution trajectory of the Bell-like entanglement states can be manipulated from entanglement-sudden-death to no-entanglement-sudden-death, and from no-entanglement-invariant to entanglement-invariant. Furthermore, the robustness of the initial Bell-like entanglement can be improved by the classical driving field in leaky cavities. This classical-driving-assisted architecture can easily be extended to multi-atom quantum systems for scalability.
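For reference, the two interaction Hamiltonians being switched between have the standard textbook forms below (written with coupling g, photon operators a, a† and atomic raising/lowering operators σ±); the effective coupling realized in the driven scheme depends on the drive parameters and is not given by the abstract:

```latex
H_{\mathrm{JC}} = \hbar g\,\bigl(a\,\sigma_{+} + a^{\dagger}\sigma_{-}\bigr),
\qquad
H_{\mathrm{aJC}} = \hbar g\,\bigl(a^{\dagger}\sigma_{+} + a\,\sigma_{-}\bigr)
```

The Jaynes–Cummings form conserves the excitation number, while the anti-Jaynes–Cummings form retains the counter-rotating terms, which is what changes the entanglement trajectory of the two-atom state.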
Artificially Engineered Protein Polymers.
Yang, Yun Jung; Holmberg, Angela L; Olsen, Bradley D
2017-06-07
Modern polymer science increasingly requires precise control over macromolecular structure and properties for engineering advanced materials and biomedical systems. The application of biological processes to design and synthesize artificial protein polymers offers a means for furthering macromolecular tunability, enabling polymers with dispersities of ∼1.0 and monomer-level sequence control. Taking inspiration from materials evolved in nature, scientists have created modular building blocks with simplified monomer sequences that replicate the function of natural systems. The corresponding protein engineering toolbox has enabled the systematic development of complex functional polymeric materials across areas as diverse as adhesives, responsive polymers, and medical materials. This review discusses the natural proteins that have inspired the development of key building blocks for protein polymer engineering and the function of these elements in material design. The prospects and progress for scalable commercialization of protein polymers are reviewed, discussing both technology needs and opportunities.
Design and Control of Large Collections of Learning Agents
NASA Technical Reports Server (NTRS)
Agogino, Adrian
2001-01-01
The intelligent control of multiple autonomous agents is an important yet difficult task. Previous methods used to address this problem have proved to be either too brittle, too hard to use, or not scalable to large systems. The 'Collective Intelligence' project at NASA/Ames provides an elegant, machine-learning approach to address these problems. This approach mathematically defines some essential properties that a reward system should have to promote coordinated behavior among reinforcement learners. This work has focused on creating additional key properties and algorithms within the mathematics of the Collective Intelligence framework. One of the additions will allow agents to learn more quickly, in a more coordinated manner. The other will let agents learn with less knowledge of their environment. These additions will allow the framework to be applied more easily, to a much larger domain of multi-agent problems.
High frequency signal acquisition and control system based on DSP+FPGA
NASA Astrophysics Data System (ADS)
Liu, Xiao-qi; Zhang, Da-zhi; Yin, Ya-dong
2017-10-01
This paper introduces the design and implementation of a high frequency signal acquisition and control system based on DSP + FPGA. The system supports internal/external clock and internal/external trigger sampling. It has a maximum sampling rate of 400MBPS and a 1.4GHz input bandwidth for the ADC. Data can be collected continuously or periodically and are stored in DDR2. The system also supports real-time acquisition: after digital frequency conversion and Cascaded Integrator-Comb (CIC) filtering, the collected data are sent to the CPCI bus through the high-speed DSP and can be assigned to the fiber board for subsequent processing. The system integrates signal acquisition and pre-processing functions using mixed high-speed A/D, high-speed DSP and FPGA technology, and has a wide range of uses in data acquisition and recording. For signal processing, the system can be seamlessly connected to a dedicated processor board. The system offers multiple selectable modes and good scalability, satisfying the requirements of different signals in different projects.
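The CIC filter mentioned above is a multiplier-free decimator well suited to FPGAs. A minimal single-stage software model (the paper's stage count and decimation ratio are not given; R = 4 here is an illustrative assumption):

```python
def cic_decimate(samples, R):
    """Single-stage CIC decimator sketch: an integrator at the input rate,
    decimation by R, then a comb (first difference) at the output rate.
    Equivalent to a length-R moving sum followed by downsampling, and
    needs only adders, which is why CIC suits FPGA front-ends."""
    acc, integrated = 0, []
    for s in samples:
        acc += s                      # integrator (running sum)
        integrated.append(acc)
    decimated = integrated[R - 1::R]  # keep every R-th integrator output
    out, prev = [], 0
    for v in decimated:
        out.append(v - prev)          # comb: difference at the low rate
        prev = v
    return out

# A constant input of 1 yields R per output sample (DC gain = R).
print(cic_decimate([1] * 12, R=4))    # [4, 4, 4]
```

Hardware implementations cascade several such stages and compensate the resulting droop with a short FIR filter after the CIC.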
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems currently present a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Better integration of storage systems and workflow engines is clearly needed to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Efficient scalable solid-state neutron detector.
Moses, Daniel
2015-06-01
We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector unit is 23.8% and that of a detector element comprising a stack of five detector units is 60%. Based on the measured performance of this detector unit, a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.
Breaking BAD: A Data Serving Vision for Big Active Data
Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.
2017-01-01
Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377
McEwan, Reed; Melton, Genevieve B; Knoll, Benjamin C; Wang, Yan; Hultman, Gretchen; Dale, Justin L; Meyer, Tim; Pakhomov, Serguei V
2016-01-01
Many design considerations must be addressed in order to provide researchers with full-text and semantic search of unstructured healthcare data such as clinical notes and reports. Institutions looking at providing this functionality must also address the big data aspects of their unstructured corpora. Because these systems are complex and demand a non-trivial investment, there is an incentive to make the system capable of servicing future needs as well, further complicating the design. We present architectural best practices as lessons learned in the design and implementation of NLP-PIER (Patient Information Extraction for Research), a scalable, extensible, and secure system for processing, indexing, and searching clinical notes at the University of Minnesota.
Corrective Control to Handle Forecast Uncertainty: A Chance Constrained Optimal Power Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roald, Line; Misra, Sidhant; Krause, Thilo
2016-08-25
Higher shares of electricity generation from renewable energy sources and market liberalization are increasing uncertainty in power systems operation. At the same time, operation is becoming more flexible with improved control systems and new technology such as phase shifting transformers (PSTs) and high voltage direct current connections (HVDC). Previous studies have shown that the use of corrective control in response to outages contributes to a reduction in operating cost while maintaining N-1 security. In this work, we propose a method to extend the use of corrective control of PSTs and HVDCs to react to uncertainty. We characterize the uncertainty as continuous random variables, and define the corrective control actions through affine control policies. This allows us to efficiently model control reactions to a large number of uncertainty sources. The control policies are then included in a chance constrained optimal power flow formulation, which guarantees that the system constraints are enforced with a desired probability. Lastly, by applying an analytical reformulation of the chance constraints, we obtain a second-order cone problem for which we develop an efficient solution algorithm. In a case study for the IEEE 118 bus system, we show that corrective control for uncertainty leads to a decrease in operational cost while maintaining system security. Further, we demonstrate the scalability of the method by solving the problem for the IEEE 300 bus and the Polish system test cases.
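The analytical reformulation of chance constraints referred to above is, in its simplest scalar Gaussian form, a deterministic tightening: P(g + w ≤ b) ≥ 1 − ε with w ~ N(μ, σ²) becomes g ≤ b − μ − z₁₋ε·σ. A self-contained numeric check (the paper's multivariate, affine-policy version reduces constraint-by-constraint to this shape):

```python
import math
import random

def deterministic_margin(mu, sigma, eps):
    """Tightening term for the scalar chance constraint
    P(g + w <= b) >= 1 - eps with w ~ N(mu, sigma^2): enforce
    g <= b - margin, where margin = mu + z_(1-eps) * sigma.
    The standard-normal quantile is found by bisection on the CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < 1 - eps:
            lo = mid
        else:
            hi = mid
    return mu + hi * sigma

margin = deterministic_margin(0.0, 1.0, 0.05)   # about 1.645 for eps = 5%
random.seed(0)
draws = (random.gauss(0, 1) for _ in range(100_000))
viol = sum(w > margin for w in draws) / 100_000
# empirical violation probability comes out close to eps = 0.05
```

Because z₁₋ε·σ enters the constraint linearly in σ, the multivariate version with ||·||₂ uncertainty norms yields exactly the second-order cone structure the paper exploits.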
Scalability of transport parameters with pore sizes in isodense disordered media
NASA Astrophysics Data System (ADS)
Reginald, S. William; Schmitt, V.; Vallée, R. A. L.
2014-09-01
We study light multiple scattering in complex disordered porous materials. High internal phase emulsion-based isodense polystyrene foams are designed. Two types of samples, exhibiting different pore size distributions, are investigated for different slab thicknesses varying from L = 1 mm to 10 mm. Optical measurements combining steady-state and time-resolved detection are used to characterize the photon transport parameters. Very interestingly, a clear scalability of the transport mean free path ℓ_t with the average size of the pores S is observed, featuring a constant velocity of the transport energy in these isodense structures. This study strongly motivates further investigations into the limits of validity of this scalability as the scattering strength of the system increases.
Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B
2013-02-01
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.
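As a rough illustration of destination-label routing in such a 2D mesh, the dimension-ordered (XY) sketch below forwards an event along one axis until the column matches, then along the other. The coordinate scheme and port names are assumptions for illustration, not the routers described in the paper.

```python
def route(event_dest, module_pos):
    """Dimension-ordered (XY) routing decision for one AER event:
    forward east/west until the column matches, then north/south.
    Returns 'local' when the event has reached its destination module."""
    dx = event_dest[0] - module_pos[0]
    dy = event_dest[1] - module_pos[1]
    if dx > 0:
        return "east"
    if dx < 0:
        return "west"
    if dy > 0:
        return "north"
    if dy < 0:
        return "south"
    return "local"

# An event addressed to module (2, 1) arriving at module (0, 1)
# is forwarded east twice, then consumed locally.
hops = []
pos = (0, 1)
while True:
    port = route((2, 1), pos)
    hops.append(port)
    if port == "local":
        break
    pos = (pos[0] + 1, pos[1])  # only eastward hops occur in this example
```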
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Deng, Fu-Guo
2014-12-01
Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.
Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.
2013-01-01
Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tock, Yoav; Mandler, Benjamin; Moreira, Jose
2013-01-01
As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. The resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.
Sun, Gongchen; Senapati, Satyajyoti
2016-01-01
A microfluidic-ion exchange membrane hybrid chip is fabricated by polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (> 100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems. PMID:26960551
2012-04-21
Comprehensive Materials and Morphologies Study of Ion Traps (COMMIT) for Scalable Quantum Computation - Final Report
Trapped ion systems are extremely promising for large-scale quantum computation, but face a vexing problem with motional quantum ... the photoelectric effect. The typical shortest wavelengths needed for ion traps range from 194 nm for Hg+ to 493 nm for Ba+, corresponding to 6.4-2.5...
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erez, Mattan; Yelick, Katherine; Sarkar, Vivek
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: Rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: Hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance Portability: Just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy Efficiency: Communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: Runtime and language interoperability with MPI and OpenMP to encourage broad adoption.
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
Integrated generation of complex optical quantum states and their coherent control
NASA Astrophysics Data System (ADS)
Roztocki, Piotr; Kues, Michael; Reimer, Christian; Romero Cortés, Luis; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2018-01-01
Complex optical quantum states based on entangled photons are essential for investigations of fundamental physics and are at the heart of applications in quantum information science. Recently, integrated photonics has become a leading platform for the compact, cost-efficient, and stable generation and processing of optical quantum states. However, on-chip sources are currently limited to basic two-dimensional (qubit) two-photon states, whereas scaling the state complexity requires access to states composed of several (>2) photons and/or exhibiting high photon dimensionality. Here we show that the use of integrated frequency combs (on-chip light sources with a broad spectrum of evenly-spaced frequency modes) based on high-Q nonlinear microring resonators can provide solutions for such scalable complex quantum state sources. In particular, by using spontaneous four-wave mixing within the resonators, we demonstrate the generation of bi- and multi-photon entangled qubit states over a broad comb of channels spanning the S, C, and L telecommunications bands, and control these states coherently to perform quantum interference measurements and state tomography. Furthermore, we demonstrate the on-chip generation of entangled high-dimensional (quDit) states, where the photons are created in a coherent superposition of multiple pure frequency modes. Specifically, we confirm the realization of a quantum system with at least one hundred dimensions. Moreover, using off-the-shelf telecommunications components, we introduce a platform for the coherent manipulation and control of frequency-entangled quDit states. Our results suggest that microcavity-based entangled photon state generation and the coherent control of states using accessible telecommunications infrastructure introduce a powerful and scalable platform for quantum information science.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
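The idea of compressing spectral pixel vectors before they cross the communication network can be sketched with a single-level Haar wavelet transform followed by thresholding of small detail coefficients. The pixel values and threshold below are illustrative; the paper's actual adaptive lossy compression scheme is more sophisticated.

```python
def haar_step(v):
    """One level of the Haar wavelet transform: pairwise averages
    followed by pairwise detail coefficients (length must be even)."""
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg + det

def compress(pixel, thresh):
    """Lossy-compress a spectral pixel vector: transform, then zero out
    small detail coefficients so fewer values need to cross the network."""
    coeffs = haar_step(pixel)
    return [c if abs(c) >= thresh else 0.0 for c in coeffs]

# Toy 8-band pixel vector: neighboring bands are highly correlated,
# so most detail coefficients fall below the threshold and are dropped.
pixel = [10.0, 10.2, 30.0, 29.8, 5.0, 5.1, 80.0, 79.5]
sent = compress(pixel, thresh=0.2)
nonzero = sum(1 for c in sent if c != 0.0)  # values actually transmitted
```

Here 5 of 8 coefficients survive thresholding, illustrating how correlated spectral bands let lossy compression shrink the messages exchanged between workstations.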
PACE: Privacy-Protection for Access Control Enforcement in P2P Networks
NASA Astrophysics Data System (ADS)
Sánchez-Artigas, Marc; García-López, Pedro
In open environments such as peer-to-peer (P2P) systems, the decision to collaborate with multiple users — e.g., by granting access to a resource — is hard to achieve in practice due to extreme decentralization and the lack of trusted third parties. The literature contains a plethora of applications in which a scalable solution for distributed access control is crucial. This fact motivates us to propose a protocol to enforce access control, applicable to networks consisting entirely of untrusted nodes. The main feature of our protocol is that it protects both sensitive permissions and sensitive policies, and does not rely on any centralized authority. We analyze the efficiency (computational effort and communication overhead) as well as the security of our protocol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in reconfigurable network enclaving through Software Defined Networking (SDN) and Network Function Virtualization (NFV) and their applicability to secure enclaves in HPC environments. SDN and NFV methods are based on a solid foundation of system-wide virtualization. Their purpose is straightforward: the system administrator can deploy networks that are more amenable to customer needs, and at the same time achieve increased scalability, making it easier to increase overall capacity as needed without negatively affecting functionality. The network administration of both the server system and the virtual sub-systems is simplified, allowing control of the infrastructure through well-defined APIs (Application Programming Interfaces). While SDN and NFV technologies offer significant promise in meeting these goals, they also provide the ability to address a significant component of the multi-tenant challenge in HPC environments, namely resource isolation. Traditional HPC systems are built upon scalable high-performance networking technologies designed to meet specific application requirements. Dynamic isolation of resources within these environments has remained difficult to achieve.
SDN and NFV methodology provide us with relevant concepts and available open-standards-based APIs that isolate compute and storage resources within an otherwise common networking infrastructure. Additionally, the integration of the networking APIs within larger system frameworks such as OpenStack provides the tools necessary to establish isolated enclaves dynamically, allowing the benefits of HPC while providing a controlled security structure surrounding these systems.
Highly Available COTS-Based Computer for Space
NASA Astrophysics Data System (ADS)
Hartmann, J.; Magistrati, Giorgio
2015-09-01
The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability demands as well as the increase in the required data processing power. Alongside the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.
Non-mechanical beam control for entry, descent and landing laser radar (Conference Presentation)
NASA Astrophysics Data System (ADS)
Stockley, Jay E.; Kluttz, Kelly; Hosting, Lance; Serati, Steve; Bradley, Cullen P.; McManamon, Paul F.; Amzajerdian, Farzin
2017-05-01
Laser radar for entry, descent, and landing (EDL) applications as well as the space docking problem could benefit from a low size, weight, and power (SWaP) beam control system. Moreover, an inertia-free approach employing non-mechanical beam control is also attractive for laser radar that is intended to be employed aboard space platforms. We are investigating a non-mechanical beam steering (NMBS) sub-system based on liquid crystal polarization grating (LCPG) technology with emphasis placed on improved throughput and significant weight reduction by combining components and drastically reducing substrate thicknesses. In addition to the advantages of non-mechanical, gimbal-free beam control and greatly improved SWaP, our approach also enables wide-area scanning using a scalable architecture. An extraterrestrial application entails additional environmental constraints; consequently, an environmental test plan tailored to an EDL mission will also be discussed. In addition, we will present advances in continuous fine steering technology which would complement the coarse steering LCPG technology. A low-SWaP, non-mechanical beam control system could be used in many laser radar remote sensing applications, including meteorological studies and agricultural or environmental surveys, in addition to the entry, descent, and landing application.
Integrated Power and Attitude Control System (IPACS)
NASA Technical Reports Server (NTRS)
Michaelis, Theodore D.
1998-01-01
Recent advances in materials, circuit integration and power switching have given the concept of dynamic energy and momentum storage important weight, size, and operational advantages over the conventional momentum wheel-battery configuration. Simultaneous momentum and energy storage for a three-axis stabilized spacecraft can be accomplished with a topology of at least four wheels where energy (a scalar) is stored or retrieved in such a manner as to keep the momentum vector invariant. This study, instead, considers the case of two counter-rotating wheels in one axis to more effectively portray the principles involved. General scalable system design equations are derived which demonstrate the role of momentum storage when combined with energy storage.
A synchronization method for wireless acquisition systems, application to brain computer interfaces.
Foerster, M; Bonnet, S; van Langhenhove, A; Porcherot, J; Charvet, G
2013-01-01
A synchronization method for wireless acquisition systems has been developed and implemented on a wireless ECoG recording implant and on a wireless EEG recording helmet. The presented algorithm and hardware implementation allow the precise synchronization of several data streams from several sensor nodes for applications where timing is critical, as in event-related potential (ERP) studies. The proposed method has been successfully applied to obtain visual evoked potentials and compared with a reference biosignal amplifier. The control over the exact sampling frequency allows reducing synchronization errors that would otherwise accumulate during a recording. The method is scalable to several sensor nodes communicating with a shared base station.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Electricity generated by Hydropower Plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar generation leads to lowered inertia in the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. We propose integrating multiple run-of-river (ROR) HPPs with scalable, multi-time-step energy storage so that the total output can be controlled. Although the size of a single energy storage unit is far smaller than that of a typical reservoir, cohesively managing multiple sets of energy storage distributed in different locations is proposed. The combined ratings of the storage units and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size. Supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of integrated ROR HPPs with energy storage is evaluated. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.
Scalable hydrothermal synthesis of free-standing VO₂ nanowires in the M1 phase.
Horrocks, Gregory A; Singh, Sujay; Likely, Maliek F; Sambandamurthy, G; Banerjee, Sarbajit
2014-09-24
VO2 nanostructures derived from solution-phase methods are often plagued by broadened and relatively diminished metal-insulator transitions and adventitious doping due to imperfect control of stoichiometry. Here, we demonstrate a stepwise scalable hydrothermal and annealing route for obtaining VO2 nanowires exhibiting abrupt (within 1 °C) metal-insulator transitions of almost four orders of magnitude. The prepared nanowires have been characterized across their structural and electronic phase transitions using single-nanowire Raman microprobe analysis, ensemble differential scanning calorimetry, and single-nanowire electrical transport measurements. The electrical band gap is determined to be 600 meV and is consistent with the optical band gap of VO2, and the narrowness of differential scanning calorimetry profiles indicates homogeneity of stoichiometry. The preparation of high-quality free-standing nanowires exhibiting pronounced metal-insulator transitions by a solution-phase process allows for scalability, further solution-phase processing, incorporation within nanocomposites, and integration onto arbitrary substrates.
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful to provide a smooth video streaming service even in a time-varying network such as a mobile environment. But current SVC is insufficient to support dynamic video conversion with scalability, so the adaptation of bitrate to meet a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding and decoding schemes. At the encoder, we insert the IDR NAL periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to get the information which enables dynamic extraction. Real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary for time-varying network conditions.
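The extractor's core decision, keeping the largest set of layers whose cumulative bitrate fits the current channel, can be sketched as follows; the layer names and rates are illustrative assumptions, not actual SVC bitstream syntax.

```python
def extract(layers, budget_kbps):
    """Keep the largest prefix of (layer_name, kbps) enhancement layers
    whose cumulative rate fits the current channel budget; later layers
    are dropped, mimicking bitstream extraction under a rate constraint."""
    kept, total = [], 0
    for name, rate in layers:
        if total + rate > budget_kbps:
            break
        kept.append(name)
        total += rate
    return kept, total

# Hypothetical base layer plus temporal, spatial, and SNR enhancements.
layers = [("base", 300), ("temporal+", 200), ("spatial+", 400), ("snr+", 300)]
kept, total = extract(layers, budget_kbps=950)
```

As the channel fluctuates, rerunning the extraction with a new budget yields a different layer subset, which is the dynamic conversion the abstract describes.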
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more than k participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of the participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noise circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
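The (k, n) threshold idea underlying such schemes can be illustrated with classic Shamir polynomial sharing over a small prime field. Note the paper itself builds on compressed sensing; this sketch shows only the threshold-reconstruction concept, and the field size and secret are toy values.

```python
import random

P = 257  # small prime field for illustration only

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: a random degree k-1 polynomial with the
    secret as constant term, evaluated at n distinct nonzero points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(42, k=3, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about the secret, which is the access-control property the sharing threshold enforces.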
Chai, Zhimin; Abbasi, Salman A; Busnaina, Ahmed A
2018-05-30
Assembly of organic semiconductors with ordered crystal structure has been actively pursued for electronics applications such as organic field-effect transistors (OFETs). Among various film deposition methods, solution-based film growth from small molecule semiconductors is preferable because of its low material and energy consumption, low cost, and scalability. Here, we show scalable and controllable directed assembly of highly crystalline 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) films via a dip-coating process. Self-aligned stripe patterns with tunable thickness and morphology over a centimeter scale are obtained by adjusting two governing parameters: the pulling speed of the substrate and the solution concentration. OFETs are fabricated using the C8-BTBT films assembled at various conditions. A field-effect hole mobility of up to 3.99 cm² V⁻¹ s⁻¹ is obtained. Owing to the highly scalable crystalline film formation, the dip-coating directed assembly process could be a great candidate for manufacturing next-generation electronics. Meanwhile, the film formation mechanism discussed in this paper could provide a general guideline to prepare other organic semiconducting films from small molecule solutions.
A repeatable and scalable fabrication method for sharp, hollow silicon microneedles
NASA Astrophysics Data System (ADS)
Kim, H.; Theogarajan, L. S.; Pennathur, S.
2018-03-01
Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled silicon hollow microneedles. We theoretically verified our parametric results with diffusion-reaction equations and created a design guideline for various sets of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles of any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is non-labor-intensive, well defined by theory, and straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.
Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.
2010-01-01
In the last decade, the realization of small, inexpensive, and powerful devices with sensors, computers, and wireless communication has promised the development of massive sensor networks with dense deployments over large areas capable of high-fidelity situational assessments. However, most management models have been based on centralized control, and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small, but this methodology is not scalable and is therefore insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm in which communities of sensor/computer devices assess local conditions and make local decisions from which a global behaviour emerges. This approach obviates many of the problems of centralized control, as it is not prone to a single point of failure and is more scalable, efficient, robust, and fault tolerant.
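The local-decision/global-behaviour principle can be sketched with a simple gossip-averaging loop (an illustrative sketch only, not the paper's noise-reduction algorithm; the topology and parameters are hypothetical): each sensor repeatedly nudges its estimate toward its neighbours' average, and the whole grid converges to the global mean with no central controller.

```python
def gossip_average(readings, neighbors, rounds=200, alpha=0.3):
    """Purely local update: each sensor moves toward the mean of its
    neighbours' current estimates. On a connected, regular neighbour
    graph the estimates converge to the global average of the readings."""
    est = list(readings)
    for _ in range(rounds):
        est = [x + alpha * (sum(est[j] for j in neighbors[i]) / len(neighbors[i]) - x)
               for i, x in enumerate(est)]
    return est

# Six sensors in a ring; each talks only to its two immediate neighbours.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
estimates = gossip_average([2.0, 4.0, 6.0, 8.0, 10.0, 12.0], ring)
```

Every sensor ends up near the global mean of 7.0 even though no node ever communicates with more than two others; losing a node degrades the estimate gracefully instead of taking the whole system down, which is the scalability argument the abstract makes against centralized control.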
Sarkar, Bidyut K; Shahab, Lion; Arora, Monika; Lorencatto, Fabiana; Reddy, K Srinath; West, Robert
2014-03-01
India has 275 million adult tobacco users and tobacco use is estimated to contribute to more than a million deaths in the country each year. There is an urgent need to develop and evaluate affordable, practicable and scalable interventions to promote cessation of tobacco use. Because tobacco use is so harmful, an increase of as little as 1 percentage point in long-term quit success rates can have an important public health impact. This protocol paper describes the rationale and methods of a large randomized controlled trial which aims to evaluate the effectiveness of a brief scalable smoking cessation intervention delivered by trained health professionals as an outreach programme in poor urban communities in India. This is a pragmatic, two-arm, community-based cluster randomized controlled trial focused on tobacco users in low-income communities. The treatment arm is a brief intervention comprising brief advice including training in craving control using simple yogic breathing exercises (BA-YBA) and the control arm is very brief advice (VBA). Of a total of 32 clusters, 16 will be allocated to the intervention arm and 16 to the control arm. Each cluster will have 31 participants, making a total of 992 participants. The primary outcome measure will follow the Russell Standard: self-report of sustained abstinence for at least 6 months following the intervention confirmed at the final follow-up by salivary cotinine. This trial will inform national and international policy on delivery of scalable and affordable brief outreach interventions to promote tobacco use cessation in low resource settings where tobacco users have limited access to physicians and medications. © 2014 Society for the Study of Addiction.
Engineering scalable biological systems
2010-01-01
Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204
The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Finally, we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
Towards a Functionally-Formed Air Traffic System-of-Systems
NASA Technical Reports Server (NTRS)
Conway, Sheila R.; Consiglio, Maria C.
2005-01-01
Incremental improvements to the national aviation infrastructure have not resulted in sufficient increases in capacity and flexibility to meet emerging demand. Unfortunately, revolutionary changes capable of substantial and rapid increases in capacity have proven elusive. Moreover, significant changes have been difficult to implement, and the operational consequences of such change difficult to predict, due to the system's complexity. Some research suggests redistributing air traffic control functions through the system, but this work has largely been dismissed out of hand, accused of being impractical. However, the case for functionally-based reorganization of form can be made from a theoretical, systems perspective. This paper investigates Air Traffic Management functions and their intrinsic biases towards centralized/distributed operations, grounded in systems engineering and information technology theories. Application of these concepts to a small airport operations design is discussed. From this groundwork, a robust, scalable system transformation plan may be made in light of uncertain demand.
Conceptual model of knowledge base system
NASA Astrophysics Data System (ADS)
Naykhanova, L. V.; Naykhanova, I. V.
2018-05-01
In this article, a conceptual model of a knowledge-based system of the production-system type is provided. The production system is intended for the automation of problems whose solution is rigidly conditioned by the legislation. A core component of the system is the knowledge base, which consists of a set of facts, a set of rules, a cognitive map and an ontology. The cognitive map implements the control strategy, and the ontology implements the explanation mechanism. Representing knowledge about the recognition of a situation in the form of rules allows the knowledge of the pension legislation to be described. This approach provides flexibility, originality and scalability: when the legislation changes, only the rules set needs to be changed, so a change of the legislation is not a big problem. The main advantage of the system is that it can easily be adapted to changes in the legislation.
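The facts-plus-rules architecture described above is essentially a forward-chaining production system, which can be sketched in a few lines (the rules below are hypothetical pension-style illustrations, not the actual legislation encoded by the authors):

```python
def forward_chain(facts, rules):
    """Fire every rule whose conditions are all present in the facts set,
    adding its conclusion, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical pension-style rules: a change in the legislation only
# requires editing this list, leaving the inference engine untouched.
rules = [
    ({"age>=60", "insured_years>=15"}, "old_age_pension_eligible"),
    ({"old_age_pension_eligible", "has_dependents"}, "dependent_supplement"),
]
derived = forward_chain({"age>=60", "insured_years>=15", "has_dependents"}, rules)
```

Separating the rules set from the engine is what gives the scalability the abstract claims: amending a statute means editing data, not code.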
NASA Technical Reports Server (NTRS)
2006-01-01
The Global Change Master Directory (GCMD) has been one of the best known Earth science and global change data discovery online resources throughout its extended operational history. The growing popularity of the system since its introduction on the World Wide Web in 1994 has created an environment where resolving issues of scalability, security, and interoperability has been critical to providing the best available service to the users and partners of the GCMD. Innovative approaches developed at the GCMD in these areas will be presented with a focus on how they relate to current and future GO-ESSP community needs.
Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B.; Perumalla, Kalyan S.
2018-01-31
Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphics processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speedup of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.
Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis
NASA Astrophysics Data System (ADS)
Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.
2012-04-01
The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to provide a communication infrastructure that facilitates interoperable environment information services during environment events and crises, such as tsunamis and drilling incidents, during which increasing volumes and dimensionality of disparate information sources, both sensor-based and human-based, can result and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging to handle changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system problems, e.g. failure, degradation and overload, during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; interaction is more often synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or in more complex ways. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure.
Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB in handling more complex event processing; and Cloud computing can be used to increase and decrease the ESB resources on demand. To reify this hybrid ESB-centric architecture, we adopt two complementary approaches: an open-source one to improve scalability and resilience, and a commercial one for ultra-fast messaging, with a bridge between the two to support interoperability. In TRIDEC, to manage such a hybrid messaging system, overlay and underlay management techniques will be adopted. The managers (both global and local) will collect, store and update status information (e.g. CPU utilization, free space, number of clients) and balance the usage, throughput, and delays to improve resilience and scalability. The expected resilience improvements include dynamic failover, self-healing, pre-emptive load balancing, and bottleneck prediction, while the expected scalability improvements include capacity estimation, an HTTP bridge, and automatic configuration and reconfiguration (e.g. adding or deleting clients and servers).
Minimizing communication cost among distributed controllers in software defined networks
NASA Astrophysics Data System (ADS)
Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed
2016-08-01
Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism that minimizes intercommunication cost with a graph partitioning algorithm (graph partitioning is NP-hard). The methodology proposed in this paper is the swapping of network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-domain communication cost among network domains. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
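The communication-gain calculation behind the swapping mechanism can be sketched as follows (a Kernighan-Lin-style illustration with hypothetical link weights, not the paper's exact formulation):

```python
def comm_cost(edges, domain):
    """Total weight of switch-to-switch links that cross controller domains."""
    return sum(w for (u, v), w in edges.items() if domain[u] != domain[v])

def swap_gain(edges, domain, a, b):
    """Communication gain of swapping switches a and b between their
    controller domains: a positive gain means less inter-domain traffic."""
    before = comm_cost(edges, domain)
    domain[a], domain[b] = domain[b], domain[a]   # trial swap
    after = comm_cost(edges, domain)
    domain[a], domain[b] = domain[b], domain[a]   # undo the trial
    return before - after

# Four switches, two controller domains, weighted traffic links.
edges = {(1, 2): 5, (2, 3): 1, (3, 4): 4, (1, 4): 1}
domain = {1: "A", 2: "B", 3: "A", 4: "B"}
```

In this toy topology every link crosses domains (cost 11); swapping switches 2 and 3 keeps the two heavy links internal, cutting the inter-domain cost to 2 for a gain of 9. Repeatedly applying the highest-gain swap until no positive gain remains is the classic refinement loop.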
17 CFR 38.1050 - Core Principle 20.
Code of Federal Regulations, 2014 CFR
2014-04-01
... automated systems, that are reliable, secure, and have adequate scalable capacity; (b) Establish and... CONTRACT MARKETS System Safeguards § 38.1050 Core Principle 20. Each designated contract market shall: (a...
17 CFR 38.1050 - Core Principle 20.
Code of Federal Regulations, 2013 CFR
2013-04-01
... automated systems, that are reliable, secure, and have adequate scalable capacity; (b) Establish and... CONTRACT MARKETS System Safeguards § 38.1050 Core Principle 20. Each designated contract market shall: (a...
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, Lee M; Sheldon, Frederick T
The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems, which require a long-term commitment to solve the well-known challenges.
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
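The square block scattered decomposition (today usually called a 2D block-cyclic distribution) can be sketched as a simple index mapping (an illustrative sketch of the distribution, not the library's actual code):

```python
def owner(i_blk, j_blk, p_rows, p_cols):
    """Process-grid coordinate that owns matrix block (i_blk, j_blk)
    under the square block scattered (2D block-cyclic) decomposition."""
    return (i_blk % p_rows, j_blk % p_cols)

def blocks_of(r, c, n_blks, p_rows, p_cols):
    """All blocks of an n_blks x n_blks block grid owned by process (r, c).
    Cycling the blocks keeps the load balanced as LU factorization
    shrinks the active trailing submatrix."""
    return [(i, j) for i in range(r, n_blks, p_rows)
                   for j in range(c, n_blks, p_cols)]
```

For a 4x4 block grid on a 2x2 process grid, each process owns four interleaved blocks rather than one contiguous quadrant, so no process runs out of work early during the factorization.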
Bubble pump: scalable strategy for in-plane liquid routing.
Oskooei, Ali; Günther, Axel
2015-07-07
We present an on-chip liquid routing technique intended for application in well-based microfluidic systems that require long-term active pumping at low to medium flow rates. Our technique requires only one fluidic feature layer and one pneumatic control line, and does not rely on flexible membranes or mechanical moving parts. The presented bubble pump is therefore compatible with both elastomeric and rigid substrate materials and the associated scalable manufacturing processes. Directed liquid flow was achieved in a microchannel by an in-series configuration of two previously described "bubble gates", i.e., gas-bubble enabled miniature gate valves. Only one time-dependent pressure signal is required, which initiates a reciprocating bubble motion at the upstream (active) bubble gate. At the downstream (passive) gate a time-constant gas pressure level is applied. In its rest state, the passive gate remains closed and only temporarily opens while the liquid pressure rises due to the active gate's reciprocating bubble motion. We have designed, fabricated and consistently operated our bubble pump with a variety of working liquids for >72 hours. Flow rates of 0-5.5 μl min⁻¹ were obtained and depended on the selected geometric dimensions, working fluids and actuation frequencies. The maximum operational pressure was 2.9-9.1 kPa and depended on the interfacial tension of the working fluids. Attainable flow rates compared favorably with those of available micropumps. We achieved flow rate enhancements of 30-100% by operating two bubble pumps in tandem and demonstrated the scalability of the concept in a multi-well format with 12 individually and uniformly perfused microchannels (variation in flow rate <7%). We envision the demonstrated concept to allow for the consistent on-chip delivery of a wide range of different liquids that may even include highly reactive or moisture-sensitive solutions.
The presented bubble pump may provide active flow control for analytical and point-of-care diagnostic devices, as well as for microfluidic cell culture and organ-on-chip platforms.
Chelonia: A self-healing, replicated storage system
NASA Astrophysics Data System (ADS)
Kerr Nilsen, Jon; Toor, Salman; Nagy, Zsombor; Read, Alex
2011-12-01
Chelonia is a novel grid storage system designed to fill the requirements gap between those of large, sophisticated scientific collaborations, which have adopted the grid paradigm for their distributed storage needs, and those of corporate business communities gravitating towards the cloud paradigm. Chelonia is an integrated system of heterogeneous, geographically dispersed storage sites which is easily and dynamically expandable and optimized for high availability and scalability. The architecture and implementation in terms of web services running inside the Advanced Resource Connector Hosting Environment Daemon (ARC HED) are described, and results of tests in both local-area and wide-area networks that demonstrate the fault tolerance, stability and scalability of Chelonia are presented. In addition, example setups for production deployments for small and medium-sized VOs are described.
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
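The replica arithmetic that underlies any BFT replication protocol can be sketched as follows (standard quorum math, not the article's specific load-balancing architecture):

```python
def min_replicas(f):
    """Minimum number of replicas needed to tolerate f Byzantine faults."""
    return 3 * f + 1

def quorum_size(n, f):
    """Quorum size n - f: any two quorums then overlap in at least
    2(n - f) - n = n - 2f >= f + 1 replicas, so every pair of quorums
    shares at least one correct replica."""
    return n - f

def quorum_intersection(n, q):
    """Lower bound on the overlap of any two quorums of size q out of n."""
    return 2 * q - n
```

For f = 1 this gives the familiar 4 replicas with quorums of 3. Scaling out for load, as the article proposes, does not relax these bounds; it parallelizes request processing on top of them, which is why load-scalability and fault-tolerance are largely orthogonal concerns.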
McEwan, Reed; Melton, Genevieve B.; Knoll, Benjamin C.; Wang, Yan; Hultman, Gretchen; Dale, Justin L.; Meyer, Tim; Pakhomov, Serguei V.
2016-01-01
Many design considerations must be addressed in order to provide researchers with full-text and semantic search of unstructured healthcare data such as clinical notes and reports. Institutions looking at providing this functionality must also address the big-data aspects of their unstructured corpora. Because these systems are complex and demand a non-trivial investment, there is an incentive to make the system capable of servicing future needs as well, further complicating the design. We present architectural best practices as lessons learned in the design and implementation of NLP-PIER (Patient Information Extraction for Research), a scalable, extensible, and secure system for processing, indexing, and searching clinical notes at the University of Minnesota. PMID:27570663
Preliminary basic performance analysis of the Cedar multiprocessor memory system
NASA Technical Reports Server (NTRS)
Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.
1991-01-01
Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.
A versatile scalable PET processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman
2011-06-01
Positron Emission Tomography (PET) historically has major clinical and preclinical applications in oncology, neurology, and cardiovascular disease. Recently, in a new direction, an application-specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, the University of Maryland at Baltimore (UMAB), and West Virginia University (WVU), targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it can adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant to be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.
Networking and AI systems: Requirements and benefits
NASA Technical Reports Server (NTRS)
1988-01-01
The price/performance benefits of networked systems are well documented. The ability to share expensive resources drove timesharing on mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized with open system requirements for hardware, software, applications and tools. The ability to interconnect a variety of vendor products has led to a specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising for both performance and scalability.
A Ground Systems Architecture Transition for a Distributed Operations System
NASA Technical Reports Server (NTRS)
Sellers, Donna; Pitts, Lee; Bryant, Barry
2003-01-01
The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users and those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program, and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as a migration of products caused the need to reevaluate the allocation of functions across the architecture. The result is a set of innovative ground system solutions that is scalable, so it can support facilities of wide-ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, and data transmission, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible remote ground system options that can be tailored to the needs of each user. This paper offers a description of the efficiency and effectiveness of the ground systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.
Control and Information Systems for the National Ignition Facility
Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...
2017-03-23
Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.
Network architecture for global biomedical monitoring service.
Lopez-Casado, Carmen; Tejero-Calado, Juan; Bernal-Martin, Antonio; Lopez-Gomez, Miguel; Romero-Romero, Marco; Quesada, Guillermo; Lorca, Julio; Garcia, Eugenia
2005-01-01
Most hospitalized patients, and increasingly patients monitored remotely from their homes, are continuously monitored in order to track their evolution. The medical devices used up to now force the clinical staff to go to the patient's room to check the biosignals being monitored, even though in many cases the patient is in perfect condition. If the patient is at home, he or she has to go to the hospital to deliver the record of the monitored signal. New wireless technologies, such as Bluetooth and WLAN, make it possible to deploy systems that display and store those signals in any place where the hospital intranet is accessible, so that unnecessary trips are avoided. This paper presents a network architecture that allows biosignal acquisition devices to be identified as IP network nodes. The system is based on a TCP/IP architecture, which is scalable and avoids the deployment of a special-purpose network.
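Treating each acquisition device as an IP node implies some agreed-on wire format for the monitored samples. The abstract does not give one, so the layout below (device id, timestamp, sample value) is purely a hypothetical sketch of how such a node might encode a reading before sending it over TCP:

```python
import struct

# Hypothetical wire format for one biosignal sample from a device node:
# 2-byte device id, 8-byte unix timestamp in milliseconds, 4-byte float
# sample value, all big-endian ("network byte order", hence the "!").
# This layout is illustrative only, not the paper's actual protocol.
SAMPLE_FORMAT = "!HQf"

def pack_sample(device_id: int, timestamp_ms: int, value: float) -> bytes:
    """Encode one monitored sample for transmission over TCP."""
    return struct.pack(SAMPLE_FORMAT, device_id, timestamp_ms, value)

def unpack_sample(payload: bytes) -> tuple:
    """Decode a sample on the hospital-intranet receiver side."""
    return struct.unpack(SAMPLE_FORMAT, payload)
```

A fixed binary layout like this keeps each sample at 14 bytes, which matters when many bedside or at-home devices stream continuously over the same intranet.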
LVFS: A Scalable Petabyte/Exabyte Data Storage System
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.
2013-12-01
Managing petabytes of data with hundreds of millions of files is the first step towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS-based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing the inefficiencies of massive scales of data. The first is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset; the second is minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect, data discovery among billions of files, is the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through a redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system.
The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system with the existing metadata infrastructure system, which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing Filesystem in Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product, without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
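The core LVFS idea of answering path lookups from a metadata database rather than from a directory tree can be sketched in a few lines; the table fields and file names below are illustrative, not the actual MODAPS/LAADS schema:

```python
# Sketch of the LVFS approach: instead of a real directory tree, a
# metadata table answers "directory listing" queries, so the same files
# can be exposed under several virtual layouts (by date, by instrument,
# by product, ...) without symbolic links or duplication.
# Field names and file names here are invented for illustration.
FILE_METADATA = [
    {"name": "MOD021KM.A2013001.hdf", "instrument": "MODIS",
     "date": "2013-01-01", "product": "MOD021KM"},
    {"name": "NPP_VIIRS.A2013001.h5", "instrument": "VIIRS",
     "date": "2013-01-01", "product": "VNP02"},
]

def list_virtual_dir(layout: str, key: str) -> list:
    """Resolve a virtual directory such as /by-instrument/MODIS on the fly
    by filtering the metadata table on one attribute."""
    return sorted(f["name"] for f in FILE_METADATA if f.get(layout) == key)
```

In the real system a FUSE layer presents these query results as ordinary POSIX directory listings, which is what keeps existing tools working unchanged.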
Schmideder, Andreas; Severin, Timm Steffen; Cremer, Johannes Heinrich; Weuster-Botz, Dirk
2015-09-20
A pH-controlled parallel stirred-tank bioreactor system was modified for parallel continuous cultivation on a 10 mL-scale by connecting multichannel peristaltic pumps for feeding and medium removal with micro-pipes (250 μm inner diameter). Parallel chemostat processes with Escherichia coli as an example showed high reproducibility with regard to culture volume and flow rates as well as dry cell weight, dissolved oxygen concentration and pH control at steady states (n=8, coefficient of variation <5%). Reliable estimation of kinetic growth parameters of E. coli was easily achieved within one parallel experiment by preselecting ten different steady states. Scalability of milliliter-scale steady state results was demonstrated by chemostat studies with a stirred-tank bioreactor on a liter-scale. Thus, parallel and continuously operated stirred-tank bioreactors on a milliliter-scale facilitate timesaving and cost reducing steady state studies with microorganisms. The applied continuous bioreactor system overcomes the drawbacks of existing miniaturized bioreactors, like poor mass transfer and insufficient process control.
Ground Operations Autonomous Control and Integrated Health Management
NASA Technical Reports Server (NTRS)
Daniels, James
2014-01-01
The Ground Operations Autonomous Control and Integrated Health Management system plays a key role in future ground operations at NASA. The software integrated into this system is Gensym's G2 2011. The purpose of this report is to describe the Ground Operations Autonomous Control and Integrated Health Management system built with the G2 Gensym software and the G2 NASA toolkit for Integrated System Health Management (ISHM), which is a Computer Software Configuration Item (CSCI). The rationale for choosing the G2 platform is to develop a modular capability for ISHM and autonomous control (AC). Toolkit modules include knowledge bases that are generic and can be applied in any application domain module; this maximizes reusability, maintainability, systematic evolution, portability, and scalability. Engine modules are generic, while application modules represent the domain model of a specific application. Furthermore, the NASA toolkit, developed since 2006 (a set of modules), makes it possible to create application domain models quickly, using pre-defined objects that include sensor and component libraries for typical fluid, electrical, and mechanical systems.
NASA Astrophysics Data System (ADS)
Ma, Yun-Ming; Wang, Tie-Jun
2017-10-01
Higher-dimensional quantum systems are of great interest owing to the outstanding features they exhibit in the implementation of novel fundamental tests of nature and their application in various quantum information tasks. A high-dimensional quantum logic gate is a key element in scalable quantum computation and quantum communication. In this paper, we propose a scheme to implement a controlled-phase gate between a 2N-dimensional photon and N three-level artificial atoms. This high-dimensional controlled-phase gate can serve as a crucial component of high-capacity, long-distance quantum communication. We use high-dimensional Bell state analysis as an example to show the application of this device. Estimates of the system requirements indicate that our protocol is realizable with existing or near-future technologies. The scheme is ideally suited to solid-state integrated optical approaches to quantum information processing, and it can be applied to various systems, such as superconducting qubits coupled to a resonator or nitrogen-vacancy centers coupled to photonic-band-gap structures.
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules: various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. The Device Server hosts external modules requiring event-based communication for the integration, while the Control Server hosts external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.
Secure Cryptographic Key Management System (CKMS) Considerations for Smart Grid Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Aldridge, Hal
2011-01-01
In this paper, we examine some unique challenges associated with key management in the Smart Grid and concomitant research initiatives: 1) effectively model security requirements and their implementations, and 2) manage keys and key distribution for very large scale deployments such as Smart Meters over a long period of performance. This will set the stage to: 3) develop innovative, low cost methods to protect keying material, and 4) provide high assurance authentication services. We will present our perspective on key management and will discuss some key issues within the life cycle of a cryptographic key designed to achieve the following: 1) control systems designed, installed, operated, and maintained to survive an intentional cyber assault with no loss of critical function, and 2) widespread implementation of methods for secure communication between remote access devices and control centers that are scalable and cost-effective to deploy.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Since it seems clear that the application of such methods in nanotechnology will require highly powerful computing systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems.
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before it was manufactured.
On-chip electrically controlled routing of photons from a single quantum dot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bentham, C.; Coles, R. J.; Royall, B.
2015-06-01
Electrical control of on-chip routing of photons emitted by a single InAs/GaAs self-assembled quantum dot (SAQD) is demonstrated in a photonic crystal cavity-waveguide system. The SAQD is located inside an H1 cavity, which is coupled to two photonic crystal waveguides. The SAQD emission wavelength is electrically tunable by the quantum-confined Stark effect. When the SAQD emission is brought into resonance with one of two H1 cavity modes, it is preferentially routed to the waveguide to which that mode is selectively coupled. This proof of concept provides the basis for scalable, low-power, high-speed operation of single-photon routers for use in integrated quantum photonic circuits.
Further Structural Intelligence for Sensors Cluster Technology in Manufacturing
Mekid, Samir
2006-01-01
With ever more complex sensing and actuating tasks in manufacturing plants, intelligent sensor clusters in hybrid networks are a rapidly expanding area. They play a dominant role in many fields, from the macro to the micro scale. Global object control and the ability to self-organize into fault-tolerant and scalable systems are expected for high-level applications. In this paper, new structural concepts for intelligent sensors and networks with new intelligent agents are presented. Embedding new functionalities to dynamically manage cooperative agents for autonomous machines is a key enabling technology much needed in manufacturing for zero-defect production.
Scalable Optical-Fiber Communication Networks
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Peterson, John C.
1993-01-01
The Scalable Arbitrary Fiber Extension Network (SAFEnet) is a conceptual fiber-optic communication network passing digital signals among a variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. It is intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Its inherent flexibility makes it possible to match the performance of the network to the computers by optimizing the configuration of interconnections. In addition, interconnections are made redundant to provide tolerance to faults.
High-throughput automated home-cage mesoscopic functional imaging of mouse cortex
Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.
2016-01-01
Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514
Direct and system effects of water ingestion into jet engine compressors
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.; Ehresman, C. M.; Haykin, T.
1986-01-01
Water ingestion into aircraft-installed jet engines can arise both during take-off and flight through rain storms, resulting in engine operation with nearly saturated air-water droplet mixture flow. Each of the components of the engine and the system as a whole are affected by water ingestion, aero-thermally and mechanically. The greatest effects arise probably in turbo-machinery. Experimental and model-based results (of relevance to 'immediate' aerothermal changes) in compressors have been obtained to show the effects of film formation on material surfaces, centrifugal redistribution of water droplets, and interphase heat and mass transfer. Changes in the compressor performance affect the operation of the other components including the control and hence the system. The effects on the engine as a whole are obtained through engine simulation with specified water ingestion. The interest is in thrust, specific fuel consumption, surge margin and rotational speeds. Finally two significant aspects of performance changes, scalability and controllability, are discussed in terms of characteristic scales and functional relations.
STAR Online Framework: from Metadata Collection to Event Analysis and System Control
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.
2015-05-01
In preparation for the new era of RHIC running (RHIC-II upgrades and possibly, the eRHIC era), the STAR experiment is expanding its modular Message Interface and Reliable Architecture framework (MIRA). MIRA allowed STAR to integrate meta-data collection, monitoring, and online QA components in a very agile and efficient manner using a messaging infrastructure approach. In this paper, we briefly summarize our past achievements, provide an overview of the recent development activities focused on messaging patterns and describe our experience with the complex event processor (CEP) recently integrated into the MIRA framework. CEP was used in the recent RHIC Run 14, which provided practical use cases. Finally, we present our requirements and expectations for the planned expansion of our systems, which will allow our framework to acquire features typically associated with Detector Control Systems. Special attention is given to aspects related to latency, scalability and interoperability within the heterogeneous set of services and the various data and meta-data acquisition components coexisting in the STAR online domain.
Visual Analytics for Power Grid Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Huang, Zhenyu; Chen, Yousu
2014-01-20
Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.
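The first stage of such a pipeline, reducing roughly 100 million scored scenarios to a set small enough for operators to examine, amounts to a streaming top-k selection. A minimal sketch of that reduction step (the severity-scoring model itself is outside both the abstract and this example):

```python
import heapq

def top_contingencies(scenarios, k):
    """Reduce a huge stream of contingency scenarios to the k most severe.

    Each scenario is a (scenario_id, severity_score) pair. heapq.nlargest
    consumes the stream with memory proportional to k rather than to the
    stream length, which is what makes 100 million scenarios tractable.
    Returns the k scenarios sorted by descending severity.
    """
    return heapq.nlargest(k, scenarios, key=lambda s: s[1])
```

The real pipeline layers further information, visual, and display reductions on top of this kind of computational filtering; this sketch only illustrates the data-volume step.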
Flow Cells for Scalable Energy Conversion and Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukundan, Rangachary
2017-10-26
This project is a response to current flow systems that are V-aqueous and not cost effective. It aims to enable high energy/power density flow cells through rational materials and system design.
Mizukami, Amanda; Fernandes-Platzgummer, Ana; Carmelo, Joana G; Swiech, Kamilla; Covas, Dimas T; Cabral, Joaquim M S; da Silva, Cláudia L
2016-08-01
Mesenchymal stem/stromal cells (MSC) are being widely explored as promising candidates for cell-based therapies. Among the different human MSC origins exploited, the umbilical cord represents an attractive and readily available source of MSC that involves a non-invasive collection procedure. In order to achieve relevant cell numbers of human MSC for clinical applications, it is crucial to develop scalable culture systems that allow bioprocess control and monitoring, combined with the use of serum/xenogeneic (xeno)-free culture media. In the present study, we first established a spinner flask culture system combining gelatin-based Cultispher® S microcarriers and xeno-free culture medium for the expansion of umbilical cord matrix (UCM)-derived MSC. This system enabled the production of 2.4 (±1.1)×10^5 cells/mL (n=4) after 5 days of culture, corresponding to a 5.3 (±1.6)-fold increase in cell number. The established protocol was then implemented in a stirred-tank bioreactor (800 mL working volume) (n=3), yielding 115 million cells after 4 days. Upon expansion under stirred conditions, cells retained their differentiation ability and immunomodulatory potential. The development of a scalable microcarrier-based stirred culture system, using xeno-free culture medium that suits the intrinsic features of UCM-derived MSC, represents an important step towards a GMP-compliant large-scale production platform for these promising cell therapy candidates.
Design of a Multi Dimensional Database for the Archimed DataWarehouse.
Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine
2005-01-01
The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of the data, 2) the database model and architecture, describing how data is presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that form the transactional Hospital Information System (HIS). Concurrently, building the data warehouse incrementally has helped to control the evolution of the data content. These three design aspects bring clarity and performance to data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.
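The granularity discussion, storing standardized elementary facts and deriving summaries from them, is essentially the multidimensional roll-up pattern. A toy sketch with invented field names (not the actual Archimed schema):

```python
# Elementary-fact idea: one row per elementary medical event, keyed by
# dimensions (patient, data mart, date). Summaries are always derived by
# grouping over a dimension, never stored redundantly, so the finest
# granularity remains available for clinical studies and data mining.
# Field names and values below are illustrative only.
FACTS = [
    {"patient": "P1", "mart": "lab",       "date": "2004-01-02", "count": 3},
    {"patient": "P1", "mart": "radiology", "date": "2004-01-02", "count": 1},
    {"patient": "P2", "mart": "lab",       "date": "2004-01-03", "count": 2},
]

def rollup(facts, dimension):
    """Aggregate elementary facts along one dimension (coarser granularity)."""
    totals = {}
    for f in facts:
        totals[f[dimension]] = totals.get(f[dimension], 0) + f["count"]
    return totals
```

Keeping the fact table at the elementary level is what lets the same data answer both "events per data mart" and "events per patient" queries without schema changes.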
Trainable hardware for dynamical computing using error backpropagation through physical media.
Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-24
Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation-a crucial step for tuning such systems towards a specific task-can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.
Cheetah: A Framework for Scalable Hierarchical Collective Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S
2011-01-01
Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
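The hierarchical structure described (an inter-node phase among node leaders, then an intra-node phase inside each node) can be illustrated with a simulated broadcast; the message counts show why the hierarchy cuts expensive inter-node traffic. This is a conceptual sketch, not the Cheetah implementation:

```python
def hierarchical_bcast(value, n_nodes, ranks_per_node):
    """Simulate a two-level broadcast.

    Level 1: the root (node 0, rank 0) sends to the leader (local rank 0)
    of every other node over the slow inter-node network.
    Level 2: each leader copies to its local ranks, which in a real MPI
    implementation would use fast shared memory.
    Returns (received values per rank, inter-node msgs, intra-node msgs).
    """
    received = {(0, 0): value}
    inter_msgs = 0
    intra_msgs = 0
    for node in range(1, n_nodes):          # level 1: root -> node leaders
        received[(node, 0)] = value
        inter_msgs += 1
    for node in range(n_nodes):             # level 2: leader -> local ranks
        for rank in range(1, ranks_per_node):
            received[(node, rank)] = received[(node, 0)]
            intra_msgs += 1
    return received, inter_msgs, intra_msgs
```

With 4 nodes of 8 ranks each, only 3 messages cross the inter-node network instead of 31 in a flat linear broadcast; the remaining 28 transfers stay node-local.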
Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku
2013-01-01
Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
A reference architecture for integrated EHR in Colombia.
de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd
2011-01-01
The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created for the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.
Implementation of the Timepix ASIC in the Scalable Readout System
NASA Astrophysics Data System (ADS)
Lupberger, M.; Desch, K.; Kaminski, J.
2016-09-01
We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
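The inexact Newton strategy mentioned in the abstract (solving each Newton linear system only approximately, yet still converging) can be sketched in one dimension. This is a minimal illustration under assumed tolerances, not the simulator's implementation:

```python
def inexact_newton(f, fprime, x, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Newton: the Newton step is computed only approximately,
    as if an iterative linear solver stopped at relative tolerance eta."""
    for _ in range(max_iter):
        r = f(x)
        if abs(r) < tol:
            return x
        step = -r / fprime(x)
        # emulate an inexact linear solve: the step carries eta-sized
        # relative error, yet the outer iteration still converges
        step *= (1.0 + 0.5 * eta)
        x += step
    return x

# find sqrt(2) as the root of f(x) = x^2 - 2
root = inexact_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Even with the perturbed step, the error contracts by roughly the perturbation factor per iteration, which is why inexact solves are acceptable inside an outer Newton loop.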
Towards Scalable Entangled Photon Sources with Self-Assembled InAs/GaAs Quantum Dots
NASA Astrophysics Data System (ADS)
Wang, Jianping; Gong, Ming; Guo, G.-C.; He, Lixin
2015-08-01
The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for realizing deterministic entangled photon-pair sources, which are essential to quantum information science. The entangled photon pairs have recently been generated in experiments after eliminating the fine-structure splitting (FSS) of excitons using a number of different methods. Thus far, however, QD-based sources of entangled photons have not been scalable because the wavelengths of QDs differ from dot to dot. Here, we propose a wavelength-tunable entangled photon emitter mounted on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, thereby enabling photon entanglement between dissimilar QDs. We confirm these results via atomistic pseudopotential calculations. This provides a first step towards future realization of scalable entangled photon generators for quantum information applications.
Communication and complexity in a GRN-based multicellular system for graph colouring.
Buck, Moritz; Nehaniv, Chrystopher L
2008-01-01
Artificial Genetic Regulatory Networks (GRNs) are interesting control models because of their simplicity and versatility. They can be easily implemented, evolved and modified, and their similarity to their biological counterparts makes them interesting for simulations of life-like systems as well. These aspects suggest they may be well suited as control systems for distributed computing in diverse situations, but to be usable for such applications the computational power and evolvability of GRNs need to be studied. In this research we propose a simple distributed system implementing GRNs to solve the well-known NP-complete graph colouring problem. Every node (cell) of the graph to be coloured is controlled by an instance of the same GRN. All the cells communicate directly with their immediate neighbours in the graph so as to set up a good colouring. The quality of this colouring directs the evolution of the GRNs using a genetic algorithm. We then observe the quality of the colouring for two different graphs under different communication protocols and different numbers of protein types in the cell (a measure of the possible complexity of a GRN). These two points, being the main scalability issues that any computational paradigm raises, are then discussed.
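The distributed set-up can be sketched as follows: every node runs the same local controller and reacts only to its immediate neighbours' colours. The simple repair rule below is a stand-in for an evolved GRN, and the fitness function is the colouring-quality measure that would drive the genetic algorithm; the graph and colour counts are invented for illustration:

```python
import random

def colouring_fitness(graph, colours):
    """Fraction of edges whose endpoints have different colours."""
    ok = sum(1 for u, v in graph if colours[u] != colours[v])
    return ok / len(graph)

def local_recolour(graph, colours, n_colours, rng):
    """Each node (cell) looks only at its immediate neighbours and, if it
    conflicts, switches to a colour none of them currently uses -- a
    stand-in for a GRN controller reacting to neighbour signals."""
    neighbours = {}
    for u, v in graph:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    for node in sorted(neighbours, key=lambda _: rng.random()):
        used = {colours[n] for n in neighbours[node]}
        free = [c for c in range(n_colours) if c not in used]
        if free and colours[node] in used:
            colours[node] = rng.choice(free)
    return colours

# 5-cycle: 3-colourable but not 2-colourable
graph = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
rng = random.Random(1)
colours = [rng.randrange(3) for _ in range(5)]
for _ in range(20):
    colours = local_recolour(graph, colours, 3, rng)
fitness = colouring_fitness(graph, colours)
```

With three colours and maximum degree two, a conflicted node always has a free colour, so the purely local rule settles into a proper colouring; the interesting research question in the paper is what happens when an evolved GRN, rather than this hand-written rule, must discover such behaviour.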
Observability-Based Guidance and Sensor Placement
NASA Astrophysics Data System (ADS)
Hinson, Brian T.
Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
Deceit: A flexible distributed file system
NASA Technical Reports Server (NTRS)
Siegel, Alex; Birman, Kenneth; Marzullo, Keith
1989-01-01
Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...
2016-10-06
The increasing volume of scientific data and the limited scalability and performance of storage systems currently present a significant limitation for the productivity of scientific workflows running on both high-performance computing (HPC) and cloud platforms. Better integration of storage systems and workflow engines is clearly needed to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. The experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
A scalable SIMD digital signal processor for high-quality multifunctional printer systems
NASA Astrophysics Data System (ADS)
Kang, Hyeong-Ju; Choi, Yongwoo; Kim, Kimo; Park, In-Cheol; Kim, Jung-Wook; Lee, Eul-Hwan; Gahang, Goo-Soo
2005-01-01
This paper describes a high-performance scalable SIMD digital signal processor (DSP) developed for multifunctional printer systems. The DSP supports a variable number of datapaths to cover a wide range of performance requirements while maintaining a RISC-like pipeline structure. Many special instructions suitable for image processing algorithms are included in the DSP. Quad/dual instructions are introduced for 8-bit or 16-bit data, and bit-field extraction/insertion instructions are supported to process various data types. Conditional instructions are supported to deal with complex relative conditions efficiently. In addition, an intelligent DMA block is integrated to align data in the course of data reading. Experimental results show that the proposed DSP outperforms a high-end printer-system DSP by a factor of at least two.
Asynchronous Object Storage with QoS for Scientific and Commercial Big Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brim, Michael J; Dillow, David A; Oral, H Sarp
2013-01-01
This paper presents our design for an asynchronous object storage system intended for use in scientific and commercial big data workloads. Use cases from the target workload domains are used to motivate the key abstractions used in the application programming interface (API). The architecture of the Scalable Object Store (SOS), a prototype object storage system that supports the API's facilities, is presented. The SOS serves as a vehicle for future research into scalable and resilient big data object storage. We briefly review our research into providing efficient storage servers capable of providing quality of service (QoS) contracts relevant for big data use cases.
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang
2007-12-01
In this paper, we introduce a graphics-to-Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission, to overcome the shortcomings in real-time representation and interaction of 3D graphics applications running on mobile devices. We therefore develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system scenario is composed of a client viewer and a graphics-to-SVG adaptation server. The client viewer lets the user access the same 3D content from different devices according to consumer interactions.
Xiang, Yu; Chen, Chen; Zhang, Chongfu; Qiu, Kun
2013-01-14
In this paper, we propose and demonstrate a novel integrated radio-over-fiber passive optical network (RoF-PON) system for both wired and wireless access. By utilizing the polarization-multiplexed four-wave mixing (FWM) effect in a semiconductor optical amplifier (SOA), scalable generation of multi-frequency millimeter-waves (MMWs) can be provided so as to assist the configuration of multi-frequency wireless access for the wired/wireless access integrated RoF-PON system. In order to obtain better performance, the polarization-multiplexed FWM effect is investigated in detail. Simulation results successfully verify the feasibility of our proposed scheme.
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment III
2012-04-01
U.S. Army Research Laboratory, ATTN: RDRL-HRM-DW, Aberdeen Proving Ground, MD 21005-5425. In this condition, the operator manually controlled all the robotic functions using a COTS Microsoft Xbox 360 game controller (Figure 3: Game controller (GC/MC)). Xbox is a trademark of Microsoft Corporation.
UAS Conflict-Avoidance Using Multiagent RL with Abstract Strategy Type Communication
NASA Technical Reports Server (NTRS)
Rebhuhn, Carrie; Knudson, Matt; Tumer, Kagan
2014-01-01
The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous systems into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate the heterogeneous path strategies of their neighbors. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that introducing abstract agent strategy types to partition the state space helps in resolving conflicts, particularly under high congestion.
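The core mechanism, conditioning a learned policy on a neighbour's announced strategy type, can be sketched with tabular Q-learning. The strategy labels, actions, and rewards below are invented for illustration; the paper's actual state space and reward design are richer:

```python
import random

def q_update(Q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# toy conflict scenario: the state is the neighbour's announced strategy
# type; the (made-up) rewards penalize conflicts and needless detours
ACTIONS = ("hold", "yield")
REWARD = {("aggressive", "hold"): -1.0, ("aggressive", "yield"): 1.0,
          ("cautious", "hold"): 1.0, ("cautious", "yield"): 0.0}

rng = random.Random(0)
Q = {}
for _ in range(500):
    state = rng.choice(("aggressive", "cautious"))
    action = rng.choice(ACTIONS)
    # one-step episodes: the encounter ends, so the next state is terminal
    q_update(Q, state, action, REWARD[(state, action)], "done", ACTIONS)

policy = {s: max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))
          for s in ("aggressive", "cautious")}
```

Partitioning the state space by strategy type, as here, is what lets the agent learn different responses to aggressive and cautious neighbours instead of one averaged behaviour.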
Announced Strategy Types in Multiagent RL for Conflict-Avoidance in the National Airspace
NASA Technical Reports Server (NTRS)
Rebhuhn, Carrie; Knudson, Matthew D.; Tumer, Kagan
2014-01-01
The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous systems into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate the heterogeneous path strategies of their neighbors. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that introducing abstract agent strategy types to partition the state space helps in resolving conflicts, particularly under high congestion.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; ...
2016-02-10
The computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton-Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems, including MHD duct flows, an unstable hydromagnetic Kelvin-Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results exploring the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA); thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and the bit rate of each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR-scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the receiving end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference-suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
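The trade-off the abstract describes reduces to two ratios: for a fixed chip rate and energy per chip, a spreading factor of N chips per bit divides the bit rate by N and multiplies the energy per bit by N. The chip rate and energy values below are assumptions for illustration, not the paper's parameters:

```python
def layer_link_params(chip_rate_hz, energy_per_chip_j, spreading_factor):
    """For a fixed chip rate and energy per chip, a longer spreading code
    (more chips per bit) lowers the bit rate but raises energy per bit."""
    bit_rate = chip_rate_hz / spreading_factor
    energy_per_bit = energy_per_chip_j * spreading_factor
    return bit_rate, energy_per_bit

# hypothetical numbers: 3.84 Mchip/s channel, two scalable layers
base = layer_link_params(3.84e6, 1e-9, 16)  # base layer: better protected
enh = layer_link_params(3.84e6, 1e-9, 4)    # enhancement layer: higher rate
```

This is why assigning the longer spreading code to the base layer gives it more energy per bit (hence more protection) at the cost of throughput, while the enhancement layer gets the opposite trade.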
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is enabling a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
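The Amdahl's-law analysis mentioned above has a simple closed form; a roughly 12-fold speedup on 12 cores implies a parallel fraction very close to one. The 0.95 figure below is an assumed value for contrast, not from the paper:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law for a fixed problem size: the serial fraction
    (1 - p) bounds the achievable speedup regardless of core count."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

perfect = amdahl_speedup(1.0, 12)   # fully parallel: 12x on 12 cores
typical = amdahl_speedup(0.95, 12)  # 5% serial work already caps gains
```

Even a 5% serial fraction would have limited the 12-core speedup to under 8x, which is why the observed 12-fold gain indicates near-complete parallelization of the pipeline.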
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a technique called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme. PMID:28072851
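As background, the (k, n) threshold behaviour that SSIS schemes build on can be illustrated with classic Shamir secret sharing over a prime field. This is a generic sketch of the threshold idea only, not the compressed-sensing construction the paper proposes; the field size and secret are arbitrary choices:

```python
import random

PRIME = 2**13 - 1  # small Mersenne prime (8191), large enough for the demo

def make_shares(secret, k, n, rng):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir):
    the secret is the constant term of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

rng = random.Random(7)
shares = make_shares(1234, k=3, n=5, rng=rng)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about the secret; the "smooth scalability" the paper claims goes further, letting reconstruction quality grow with the number of shares rather than jumping from nothing to everything at the threshold.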
Argonne Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-01-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distribu...
NASA Technical Reports Server (NTRS)
Bugby, D. C.; Farmer, J. T.; Stouffer, C. J.
2013-01-01
This paper describes the development and testing of a scalable thermal control architecture for instruments, subsystems, or systems that must operate in severe space environments with wide variations in sink temperature. The architecture is formed by linking one or more hot-side variable conductance heat pipes (VCHPs) in series with one or more cold-side loop heat pipes (LHPs). The VCHPs provide wide-area heat acquisition, limited-distance thermal transport, modest against-gravity pumping, concentrated LHP startup heating, and high-switching-ratio variable conductance operation. The LHPs provide localized heat acquisition, long-distance thermal transport, significant against-gravity pumping, and high-switching-ratio variable conductance operation. Combining two variable conductance devices in series ensures very high switching-ratio isolation from severe environments like the Earth's moon, where each lunar day spans 15 Earth days (270 K sink, with a surface-shielded, space-viewing radiator) and each lunar night spans 15 Earth days (80-100 K radiative sink, depending on location). The single-VCHP/single-LHP system described herein was developed to maintain thermal control of International Lunar Network (ILN) anchor node lander electronics, but it is also applicable to other variable heat rejection space missions in severe environments. The VCHP-LHP system utilizes a stainless steel wire-mesh-wick ammonia VCHP, a Teflon-wick propylene LHP, a pair of one-third-square-meter high-emissivity radiators (one capillary-pumped horizontal radiator and a second gravity-fed vertical radiator), a half-meter of transport distance, and a wick-bearing co-located flow regulator (CLFR) to allow operation with a hot (deactivated) radiator. The VCHP was designed with a small reservoir formed by extending the length of its stainless steel heat pipe tubing.
The system was able to provide end-to-end switching ratios of 300-500 during thermal vacuum testing at ATK, including 3-5 W/K ON conductance and 0.01 W/K OFF conductance. The test results described herein also include an in-depth analysis of VCHP condenser performance to explain VCHP switching operation in detail. Future multi-VCHP/multi-LHP thermal management system concepts that provide scalability to higher powers/longer transport lengths are also discussed in the paper.
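The reported end-to-end switching ratios follow directly from the measured ON and OFF conductances; a quick check:

```python
def switching_ratio(on_conductance_w_per_k, off_conductance_w_per_k):
    """Ratio of ON to OFF thermal conductance of the series VCHP-LHP chain:
    how strongly the system couples to the sink when active versus idle."""
    return on_conductance_w_per_k / off_conductance_w_per_k

low = switching_ratio(3.0, 0.01)   # lower end of the reported 3-5 W/K ON range
high = switching_ratio(5.0, 0.01)  # upper end of the reported range
```

With a 0.01 W/K OFF conductance, the 3-5 W/K ON conductances yield exactly the 300-500 switching ratios quoted above.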
17 CFR 37.1400 - Core Principle 14-System safeguards.
Code of Federal Regulations, 2014 CFR
2014-04-01
§ 37.1400 Core Principle 14—System safeguards (17 CFR, Commodity and Securities Exchanges, Swap Execution Facilities, System Safeguards). The rule requires ... procedures, and automated systems, that: (1) are reliable and secure; and (2) have adequate scalable capacity... The swap...
Bae, Won-Gyu; Kim, Hong Nam; Kim, Doogon; Park, Suk-Hee; Jeong, Hoon Eui; Suh, Kahp-Yang
2014-02-01
Multiscale, hierarchically patterned surfaces, such as lotus leaves, butterfly wings, and the adhesion pads of gecko lizards, are abundantly found in nature, where microstructures are usually used to strengthen mechanical stability while nanostructures offer the main functionality, i.e., wettability, structural color, or dry adhesion. To emulate such hierarchical structures in nature, multiscale, multilevel patterning has been extensively utilized over the last few decades for various applications ranging from wetting control and structural colors to tissue scaffolds. In this review, we highlight recent advances in scalable multiscale patterning that bring about improved functions which can even surpass those found in nature, with particular focus on the analogy between natural and synthetic architectures in terms of the role of different length scales. This review is organized into four sections. First, the role and importance of multiscale, hierarchical structures is described with four representative examples. Second, recent achievements in multiscale patterning are introduced with their strengths and weaknesses. Third, four application areas (wetting control, dry adhesives, selective filtration membranes, and multiscale tissue scaffolds) are reviewed, stressing how and why multiscale structures need to be incorporated to achieve their performance. Finally, we present future directions and challenges for scalable, multiscale patterned surfaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stochastic Multiscale Analysis and Design of Engine Disks
2010-07-28
Existing approaches have been shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, Isomap, etc.), motivating the need for scalable exascale computing algorithms. Materials Process Design and Control Laboratory, Cornell University.
Quantitative safety assessment of air traffic control systems through system control capacity
NASA Astrophysics Data System (ADS)
Guo, Jingjing
Quantitative Safety Assessments (QSA) are essential to safety benefit verification and to the regulation of developmental changes in safety-critical systems like Air Traffic Control (ATC) systems. Effectiveness of the assessments is particularly desirable today for the safe implementation of revolutionary ATC overhauls like NextGen and SESAR. QSA of ATC systems is, however, challenged by system complexity and a lack of accident data. Extending the idea from the literature that "safety is a control problem", this research proposes to assess system safety from the control perspective, by quantifying a system's "control capacity". A system's safety performance correlates with this "control capacity" in the control of "safety critical processes". To examine this idea in QSA of ATC systems, a Control-capacity Based Safety Assessment Framework (CBSAF) is developed, which includes two control capacity metrics and a procedural method. The two metrics are Probabilistic System Control-capacity (PSC) and Temporal System Control-capacity (TSC); each addresses one aspect of a system's control capacity. The procedural method consists of three general stages: (I) identification of safety critical processes, (II) development of system control models, and (III) evaluation of system control capacity. The CBSAF was tested in two case studies. The first assesses an en-route collision avoidance scenario and compares three hypothetical configurations. The CBSAF was able to capture the uncoordinated behavior between two means of control, as was observed in a historic midair collision accident. The second case study compares CBSAF with an existing risk-based QSA method in assessing the safety benefits of introducing a runway incursion alert system. Similar conclusions are reached by the two methods, while the CBSAF has the advantage of simplicity and provides a new control-based perspective and interpretation for the assessments.
The case studies are intended to investigate the potential and demonstrate the utilities of CBSAF and are not intended for thorough studies of collision avoidance and runway incursions safety, which are extremely challenging problems. Further development and thorough validations are required to allow CBSAF to reach implementation phases, e.g. addressing the issues of limited scalability and subjectivity.
An overview of the heterogeneous telescope network system: Concept, scalability and operation
NASA Astrophysics Data System (ADS)
White, R. R.; Allan, A.
2008-03-01
In the coming decade there will be an avalanche of data streams devoted to astronomical exploration, opening new windows of scientific discovery. The sheer volume of data and the diversity of event types (Kantor 2006; Kaiser 2004; Vestrand & Theiler & Wozniak 2004) will necessitate both a move to a common language for the communication of event data and enabling telescope systems not simply to respond, but to act independently in order to take full advantage of available resources in a timely manner. Developed over the past three years, the Virtual Observatory Event (VOEvent) provides the best format for carrying these diverse event messages (White et al. 2006a; Seaman & Warner 2006). However, in order for the telescopes to be able to act independently, a system of interoperable network nodes must be in place that will allow the astronomical assets not only to issue event notifications, but to coordinate and request specific observations. The Heterogeneous Telescope Network (HTN) is a network architecture that can achieve the goals set forth and provide a scalable design to match both fully autonomous and manual telescope system needs (Allan et al. 2006a; White et al. 2006b; Hessman 2006b). In this paper we will show the design concept of this meta-network and its nodes, their scalable architecture and complexity, and how this concept can meet the needs of institutions in the near future.
Integrating Technology into Standard Weight Loss Treatment: A Randomized Controlled Trial
Spring, Bonnie; Duncan, Jennifer M.; Janke, E. Amy; Kozak, Andrea T.; McFadden, H. Gene; DeMott, Andrew; Pictor, Alex; Epstein, Leonard H.; Siddique, Juned; Pellegrini, Christine A.; Buscemi, Joanna; Hedeker, Donald
2013-01-01
Background A challenge in the delivery of intensive obesity treatment is making care scalable. Little is known about whether the outcome of clinician-directed weight loss treatment can be improved by adding mobile technology. Methods We conducted a 2-arm, 12-month study (between October, 2007 and September, 2010). Seventy adults (body mass index [BMI] >25 and ≤ 40 kg/m2) were randomly assigned to either standard of care group treatment alone (Standard) or Standard + connective mobile technology system (+Mobile). Participants attended biweekly weight loss groups held by the VA outpatient clinic. The +Mobile group was provided personal digital assistants (PDAs) to self-monitor diet and physical activity; they also received biweekly coaching calls for 6 months. Weight was measured at baseline, 3, 6, 9, and 12 months follow-up. Results Sixty-nine adults received intervention (mean age 57.7 years, 85.5% male). A longitudinal intent-to-treat analysis indicated that the +Mobile group lost on average 8.6 more pounds (representing 3.1% more weight loss relative to the control) than the Standard group at each post-baseline time point, 95% CI [4.9, 12.2]. As compared to the Standard group, the +Mobile group had significantly greater odds of having lost 5% or more of their baseline weight at each post-baseline time point [OR= 6.5; 95% CI = 2.3, 18.6]. Conclusions The addition of a PDA and telephone coaching can enhance short-term weight loss in combination with an existing system of care. Mobile connective technology holds promise as a scalable delivery mechanism to augment the impact of clinician-delivered weight loss treatment. PMID:23229890
Sijbrandij, Marit; Acarturk, Ceren; Bird, Martha; Bryant, Richard A; Burchert, Sebastian; Carswell, Kenneth; de Jong, Joop; Dinesen, Cecilie; Dawson, Katie S; El Chammay, Rabih; van Ittersum, Linde; Jordans, Mark; Knaevelsrud, Christine; McDaid, David; Miller, Kenneth; Morina, Naser; Park, A-La; Roberts, Bayard; van Son, Yvette; Sondorp, Egbert; Pfaltz, Monique C; Ruttenberg, Leontien; Schick, Matthis; Schnyder, Ulrich; van Ommeren, Mark; Ventevogel, Peter; Weissbecker, Inka; Weitz, Erica; Wiedemann, Nana; Whitney, Claire; Cuijpers, Pim
2017-01-01
The crisis in Syria has resulted in vast numbers of refugees seeking asylum in Syria's neighbouring countries as well as in Europe. Refugees are at considerable risk of developing common mental disorders, including depression, anxiety, and posttraumatic stress disorder (PTSD). Most refugees do not have access to mental health services for these problems because of multiple barriers in national and refugee-specific health systems, including limited availability of mental health professionals. To counter some of the challenges arising from limited mental health system capacity, the World Health Organization (WHO) has developed a range of scalable psychological interventions aimed at reducing psychological distress and improving functioning in people living in communities affected by adversity. These interventions, including Problem Management Plus (PM+) and its variants, are intended to be delivered through individual or group face-to-face or smartphone formats by lay, non-professional people who have not received specialized mental health training. We provide an evidence-based rationale for the use of the scalable PM+ oriented programmes being adapted for Syrian refugees and provide information on the newly launched STRENGTHS programme for adapting, testing and scaling up of PM+ in various modalities in both neighbouring and European countries hosting Syrian refugees. PMID:29163867
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that more tailored approaches that exploit domain-specific information may be key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
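To make the decomposition idea above concrete, here is a minimal, hypothetical sketch (not the authors' code; the two-gene mutual-repression model, its parameters, and all function names are illustrative assumptions). Each coupled rate equation is integrated on its own, with the partner species supplied as a frozen trajectory, and the pair of trajectories is refined by iterating until they stop changing:

```python
# Toy decomposition sketch: dx/dt = a/(1 + y^n) - d*x, dy/dt = a/(1 + x^n) - d*y.
# Instead of integrating the coupled system, each equation is integrated
# separately against a fixed estimate of the other trajectory, then the
# estimates are refined iteratively (Picard-style sweeps).

def integrate_single(a, n, d, x0, other, dt, steps):
    """Euler-integrate one decoupled equation; `other` is a precomputed
    trajectory of the partner species (length steps+1)."""
    traj = [x0]
    x = x0
    for k in range(steps):
        x += dt * (a / (1.0 + other[k] ** n) - d * x)
        traj.append(x)
    return traj

def decomposed_fit(a=2.0, n=2, d=1.0, dt=0.01, steps=1000, sweeps=40):
    xs = [1.0] * (steps + 1)   # initial guesses: constant trajectories
    ys = [0.5] * (steps + 1)
    delta = float("inf")
    for _ in range(sweeps):
        xs_new = integrate_single(a, n, d, 1.0, ys, dt, steps)
        ys_new = integrate_single(a, n, d, 0.5, xs_new, dt, steps)
        delta = max(abs(u - v) for u, v in zip(ys, ys_new))
        xs, ys = xs_new, ys_new
        if delta < 1e-9:       # trajectories have converged
            break
    return xs, ys, delta

xs, ys, delta = decomposed_fit()
```

In a real estimation loop, the converged trajectories would be compared against measured gene-product time courses to score a candidate parameter set; the sketch only shows the decomposed-integration core.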
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
Rapidly advancing hardware technology, smart sensors and sensor networks are advancing environment sensing. One major potential of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality multi-agent data/information fusion that has characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, as compared to a fuzzy logic model, strongly supported the validity of the new model and inspired future directions for different levels of fusion and different applications.
Predictive functional control for active queue management in congested TCP/IP networks.
Bigdeli, N; Haeri, M
2009-01-01
Predictive functional control (PFC) is proposed as a new active queue management (AQM) method in dynamic TCP networks supporting explicit congestion notification (ECN). The ability of the controller to handle system delay, along with its simplicity and low computational load, makes PFC a privileged AQM method in high-speed networks. Besides, considering the disturbance term (which represents model/process mismatches, external disturbances, and existing noise) in the control formulation adds some level of robustness to the PFC-AQM controller. This is an important and desired property in the control of dynamically varying computer networks. In this paper, the controller is designed based on a small-signal linearized fluid-flow model of TCP/AQM networks. Then, a closed-loop transfer function representation of the system is derived to analyze the robustness with respect to the network and controller parameters. The analytical as well as the packet-level ns-2 simulation results show that the developed controller performs better in both queue regulation and resource utilization. Fast response, low queue fluctuations (and consequently low delay jitter), high link utilization, good disturbance rejection, scalability, and low packet marking probability are other features of the developed method with respect to other well-known AQM methods such as RED, PI, and REM, which are also simulated for comparison.
UPM: unified policy-based network management
NASA Astrophysics Data System (ADS)
Law, Eddie; Saxena, Achint
2001-07-01
Besides network management, it has become essential for the Internet to offer differentiated Quality of Service (QoS) to users. Policy-based management provides control over network routers to achieve this goal. The Internet Engineering Task Force (IETF) has proposed a two-tier architecture whose implementation is based on the Common Open Policy Service (COPS) protocol and the Lightweight Directory Access Protocol (LDAP). However, there are several limitations to this design, such as scalability and cross-vendor hardware compatibility. To address these issues, we present a functionally enhanced multi-tier policy management architecture design in this paper. Several extensions are introduced, thereby adding flexibility and scalability. In particular, an intermediate entity between the policy server and the policy rule database, called the Policy Enforcement Agent (PEA), is introduced. By keeping internal data in a common format, using a standard protocol, and by interpreting and translating request and decision messages from multi-vendor hardware, this agent allows a dynamic Unified Information Model throughout the architecture. We have tailor-made this unique information system to save policy rules in the directory server and allow execution of policy rules with dynamic addition of new equipment during run-time.
Food Safety in Low and Middle Income Countries
Grace, Delia
2015-01-01
Evidence on foodborne disease (FBD) in low and middle income countries (LMICs) is still limited, but important studies in recent years have broadened our understanding. These suggest that developing country consumers are concerned about FBD; that most of the known burden of FBD disease comes from biological hazards; and, that most FBD is the result of consumption of fresh, perishable foods sold in informal markets. FBD is likely to increase in LMICs as the result of massive increases in the consumption of risky foods (livestock and fish products and produce) and lengthening and broadening value chains. Although intensification of agricultural production is a strong trend, so far agro-industrial production and modern retail have not demonstrated clear advantages in food safety and disease control. There is limited evidence on effective, sustainable and scalable interventions to improve food safety in domestic markets. Training farmers on input use and good practices often benefits those farmers trained, but has not been scalable or sustainable, except where good practices are linked to eligibility for export. Training informal value chain actors who receive business benefits from being trained has been more successful. New technologies, growing public concern and increased emphasis on food system governance can also improve food safety. PMID:26343693
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hyun-Kyung; Bak, Seong-Min; Lee, Suk Woo
2016-01-27
Graphene nanomeshes (GNMs) with nanoscale periodic or quasi-periodic nanoholes have attracted considerable interest because of unique features such as their open energy band gap, enlarged specific surface area, and high optical transmittance. These features are useful for applications in semiconducting devices, photocatalysis, sensors, and energy-related systems. We report on the facile and scalable preparation of multifunctional micron-scale GNMs with a high density of nanoperforations by catalytic carbon gasification. The catalytic carbon gasification process induces selective decomposition on the graphene adjacent to the metal catalyst, thus forming nanoperforations. Furthermore, the pore size, pore density distribution, and neck size of the GNMs can be controlled by adjusting the size and fraction of the metal oxide on graphene. The fabricated GNM electrodes exhibit superior electrochemical properties for supercapacitor (ultracapacitor) applications, including exceptionally high capacitance (253 F g-1 at 1 A g-1) and high rate capability (212 F g-1 at 100 A g-1) with excellent cycle stability (91% of the initial capacitance after 50 000 charge/discharge cycles). Moreover, the edge-enriched structure of GNMs plays an important role in achieving edge-selected and high-level nitrogen doping.
Digital quantum simulators in a scalable architecture of hybrid spin-photon qubits
Chiesa, Alessandro; Santini, Paolo; Gerace, Dario; Raftery, James; Houck, Andrew A.; Carretta, Stefano
2015-01-01
Resolving quantum many-body problems represents one of the greatest challenges in physics and physical chemistry, due to the prohibitively large computational resources that would be required by classical computers. A solution has been foreseen by directly simulating the time evolution through sequences of quantum gates applied to arrays of qubits, i.e. by implementing a digital quantum simulator. Superconducting circuits and resonators are emerging as an extremely promising platform for quantum computation architectures, but a digital quantum simulator proposal that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is presently lacking. Here we propose a viable scheme to implement a universal quantum simulator with hybrid spin-photon qubits in an array of superconducting resonators, which is intrinsically scalable and allows for local control. As representative examples we consider the transverse-field Ising model, a spin-1 Hamiltonian, and the two-dimensional Hubbard model, and we numerically simulate the scheme by including the main sources of decoherence. PMID:26563516
Development of Mission Enabling Infrastructure — Cislunar Autonomous Positioning System (CAPS)
NASA Astrophysics Data System (ADS)
Cheetham, B. W.
2017-10-01
Advanced Space, LLC is developing the Cislunar Autonomous Positioning System (CAPS) which would provide a scalable and evolvable architecture for navigation to reduce ground congestion and improve operations for missions throughout cislunar space.
Scalable quantum computer architecture with coupled donor-quantum dot qubits
Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey
2014-08-26
A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tierney, Brian D.; Choi, Sukwon; DasGupta, Sandeepan
2017-08-16
A distributed impedance "field cage" structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors (HEMTs) operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dVds/dt = 100 V/ns. For both DC and transient results, the voltage between the gate and drain is laterally distributed, ensuring the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme indicates promise for achieving breakdown voltage scalability to a few kV.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. For further improving the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.
2011-02-01
For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager's bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.
Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.
Takaki, Yasuhiro; Nakaoka, Mitsuki
2016-08-08
Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.
Platform for efficient switching between multiple devices in the intensive care unit.
De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F
2015-01-01
This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and by facilitating the switching between those devices, staff members can efficiently integrate them in their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and providing tailored and dynamic information at the point of care. The platform is designed based on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device. The visualization of the data is adapted to the type of the device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated. The scalability, performance and user experience were evaluated. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution to enable the efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform were evaluated, and it was shown that the response time and scalability of the platform were within an acceptable range.
From fuzzy recurrence plots to scalable recurrence networks of time series
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2017-04-01
Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
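The crisp (defuzzified) endpoint of the construction described above can be illustrated with a short, hypothetical sketch (not the paper's fuzzy formulation; the threshold, embedding parameters, and function names are illustrative assumptions): states of a time-delay-embedded series that recur within a distance threshold become linked nodes of an undirected, unweighted network.

```python
import math

def embed(series, dim=2, tau=1):
    """Time-delay embedding: each state is a dim-length delayed window."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def recurrence_network(series, eps, dim=2, tau=1):
    """Undirected, unweighted recurrence network: an edge links states
    whose max-norm distance is within eps (no self-loops)."""
    states = embed(series, dim, tau)
    n = len(states)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist = max(abs(a - b) for a, b in zip(states[i], states[j]))
            if dist <= eps:
                adj[i][j] = adj[j][i] = True
    return adj

def degrees(adj):
    """Simplest network measure: node degree."""
    return [sum(row) for row in adj]

# Example: a (near-)periodic signal revisits its states, so every node links
# to states roughly one period away.
signal = [math.sin(0.3 * k) for k in range(200)]
adj = recurrence_network(signal, eps=0.1)
deg = degrees(adj)
```

From the adjacency matrix, any standard network measure (degree distribution, clustering, path lengths) can then be computed; scalability for big data comes from sparsifying or sampling this construction rather than materializing the full matrix.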
Functionality limit of classical simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2015-09-01
By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
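For readers unfamiliar with the setup being analyzed, here is a minimal, hypothetical sketch of classical simulated annealing on a random TSP instance (an illustrative assumption, not the study's code): 2-opt moves, Metropolis acceptance, geometric cooling, and an explicit "temperature length" parameter, i.e. the number of moves attempted at each temperature.

```python
import math
import random

def tour_length(cities, tour):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, t0=1.0, alpha=0.95, temp_length=100, t_min=1e-3, seed=0):
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    cur = tour_length(cities, tour)
    best, best_tour = cur, tour
    t = t0
    while t > t_min:
        for _ in range(temp_length):       # "temperature length": moves per step
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
            delta = tour_length(cities, cand) - cur
            # Metropolis rule: accept improvements always, uphill moves sometimes
            if delta < 0 or rng.random() < math.exp(-delta / t):
                tour, cur = cand, cur + delta
                if cur < best:
                    best, best_tour = cur, tour
        t *= alpha                          # geometric cooling schedule
    return best_tour, best

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(30)]
best_tour, best = anneal(cities)
```

The study's point concerns how `temp_length` must grow with problem size for such a schedule to keep working; the sketch only fixes the ingredients that the analysis refers to.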
Alves, F.
2015-01-01
We prepared new and scalable, hybrid inorganic–organic step-growth hydrogels with polyhedral oligomeric silsesquioxane (POSS) network knot construction elements and hydrolytically degradable poly(ethylene glycol) (PEG) di-ester macromonomers by in situ radical-mediated thiol–ene photopolymerization. The physicochemical properties of the gels are fine-tailored over orders of magnitude including functionalization of their interior, a hierarchical gel structure, and biodegradability. PMID:25821524
An Ephemeral Burst-Buffer File System for Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Moody, Adam; Yu, Weikuan
BurstFS is a distributed file system for node-local burst buffers on high-performance computing systems. BurstFS presents a shared file-system space across the burst buffers, so that applications that use shared files can access the highly scalable burst buffers without modification.
Constrained tri-sphere kinematic positioning system
Viola, Robert J
2010-12-14
A scalable and adaptable, six-degree-of-freedom, kinematic positioning system is described. The system can position objects supported on top of, or suspended from, jacks comprising constrained joints. The system is compatible with extreme low-temperature or high-vacuum environments. When constant adjustment is not required, a removable motor unit is available.
Developing, Implementing, and Assessing an Early Alert System
ERIC Educational Resources Information Center
Tampke, Dale R.
2013-01-01
Early alert systems offer institutions systematic approaches to identifying and intervening with students exhibiting at-risk behaviors. Many of these systems rely on a common format for student referral to a central receiving point. Systems at larger institutions often use web-based technology to allow for a scalable (available campus-wide) approach…
Skills Training to Avoid Inadvertent Plagiarism: Results from a Randomised Control Study
ERIC Educational Resources Information Center
Newton, Fiona J.; Wright, Jill D.; Newton, Joshua D.
2014-01-01
Plagiarism continues to be a concern within academic institutions. The current study utilised a randomised control trial of 137 new entry tertiary students to assess the efficacy of a scalable short training session on paraphrasing, patch writing and plagiarism. The results indicate that the training significantly enhanced students' overall…
A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.
ERIC Educational Resources Information Center
Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin
2003-01-01
Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
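The trade-off between the two protection methods can be made concrete with back-of-the-envelope numbers; the 1 Mbit/s channel, rate-1/2 code, and 16-QAM bit split below are hypothetical illustrations, not figures from the paper:

```python
def fec_payload_rate(channel_rate, code_rate):
    # Rate-k/n FEC: only a code_rate fraction of channel bits carries video payload
    return channel_rate * code_rate

def hqam_layer_rates(channel_rate, bits_hp=2, bits_lp=2):
    # Hierarchical 16-QAM: each symbol carries bits_hp high-priority and bits_lp
    # low-priority bits, so the HP/LP data ratio is fixed by the modulation itself
    total = bits_hp + bits_lp
    return channel_rate * bits_hp / total, channel_rate * bits_lp / total

rate = 1_000_000                      # hypothetical 1 Mbit/s channel
fec = fec_payload_rate(rate, 1 / 2)   # rate-1/2 FEC: strong protection, 50% overhead
hp, lp = hqam_layer_rates(rate)       # HQAM: no overhead, but a fixed 50/50 split
```

This is exactly the constraint pair the paper analyses: FEC trades throughput for protection freely, while HQAM costs nothing in rate but pins the priority layers to the ratio imposed by the constellation, which is why the authors combine the two.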
Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors
NASA Astrophysics Data System (ADS)
de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.
2008-02-01
This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures scalability, so it can be used for sensors with different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that permits the inclusion of high-level image preprocessing routines, such as faulty pixel detection and sensor calibration, in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable prototyping tool when choosing the best A/D converter type for the application. Development time was significantly reduced by the use of this technology.
MEMS Device Being Developed for Active Cooling and Temperature Control
NASA Technical Reports Server (NTRS)
Moran, Matthew E.
2001-01-01
High-capacity cooling options remain limited for many small-scale applications such as microelectronic components, miniature sensors, and microsystems. A microelectromechanical system (MEMS) is currently under development at the NASA Glenn Research Center to meet this need. It uses a thermodynamic cycle to provide cooling or heating directly to a thermally loaded surface. The device can be used strictly in the cooling mode, or it can be switched between cooling and heating modes in milliseconds for precise temperature control. Fabrication and assembly are accomplished by wet etching and wafer bonding techniques routinely used in the semiconductor processing industry. Benefits of the MEMS cooler include scalability to fractions of a millimeter, modularity for increased capacity and staging to low temperatures, simple interfaces and limited failure modes, and minimal induced vibration.
Reactor application of an improved bundle divertor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T.F.; Ruck, G.W.; Lee, A.Y.
1978-11-01
A bundle divertor was chosen as the impurity control and plasma exhaust system for the beam-driven Demonstration Tokamak Hybrid Reactor (DTHR). In the context of a preconceptual design study of the reactor and associated facility, a bundle divertor concept was developed and integrated into the reactor system. The overall system was found feasible and scalable for reactors with intermediate toroidal field strengths on axis. The important design characteristics are: the overall average current density of the divertor coils is 0.73 kA for each tesla of toroidal field on axis; the divertor windings are made from superconducting cables supported by steel structures and are designed to be maintainable; the particle collection assembly and auxiliary cryosorption vacuum pump are dual systems designed such that they can be reactivated alternately to allow for continuous reactor operation; and the power requirement for energizing and operating the divertor is about 5 MW.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schröder, T.; Walsh, M.; Zheng, J.
2017-04-06
Towards building large-scale integrated photonic systems for quantum information processing, spatial and spectral alignment of single quantum systems to photonic nanocavities is required. In this paper, we demonstrate spatially targeted implantation of nitrogen-vacancy (NV) centers into the mode maximum of 2D diamond photonic crystal cavities with quality factors up to 8000, achieving an average of 1.1 ± 0.2 NVs per cavity. Nearly all NV-cavity systems show significant emission intensity enhancement, reaching a cavity-fed spectrally selective intensity enhancement, F_int, of up to 93. Although spatial NV-cavity overlap is nearly guaranteed within about 40 nm, spectral tuning of the NV's zero-phonon line (ZPL) is still necessary after fabrication. To demonstrate spectral control, we temperature-tune a cavity into an NV ZPL, yielding F_int^ZPL ≈ 5 at cryogenic temperatures.
Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.
2017-12-01
THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Because THREDDS lacks horizontal scalability and automatic configuration management and deployment, the service often suffers downtimes and time-consuming configuration tasks, especially under the intensive use that is common in scientific communities (e.g. climate). Instead of the typical installation of one or more independent, manually configured THREDDS servers, this work presents an automatically provisioned, deployed, and orchestrated cluster of THREDDS servers. The solution is based on Ansible playbooks, used to control deployment and configuration on an infrastructure and to manage the datasets available in THREDDS instances. The playbooks are built from modules (roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of THREDDS server instances. This implementation supports different infrastructure and deployment scenarios: more workers are added to the cluster simply by declaring them as Ansible variables and re-running the playbooks. It also provides fault tolerance and better reliability, since if any worker fails another instance in the cluster can take over. To test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by distributing requests to the backend workers instead of serving them from a single THREDDS worker. In conclusion, the proposed configuration represents a significant improvement over configurations based on non-collaborative THREDDS instances.
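The frontend delegation and failover behaviour described in this record can be illustrated with a toy round-robin balancer. This is a schematic in Python, not the actual Ansible/HTTP-proxy setup, and the worker names are invented:

```python
import itertools

class RoundRobinBalancer:
    """Toy front-end that delegates requests to backend THREDDS-like workers,
    skipping any worker marked as failed (illustrative sketch only)."""

    def __init__(self, workers):
        self.workers = list(workers)
        self.failed = set()
        self._cycle = itertools.cycle(range(len(self.workers)))

    def mark_failed(self, worker):
        # In a real deployment a health check would do this automatically
        self.failed.add(worker)

    def next_worker(self):
        # Try each worker at most once per request before giving up
        for _ in range(len(self.workers)):
            w = self.workers[next(self._cycle)]
            if w not in self.failed:
                return w
        raise RuntimeError("no healthy backend workers")

lb = RoundRobinBalancer(["thredds-1", "thredds-2", "thredds-3"])
order = [lb.next_worker() for _ in range(3)]   # round-robin over all workers
lb.mark_failed("thredds-2")                    # failover: worker 2 is now skipped
after = [lb.next_worker() for _ in range(4)]   # remaining workers take over
```

Adding a worker to this sketch is one line (append to `workers`), mirroring how the paper's cluster grows by declaring a new host as an Ansible variable and re-running the playbooks.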
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning, and scale testing of the data and workload management tools, the various computing workflows, and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in past years, with increasing scale and complexity, have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system by performing functionality, reliability, and scale tests; helping sites to commission, configure, and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable, and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers, with the aim of stressing the experiment and Grid data management and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis, and monitoring systems.
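The load generators mentioned above follow a simple pattern: drive a submission endpoint at scale and tally the outcomes. The sketch below is illustrative only; `mock_submit` and its failure pattern are invented stand-ins, not a CMS tool:

```python
def run_load_test(submit, n_jobs):
    # Stress a workload-management endpoint with n_jobs submissions and
    # tally the outcomes: scale testing in miniature
    tally = {"ok": 0, "failed": 0}
    for job_id in range(n_jobs):
        tally["ok" if submit(job_id) else "failed"] += 1
    return tally

def mock_submit(job_id):
    # Deterministic stand-in backend: every 10th submission fails
    return job_id % 10 != 0

result = run_load_test(mock_submit, 1000)
```

Tracking the success ratio as `n_jobs` grows is the essence of the commissioning procedure described: a site is validated when the system sustains the target load with acceptable reliability.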
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering, and examine their effects on performance metrics including processing time and scalability. Safety-critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety-critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
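The reduction idea (satisfiability as a search for a model of the formula) can be shown in miniature with a brute-force checker for LTL over finite traces. This toy enumerator is only a conceptual illustration and bears no resemblance to the symbolic, BDD-based algorithms the paper investigates:

```python
from itertools import product

# Formulas as nested tuples: ('true',), ('ap', 'p'), ('not', f), ('and', f, g),
# ('next', f), ('until', f, g).  F and G are derived operators.

def holds(f, trace, i=0):
    if i >= len(trace):          # strong semantics: nothing holds past the end
        return False
    op = f[0]
    if op == 'true':
        return True
    if op == 'ap':
        return f[1] in trace[i]
    if op == 'not':
        return not holds(f[1], trace, i)
    if op == 'and':
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == 'next':
        return holds(f[1], trace, i + 1)
    if op == 'until':            # f[2] eventually holds, f[1] holds until then
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

def F(f): return ('until', ('true',), f)          # eventually
def G(f): return ('not', F(('not', f)))           # always
def implies(f, g): return ('not', ('and', f, ('not', g)))

def satisfiable(f, props=('p', 'q'), max_len=4):
    # Bounded search for a model: try every trace over all subsets of the
    # atomic propositions, up to length max_len
    states = [frozenset(p for p, b in zip(props, bits) if b)
              for bits in product([False, True], repeat=len(props))]
    return any(holds(f, list(trace))
               for n in range(1, max_len + 1)
               for trace in product(states, repeat=n))

sat_phi = satisfiable(G(implies(('ap', 'p'), F(('ap', 'q')))))  # every p followed by a q
sat_psi = satisfiable(('and', G(('ap', 'p')), F(('not', ('ap', 'p')))))  # contradiction
```

The symbolic approach of the paper replaces this explicit trace enumeration with a BDD-encoded transition system, which is what makes the method scale to formulas far beyond the reach of a brute-force search.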