DOT National Transportation Integrated Search
2006-12-01
Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp-metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems
ERIC Educational Resources Information Center
Diamanti, Eirini Ilana
2012-01-01
Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…
Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System
NASA Astrophysics Data System (ADS)
Yue, Liu; Hang, Mend
2018-01-01
With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to improve wind power availability and grid stability, but the integration of wind farms changes the SSO (sub-synchronous oscillation) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). SSO modelling and analysis methods are then categorized and compared by their applicable areas. Furthermore, the paper summarizes the suppression measures adopted in actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimation. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
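The core approximation step, an RBFNN standing in for an unknown interaction function, can be sketched in a few lines. This is a minimal illustration with made-up centers, widths, and a normalized least-mean-squares update; the paper's actual adaptive laws drive the weights with the tracking error inside the Lyapunov design:

```python
import numpy as np

# Minimal RBFNN approximating an "unknown" scalar nonlinearity
# f(x) ~ W^T phi(x). All constants here are illustrative choices.

def rbf_features(x, centers, width):
    """Gaussian bases phi_i(x) = exp(-||x - c_i||^2 / width^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width ** 2)

centers = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)   # grid of centers
width = 0.4
weights = np.zeros(len(centers))
f_true = lambda x: np.sin(3 * x) + 0.5 * x ** 2       # unknown to the controller

rng = np.random.default_rng(0)
eta = 0.5                                             # adaptation gain
for _ in range(5000):
    x = rng.uniform(-2.0, 2.0, size=1)
    phi = rbf_features(x, centers, width)
    err = f_true(x[0]) - weights @ phi
    weights += eta * err * phi / (phi @ phi + 1e-9)   # normalized LMS step

x_test = np.array([0.7])
print(weights @ rbf_features(x_test, centers, width), f_true(0.7))
```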
Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker
NASA Astrophysics Data System (ADS)
Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong
2017-10-01
Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for the establishment of the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment is implemented to verify the proposed algorithms. First, the hand-eye transformation matrix is solved; then a car body rear is measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear is reconstructed successfully.
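The global data fusion model described above amounts to composing homogeneous transforms: a point measured in the scanner frame is carried through the robot end frame, tracked by the laser tracker, into the world frame. A minimal sketch with placeholder matrices; in the real system T_end_cam would come from the hand-eye calibration and T_world_end from the tracker at each pose:

```python
import numpy as np

# Sketch of the global data fusion step: a scanner-frame point is mapped
# into the world frame through the tracked robot end frame. The matrices
# below are illustrative stand-ins.

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_end_cam = make_T(np.eye(3), [0.10, 0.00, 0.05])    # hand-eye result
T_world_end = make_T(np.eye(3), [1.50, 2.00, 0.80])  # laser-tracker reading

p_cam = np.array([0.02, -0.01, 0.60, 1.0])           # scanner point (homogeneous)
p_world = T_world_end @ T_end_cam @ p_cam            # fused world-frame point
print(p_world[:3])
```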
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
NASA Technical Reports Server (NTRS)
Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.
1993-01-01
The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale-integrated/ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.
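Numerically, a VanderLugt correlator performs matched filtering with Fourier transforms: the filter-plane SLM displays the conjugate spectrum of a reference, and a bright peak in the output plane marks the match. A toy digital stand-in at the devices' 64 x 64 resolution, not the authors' simulation code:

```python
import numpy as np

# Digital stand-in for a VanderLugt correlator: correlate a scene with a
# matched filter and locate the output-plane peak. The binary test
# pattern is arbitrary.

N = 64
rng = np.random.default_rng(1)
reference = (rng.random((N, N)) > 0.7).astype(float)  # binary reference
scene = np.roll(reference, (5, -3), axis=(0, 1))      # shifted copy to detect

F_scene = np.fft.fft2(scene)
F_filter = np.conj(np.fft.fft2(reference))            # matched filter
corr = np.abs(np.fft.ifft2(F_scene * F_filter))       # output plane

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)                    # the (5, -3) shift, mod 64
```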
Large-scale flow experiments for managing river systems
Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.
2011-01-01
Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.
NASA Technical Reports Server (NTRS)
Aanstoos, J. V.; Snyder, W. E.
1981-01-01
Anticipated major advances in integrated circuit technology in the near future are described, as well as their impact on satellite onboard signal processing systems. Dramatic improvements in chip density, speed, power consumption, and system reliability are expected from very large scale integration. These improvements will enable more intelligence to be placed on remote sensing platforms in space, meeting the goals of NASA's information adaptive system concept, a major component of the NASA End-to-End Data System program. A forecast of VLSI technological advances is presented, including a description of the Defense Department's very high speed integrated circuit program, a seven-year research and development effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, and acquisition: very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large-scale storage networking and assurance of correct and secure operation. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The favorable characteristics of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brancucci Martinez-Anido, C.; Hodge, B. M.
2014-09-01
This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).
Data integration in the era of omics: current and future challenges
2014-01-01
Integrating heterogeneous and large omics data constitutes not only a conceptual challenge but a practical hurdle in the daily analysis of omics data. With the rise of novel omics technologies and through large-scale consortia projects, biological systems are being investigated at an unprecedented scale, generating heterogeneous and often large data sets. These data sets encourage researchers to develop novel data integration methodologies. In this introduction we review the definition of, and characterize current efforts on, data integration in the life sciences. We have used a web survey to assess current research projects on data integration and to tap into the views, needs and challenges as currently perceived by parts of the research community. PMID:25032990
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
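The principle the paper tests at scale can be shown on a tiny semilinear system u' = Au + g(u): the stiff linear part is propagated through matrix functions, so the step size is no longer limited by stiffness. A sketch of the first-order exponential Euler method (the EPIRK methods evaluated in the paper are higher-order relatives); the 2x2 system and step size are invented:

```python
import numpy as np
from scipy.linalg import expm

# Exponential Euler for u' = A u + g(u): the linear part is treated
# exactly via phi_1(hA), so accuracy rather than stiffness sets h.

A = np.array([[-1000.0, 0.0],
              [0.0, -0.5]])                    # widely separated time scales
g = lambda u: np.array([0.0, np.sin(u[0])])    # mild nonlinearity

def phi1(Z):
    """phi_1(Z) = Z^{-1} (e^Z - I), dense form for small systems."""
    return np.linalg.solve(Z, expm(Z) - np.eye(len(Z)))

h = 0.1                                        # explicit Euler would need h < 0.002
u = np.array([1.0, 1.0])
for _ in range(100):
    u = u + h * phi1(h * A) @ (A @ u + g(u))   # exponential Euler step
print(u)
```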
Energy Systems Integration Facility Overview
Arvizu, Dan; Chistensen, Dana; Hannegan, Bryan; Garret, Bobi; Kroposki, Ben; Symko-Davies, Martha; Post, David; Hammond, Steve; Kutscher, Chuck; Wipke, Keith
2018-01-16
The U.S. Department of Energy's Energy Systems Integration Facility (ESIF), located at the National Renewable Energy Laboratory, is the right tool at the right time: a first-of-its-kind facility that addresses the challenges of large-scale integration of clean energy technologies into the energy systems that power the nation.
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Evolving from bioinformatics in-the-small to bioinformatics in-the-large.
Parker, D Stott; Gorlick, Michael M; Lee, Christopher J
2003-01-01
We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
ERIC Educational Resources Information Center
Alexander, George
1984-01-01
Discusses small-scale integrated (SSI), medium-scale integrated (MSI), large-scale integrated (LSI), very large-scale integrated (VLSI), and ultra large-scale integrated (ULSI) chips. The development and properties of these chips, uses of gallium arsenide, Josephson devices (two superconducting strips sandwiching a thin insulator), and future…
Quellmalz, Edys S; Pellegrino, James W
2009-01-02
Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as development, administration, and scoring of tests, as well as reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used in both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.
Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid
NASA Astrophysics Data System (ADS)
Kuwayama, Akira
The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.
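The output control problem addressed by such projects can be caricatured as ramp-rate limiting with a battery: the battery absorbs the difference between the raw PV output and a grid feed whose ramp is capped. A sketch with invented numbers, not data or control laws from the Wakkanai demonstration:

```python
import numpy as np

# Toy ramp-rate smoothing of a PV profile with a battery. Limits, the
# cloud transient, and battery sizing are all made-up values.

pv = np.clip(np.sin(np.linspace(0, np.pi, 288)), 0, None) * 4.0  # MW, 5-min steps
pv[140:160] *= 0.3                      # sudden cloud passage

max_ramp = 0.05                         # MW allowed per 5-min step
soc, capacity = 5.0, 10.0               # MWh state of charge / capacity
grid = [pv[0]]
for p in pv[1:]:
    target = np.clip(p, grid[-1] - max_ramp, grid[-1] + max_ramp)
    soc -= (target - p) * (5 / 60)      # battery covers the mismatch (MWh)
    soc = np.clip(soc, 0.0, capacity)   # sketch ignores saturation effects
    grid.append(target)
print(max(abs(np.diff(grid))))          # grid ramps now bounded by max_ramp
```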
The revolution in data gathering systems
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Trover, W. F.
1975-01-01
Data acquisition systems used in NASA's wind tunnels from the 1950's through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology that transformed the central computer system and led, finally, to the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro and minicomputers.
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
NASA Technical Reports Server (NTRS)
Greene, P. H.
1972-01-01
Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
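The two routes to a Kirkwood-Buff integral compared here are easy to state in code: a running integral over g(r), versus particle-number fluctuations in an open sub-volume. A sketch on synthetic data (a toy g(r) and ideal-gas-like counts), not output from the reported simulations:

```python
import numpy as np

# Two routes to a Kirkwood-Buff integral, shown on synthetic inputs.
# Real use would take g(r) and sub-volume counts from an MD trajectory.

r = np.linspace(0.01, 3.0, 600)                   # radial grid (nm)
g = 1 + np.exp(-(r - 0.35) / 0.1) * np.sin(12 * r) * np.exp(-r)  # toy g(r)

# Method 1: running integral over the RDF,
#   G = 4*pi * Integral of (g(r) - 1) r^2 dr
G_rdf = 4 * np.pi * np.trapz((g - 1) * r ** 2, r)

# Method 2: particle-number fluctuations in an open sub-volume V,
#   G = V * (<N^2> - <N>^2 - <N>) / <N>^2
rng = np.random.default_rng(2)
N = rng.poisson(50.0, size=20000)                 # ideal-gas-like counts
V = 1.0
G_fluct = V * (N.var() - N.mean()) / N.mean() ** 2

print(G_rdf, G_fluct)                             # G_fluct ~ 0 for an ideal gas
```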
Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul
2008-01-01
With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.
Integral criteria for large-scale multiple fingerprint solutions
NASA Astrophysics Data System (ADS)
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
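Empirically, the criterion reduces to choosing the integral-score threshold from the impostor distribution at the predefined FAR and reading off the FRR on the genuine distribution. A sketch with placeholder Gaussian score models standing in for real fused multi-finger match scores:

```python
import numpy as np

# Pick the score threshold that meets a target FAR, then report the FRR.
# The Gaussian score models below are placeholders, not real data.

rng = np.random.default_rng(3)
impostor = rng.normal(0.30, 0.10, 100000)   # impostor integral scores
genuine = rng.normal(0.70, 0.12, 100000)    # genuine integral scores

far_target = 1e-3
threshold = np.quantile(impostor, 1 - far_target)   # accept if score >= t
frr = np.mean(genuine < threshold)                  # resulting FRR
print(f"threshold={threshold:.3f}  FRR={frr:.4f} at FAR={far_target}")
```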
Large-Scale Document Automation: The Systems Integration Issue.
ERIC Educational Resources Information Center
Kalthoff, Robert J.
1985-01-01
Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…
A Navy Shore Activity Manpower Planning System for Civilians. Technical Report No. 24.
ERIC Educational Resources Information Center
Niehaus, R. J.; Sholtz, D.
This report describes the U.S. Navy Shore Activity Manpower Planning System (SAMPS) advanced development research project. This effort is aimed at large-scale feasibility tests of manpower models for large Naval installations. These local planning systems are integrated with Navy-wide information systems on a data-communications network accessible…
Data Integration: Charting a Path Forward to 2035
2011-02-14
A Programmable and Configurable Mixed-Mode FPAA SoC
2016-03-17
Sahil Shah, Sihwan Kim, Farhan Adil, Jennifer Hasler, Suma George, Michelle Collins, Richard...
The authors present a Floating-Gate based, System-on-Chip large-scale Field-Programmable Analog Array IC that integrates divergent concepts... Keywords: Floating-Gate, SoC, Command Word Classification.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
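The co-iteration feature can be pictured with two toy federates that exchange values and iterate to convergence before advancing time. This sketch uses plain Python stand-ins for federates, not the actual HELICS API:

```python
# Schematic of co-iteration between two coupled simulators ("federates"):
# exchange values, iterate to a fixed point, then advance together.
# Both models below are invented toys.

def power_flow(voltage, load):
    return 1.0 - 0.05 * load * voltage     # toy grid model

def end_use(voltage):
    return 2.0 * voltage                   # toy load model

t, dt = 0.0, 1.0
voltage, load = 1.0, 2.0
while t < 5.0:
    for _ in range(50):                    # co-iterate until converged
        new_voltage = power_flow(voltage, load)
        new_load = end_use(new_voltage)
        if abs(new_voltage - voltage) < 1e-9:
            break
        voltage, load = new_voltage, new_load
    t += dt                                # both federates advance in time
print(voltage, load)
```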
Single-chip microprocessor that communicates directly using light.
Sun, Chen; Wade, Mark T; Lee, Yunsup; Orcutt, Jason S; Alloatti, Luca; Georgas, Michael S; Waterman, Andrew S; Shainline, Jeffrey M; Avizienis, Rimas R; Lin, Sen; Moss, Benjamin R; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H; Cook, Henry M; Ou, Albert J; Leu, Jonathan C; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J; Popović, Miloš A; Stojanović, Vladimir M
2015-12-24
Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems--from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a 'zero-change' approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Renewable Fuels-to-Grid Integration | Energy Systems Integration Facility
NASA Astrophysics Data System (ADS)
Parks, Helen Frances
This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long time simulations, in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the field of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in approaching very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.
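The baseline the second project builds on, structure-agnostic POD with snapshots, fits in a few lines: collect snapshots of a trajectory, take an SVD, and keep the energy-dominant modes as a reduced basis. A sketch on an arbitrary diffusion-like system; the structured, Lie-group variant with canonical coordinates is beyond this illustration:

```python
import numpy as np

# Plain POD with snapshots: SVD of the snapshot matrix gives an
# energy-ranked basis; the operator is Galerkin-projected onto it.
# The 1D diffusion model is just a convenient source of snapshots.

n, steps = 200, 400
x = np.linspace(0, 1, n)
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # diffusion stencil
u = np.exp(-100 * (x - 0.5) ** 2)                      # initial bump

snapshots = []
for _ in range(steps):
    u = u + 0.2 * A @ u                    # explicit time stepping
    snapshots.append(u.copy())
S = np.array(snapshots).T                  # columns are snapshots

U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = np.searchsorted(energy, 0.9999) + 1    # modes capturing 99.99% energy
basis = U[:, :r]
print(f"kept {r} of {n} modes")

A_r = basis.T @ A @ basis                  # reduced (projected) operator
```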
Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent
NASA Astrophysics Data System (ADS)
Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.
2014-06-01
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.
Single-chip microprocessor that communicates directly using light
NASA Astrophysics Data System (ADS)
Sun, Chen; Wade, Mark T.; Lee, Yunsup; Orcutt, Jason S.; Alloatti, Luca; Georgas, Michael S.; Waterman, Andrew S.; Shainline, Jeffrey M.; Avizienis, Rimas R.; Lin, Sen; Moss, Benjamin R.; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H.; Cook, Henry M.; Ou, Albert J.; Leu, Jonathan C.; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J.; Popović, Miloš A.; Stojanović, Vladimir M.
2015-12-01
Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System-of-systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework, based on a state-, model-, and goal-based architecture for semi-autonomous control systems, that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. The paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
A multidisciplinary approach to the development of low-cost high-performance lightwave networks
NASA Technical Reports Server (NTRS)
Maitan, Jacek; Harwit, Alex
1991-01-01
Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.
Transforming Power Systems Through Global Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-06-01
Ambitious and integrated policy and regulatory frameworks are crucial to achieve power system transformation. The 21st Century Power Partnership -- a multilateral initiative of the Clean Energy Ministerial -- serves as a platform for public-private collaboration to advance integrated solutions for the large-scale deployment of renewable energy in combination with energy efficiency and grid modernization.
A large-scale clinical validation of an integrated monitoring system in the emergency department.
Clifton, David A; Wong, David; Clifton, Lei; Wilson, Sarah; Way, Rob; Pullinger, Richard; Tarassenko, Lionel
2013-07-01
We consider an integrated patient monitoring system, combining electronic patient records with high-rate acquisition of patient physiological data. There remain many challenges in increasing the robustness of "e-health" applications to a level at which they are clinically useful, particularly in the use of automated algorithms used to detect and cope with artifact in data contained within the electronic patient record, and in analyzing and communicating the resultant data for reporting to clinicians. There is a consequential "plague of pilots," in which engineering prototype systems do not enter into clinical use. This paper describes an approach in which, for the first time, the Emergency Department (ED) of a major research hospital has adopted such systems for use during a large clinical trial. We describe the disadvantages of existing evaluation metrics when applied to such large trials, and propose a solution suitable for large-scale validation. We demonstrate that machine learning technologies embedded within healthcare information systems can provide clinical benefit, with the potential to improve patient outcomes in the busy environment of a major ED and other high-dependence areas of patient care.
Establishment of a National Wind Energy Center at University of Houston
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Su Su
The DOE-supported project objectives are to establish a national wind energy center (NWEC) at the University of Houston and conduct research to address critical science and engineering issues for the development of future large MW-scale wind energy production systems, especially offshore wind turbines. The goals of the project are to: (1) establish a sound scientific/technical knowledge base of solutions to critical science and engineering issues for developing future MW-scale large wind energy production systems; (2) develop a state-of-the-art wind rotor blade research facility at the University of Houston; and (3) through multi-disciplinary research, introduce technology innovations in advanced wind-turbine materials, processing/manufacturing technology, design and simulation, and testing and reliability assessment methods related to future wind turbine systems for cost-effective production of offshore wind energy. To achieve the goals of the project, the following technical tasks were planned and executed during the period from April 15, 2010 to October 31, 2014 at the University of Houston: (1) basic research on large offshore wind turbine systems; (2) applied research on innovative wind turbine rotors for large offshore wind energy systems; (3) integration of offshore wind-turbine design, advanced materials and manufacturing technologies; (4) integrity and reliability of large offshore wind turbine blades and scaled model testing; (5) education and training of graduate and undergraduate students and post-doctoral researchers; and (6) development of a national offshore wind turbine blade research facility. The research program addresses both basic science and engineering of current and future large wind turbine systems, especially offshore wind turbines, for MW-scale power generation. The results of the research advance current understanding of many important scientific issues and provide technical information for future large wind turbines with advanced design, composite materials, integrated manufacturing, and structural reliability and integrity. The educational program has trained many graduate and undergraduate students and post-doctoral researchers in the critical science and engineering of wind energy production systems through graduate-level courses, research, and participation in various projects in the center's large multi-disciplinary research. These students and researchers are now employed by the wind industry, national labs and universities to support the US and international wind energy industry. The national offshore wind turbine blade research facility developed in the project has been used to support the technical and training tasks planned in the program, and it is a national asset available for use by domestic and international researchers in the wind energy arena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Dylan; Frank, Stephen; Slovensky, Michelle
Rich, well-organized building performance and energy consumption data enable a host of analytic capabilities for building owners and operators, from basic energy benchmarking to detailed fault detection and system optimization. Unfortunately, data integration for building control systems is challenging and costly in any setting. Large portfolios of buildings--campuses, cities, and corporate portfolios--experience these integration challenges most acutely. These large portfolios often have a wide array of control systems, including multiple vendors and nonstandard communication protocols. They typically have complex information technology (IT) networks and cybersecurity requirements and may integrate distributed energy resources into their infrastructure. Although the challenges are significant, the integration of control system data has the potential to provide proportionally greater value for these organizations through portfolio-scale analytics, comprehensive demand management, and asset performance visibility. As a large research campus, the National Renewable Energy Laboratory (NREL) experiences significant data integration challenges. To meet them, NREL has developed an architecture for effective data collection, integration, and analysis, providing a comprehensive view of data integration based on functional layers. The architecture is being evaluated on the NREL campus through deployment of three pilot implementations.
A large scale software system for simulation and design optimization of mechanical systems
NASA Technical Reports Server (NTRS)
Dopker, Bernhard; Haug, Edward J.
1989-01-01
The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.
NASA Astrophysics Data System (ADS)
Brett, Gareth; Barnett, Matthew
2014-12-01
Liquid Air Energy Storage (LAES) provides large-scale, long-duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries assembled in a novel process and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources, and all major components have a proven lifetime of 25+ years. The system can also integrate low-grade waste heat to increase power output. Founded in 2005, Highview Power Storage is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot plant scale (300 kW/2.5 MWh), hosted at SSE's (Scottish & Southern Energy) 80 MW Biomass Plant in Greater London and partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi-MW commercial reference plants in the UK and abroad.
Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems
NASA Astrophysics Data System (ADS)
Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
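A virtual component, in the sense used here, can be pictured as a wrapper that assembles several data-flow stages and carries its own built-in test cases for the assembled flow. A minimal sketch; the class name, stage functions, and test format are invented for illustration:

```python
# Sketch of a "virtual component": a group of data-flow stages bundled
# with built-in integration test cases for the assembled flow.

class VirtualComponent:
    def __init__(self, stages, test_cases):
        self.stages = stages              # ordered data-flow components
        self.test_cases = test_cases      # (input, expected_output) pairs

    def process(self, data):
        for stage in self.stages:         # push data through the flow
            data = stage(data)
        return data

    def built_in_test(self):
        """Run the assembled flow on each test case; collect failures."""
        return [(x, got, want) for x, want in self.test_cases
                if (got := self.process(x)) != want]

sensor_scale = lambda d: [v * 0.5 for v in d]     # toy stage 1
threshold = lambda d: [v for v in d if v > 1.0]   # toy stage 2

vc = VirtualComponent([sensor_scale, threshold],
                      [([2.0, 4.0], [2.0]), ([0.5], [])])
print(vc.built_in_test() or "all integration tests passed")
```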
Du, Hui; Chen, Xiaobo; Xi, Juntong; Yu, Chengyi; Zhao, Bao
2017-12-12
Large-scale surfaces are prevalent in advanced manufacturing industries, and 3D profilometry of these surfaces plays a pivotal role in quality control. This paper proposes a novel and flexible large-scale 3D scanning system assembled by combining a robot, a binocular structured-light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced. A mathematical model is established for the global data fusion. Subsequently, a robust method is introduced for the establishment of the end coordinate system. For hand-eye calibration, the calibration ball is observed by the scanner and the laser tracker simultaneously. From these data, the hand-eye relationship is solved, and an algorithm is then built to obtain the transformation matrix between the end coordinate system and the world coordinate system. A validation experiment is designed to verify the proposed algorithms. First, a hand-eye calibration experiment is carried out and the transformation matrix is computed. A car body rear is then measured 22 times in order to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully. To evaluate the precision of the proposed method, a metric tool is built and the results are presented.
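The global data fusion step rests on ordinary rigid-transform chaining. The following minimal sketch, with assumed calibration values rather than the paper's, shows how a point measured in the scanner frame is carried into the laser tracker's world frame through the hand-eye result:

```python
# Minimal sketch (not the authors' code) of the transform chain for global fusion.
import numpy as np

def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed calibration outputs (illustrative values only):
T_world_end = homogeneous(np.eye(3), np.array([1.0, 0.5, 0.2]))    # end frame in world, per scan pose
T_end_scanner = homogeneous(np.eye(3), np.array([0.0, 0.0, 0.1]))  # fixed hand-eye result

def scanner_point_to_world(p_scanner: np.ndarray) -> np.ndarray:
    p = np.append(p_scanner, 1.0)                 # homogeneous coordinates
    return (T_world_end @ T_end_scanner @ p)[:3]  # chain the transforms

print(scanner_point_to_world(np.array([0.1, 0.0, 0.0])))  # -> [1.1 0.5 0.3]
```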
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impact models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impact studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
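Step (e), aggregation to arbitrary spatial scales, can be pictured with a toy example. The sketch below (illustrative, not pSIMS code) averages a gridded output variable over a region-ID mask defined on the common geospatial grid:

```python
# Illustrative sketch of aggregating gridded output to administrative units.
import numpy as np

yield_grid = np.array([[2.0, 3.0, 4.0],
                       [1.0, 5.0, 6.0]])   # e.g., simulated crop yield per grid cell
region_ids = np.array([[1, 1, 2],
                       [1, 2, 2]])         # administrative unit of each cell

def aggregate(values: np.ndarray, regions: np.ndarray) -> dict:
    """Mean of the gridded variable over each region ID."""
    return {r: float(values[regions == r].mean()) for r in np.unique(regions)}

print(aggregate(yield_grid, region_ids))   # -> {1: 2.0, 2: 5.0}
```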
Brawer, Peter A; Martielli, Richard; Pye, Patrice L; Manwaring, Jamie; Tierney, Anna
2010-06-01
The primary care health setting is in crisis. Increasing demand for services, with dwindling numbers of providers, has resulted in decreased access and decreased satisfaction for both patients and providers. Moreover, the overwhelming majority of primary care visits are for behavioral and mental health concerns rather than issues of a purely medical etiology. Integrated-collaborative models of health care delivery offer possible solutions to this crisis. The purpose of this article is to review the data available after two years of the St. Louis Initiative for Integrated Care Excellence, an example of integrated-collaborative care on a large-scale model within a regional Veterans Affairs Health Care System. There is clear evidence that the SLI(2)CE initiative rather dramatically increased access to health care and modified primary care practitioners' willingness to address mental health issues within the primary care setting. In addition, the data suggest strong fidelity to a model of integrated-collaborative care which has been successful in the past. Integrated-collaborative care offers unique advantages over the traditional view and practice of medical care. Through careful implementation and practice, success is possible on a large-scale model. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Pioneering University/Industry Venture Explores VLSI Frontiers.
ERIC Educational Resources Information Center
Davis, Dwight B.
1983-01-01
Discusses industry-sponsored programs in semiconductor research, focusing on Stanford University's Center for Integrated Systems (CIS). CIS, while pursuing research in semiconductor very-large-scale integration, is merging the fields of computer science, information science, and physical science. Issues related to these university/industry…
Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology
1993-11-01
interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital...1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error detection and...formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING. N-version programming. N-VERSION PROGRAMMING. The independent coding of a
Architectural Visualization of C/C++ Source Code for Program Comprehension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panas, T; Epperly, T W; Quinlan, D
2006-09-01
Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
Spin diffusion from an inhomogeneous quench in an integrable system.
Ljubotina, Marko; Žnidarič, Marko; Prosen, Tomaž
2017-07-13
Generalized hydrodynamics predicts universal ballistic transport in integrable lattice systems when prepared in generic inhomogeneous initial states. However, the ballistic contribution to transport can vanish in systems with additional discrete symmetries. Here we perform large scale numerical simulations of spin dynamics in the anisotropic Heisenberg XXZ spin 1/2 chain starting from an inhomogeneous mixed initial state which is symmetric with respect to a combination of spin reversal and spatial reflection. In the isotropic and easy-axis regimes we find non-ballistic spin transport which we analyse in detail in terms of scaling exponents of the transported magnetization and scaling profiles of the spin density. While in the easy-axis regime we find accurate evidence of normal diffusion, the spin transport in the isotropic case is clearly super-diffusive, with the scaling exponent very close to 2/3, but with universal scaling dynamics which obeys the diffusion equation in nonlinearly scaled time.
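The reported exponent can be understood operationally: one fits the transported magnetization against time on log-log axes. A small sketch with synthetic data generated at exponent 2/3 (purely illustrative, not the paper's data):

```python
# Reading a transport scaling exponent off simulation output via a log-log fit.
import numpy as np

t = np.logspace(1, 4, 30)
delta_s = 0.8 * t ** (2.0 / 3.0)   # synthetic stand-in for transported magnetization

alpha, log_prefactor = np.polyfit(np.log(t), np.log(delta_s), 1)
print(f"scaling exponent ~ {alpha:.3f}")  # ~0.667: super-diffusive (normal diffusion gives 0.5)
```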
A fiber-optic ice detection system for large-scale wind turbine blades
NASA Astrophysics Data System (ADS)
Kim, Dae-gil; Sampath, Umesh; Kim, Hyunjin; Song, Minho
2017-09-01
Icing causes substantial problems for the integrity of large-scale wind turbines. In this work, a fiber-optic sensor system for detection of icing with an arrayed waveguide grating is presented. The sensor system detects Fresnel reflections from the ends of the fibers. The transition in Fresnel reflection due to icing gives characteristic intensity variations, which distinguish ice, water, and air on the wind turbine blades. The experimental results show that, with the proposed sensor system, the formation of icing conditions and the thickness of ice were identified successfully in real time.
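The underlying physics is compact enough to sketch. With textbook refractive indices (the decision thresholds an actual system would use are a calibration matter), the normal-incidence Fresnel reflectance at the fiber end already separates the three media:

```python
# Back-of-envelope sketch of the sensing principle: the reflectance at the
# fiber end depends on the medium in contact with it.

N_FIBER = 1.45  # silica fiber core (approximate)

def fresnel_reflectance(n_medium: float) -> float:
    """Normal-incidence Fresnel reflectance at the fiber/medium interface."""
    return ((N_FIBER - n_medium) / (N_FIBER + n_medium)) ** 2

for medium, n in [("air", 1.00), ("ice", 1.31), ("water", 1.33)]:
    print(f"{medium:6s} n={n:.2f}  R={fresnel_reflectance(n):.4f}")
# Air reflects most strongly (~3.4%), water least (~0.19%); a drop in reflected
# intensity flags coverage, and the small ice/water contrast plus the freezing
# transition separates ice from water.
```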
Co-governing decentralised water systems: an analytical framework.
Yu, C; Brown, R; Morison, P
2012-01-01
Current discourses in urban water management emphasise a diversity of water sources and scales of infrastructure for resilience and adaptability. During the last two decades, in particular, various small-scale systems have emerged and developed, so that the debate has largely moved from centralised versus decentralised water systems toward governing integrated and networked systems of provision and consumption in which small-scale technologies are embedded in large-scale centralised infrastructures. However, while centralised systems have established boundaries of ownership and management, decentralised water systems (such as stormwater harvesting technologies at the street and allotment/house scales) do not; the viability for adoption and/or continued use of decentralised water systems is therefore challenged. This paper brings together insights from the literature on public sector governance, co-production and social practices models to develop an analytical framework for co-governing such systems. The framework provides urban water practitioners with guidance when designing co-governance arrangements for decentralised water systems so that these systems continue to exist, and become widely adopted, within the established urban water regime.
NASA Technical Reports Server (NTRS)
Sanders, Bobby W.; Weir, Lois J.
2008-01-01
A new hypersonic inlet for a turbine-based combined-cycle (TBCC) engine has been designed. This split-flow inlet is designed to provide flow to an over-under propulsion system with turbofan and dual-mode scramjet engines for flight from takeoff to Mach 7. It utilizes a variable-geometry ramp, high-speed cowl lip rotation, and a rotating low-speed cowl that serves as a splitter to divide the flow between the low-speed turbofan and the high-speed scramjet and to isolate the turbofan at high Mach numbers. The low-speed inlet was designed for Mach 4, the maximum mode transition Mach number. Integration of the Mach 4 inlet into the Mach 7 inlet imposed significant constraints on the low-speed inlet design, including a large amount of internal compression. The inlet design was used to develop mechanical designs for two inlet mode transition test models: small-scale (IMX) and large-scale (LIMX) research models. The large-scale model is designed to facilitate multi-phase testing including inlet mode transition and inlet performance assessment, controls development, and integrated systems testing with turbofan and scramjet engines.
Forest Ecosystem Analysis Using a GIS
S.G. McNulty; W.T. Swank
1996-01-01
Forest ecosystem studies have expanded spatially in recent years to address large scale environmental issues. We are using a geographic information system (GIS) to understand and integrate forest processes at landscape to regional spatial scales. This paper presents three diverse research studies using a GIS. First, we used a GIS to develop a landscape scale model to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shahidehpour, Mohammad
Integrating 20% or more wind energy into the system and transmitting large sums of wind energy over long distances will require a decision-making capability that can handle very large-scale power systems with tens of thousands of buses and lines. There is a need to explore innovative analytical and implementation solutions for continuing reliable operations with the most economical integration of additional wind energy in power systems. A number of wind integration solution paths involve the adoption of new operating policies, dynamic scheduling of wind power across interties, pooling integration services, and adopting new transmission scheduling practices. Such practices can be examined by the decision tool developed by this project. This project developed a very efficient decision tool called Wind INtegration Simulator (WINS) and applied WINS to facilitate wind energy integration studies. WINS focused on augmenting the existing power utility capabilities to support collaborative planning, analysis, and wind integration project implementations. WINS also has the capability of simulating energy storage facilities, so that feasibility studies of integrated wind energy system applications can be performed for systems with high wind energy penetration. The development of WINS represents a major expansion of a very efficient decision tool called POwer Market Simulator (POMS), which was developed by IIT and has been used extensively for power system studies for decades. Specifically, WINS provides the following advantages: (1) an integrated framework is included in WINS for the comprehensive modeling of DC transmission configurations, including mono-pole, bi-pole, tri-pole, back-to-back, and multi-terminal connections, as well as AC/DC converter models including current source converters (CSC) and voltage source converters (VSC); (2) an existing shortcoming of traditional decision tools for wind integration is the limited availability of user interfaces, i.e., decision results are often text-based demonstrations; WINS includes a powerful visualization tool and user interface capability for transmission analyses, planning, and assessment, which will be of great interest to power market participants, power system planners and operators, and state and federal regulatory entities; and (3) WINS can handle extended transmission models for wind integration studies. WINS models include limitations on transmission flow as well as bus voltage for analyzing power system states. The existing decision tools often consider transmission flow constraints (DC power flow) alone, which could result in the over-utilization of existing resources when analyzing wind integration. WINS can be used to assist power market participants, including transmission companies, independent system operators, power system operators in vertically integrated utilities, wind energy developers, and regulatory agencies, to analyze the economics, security, and reliability of various options for wind integration, including transmission upgrades and the planning of new transmission facilities. WINS can also be used by industry for the offline training of reliability and operation personnel when analyzing wind integration uncertainties, identifying critical spots in power system operation, analyzing power system vulnerabilities, and providing credible decisions for examining operation and planning options for wind integration.
Research in this project on wind integration included (1) Development of WINS; (2) Transmission Congestion Analysis in the Eastern Interconnection; (3) Analysis of 2030 Large-Scale Wind Energy Integration in the Eastern Interconnection; and (4) Large-Scale Analysis of 2018 Wind Energy Integration in the Eastern U.S. Interconnection. The research resulted in 33 papers, 9 presentations, 9 PhD degrees, 4 MS degrees, and 7 awards. The education activities in this project on wind energy included (1) Wind Energy Training Facility Development and (2) Wind Energy Course Development.
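The abstract's point about flow-only tools can be made concrete with a three-bus toy system; the sketch below solves the standard DC power-flow equations (it is not WINS code, and susceptances and injections are made up):

```python
# Classic DC power flow on a 3-bus toy system: P = B * theta, bus 1 as slack.
import numpy as np

b12, b13, b23 = 10.0, 8.0, 5.0       # line susceptances (p.u.) for lines 1-2, 1-3, 2-3

# Reduced B matrix (rows/cols for buses 2 and 3; slack angle fixed at 0)
B = np.array([[b12 + b23, -b23],
              [-b23, b13 + b23]])
P = np.array([-1.0, 0.4])            # net injections at buses 2 and 3 (p.u.)

theta = np.linalg.solve(B, P)        # bus angles relative to the slack
flow_12 = b12 * (0.0 - theta[0])     # line flow from bus 1 to bus 2
print(f"theta = {theta}, flow 1->2 = {flow_12:.3f} p.u.")
```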
Efficient data management in a large-scale epidemiology research project.
Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang
2012-09-01
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over the period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
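One building block mentioned above, de-identification of identifying data, is commonly realized with keyed pseudonyms. A minimal sketch follows (illustrative only; the CDM's actual mechanism and key management are not described in the abstract):

```python
# Keyed pseudonymization: identifying fields are replaced before storage, so
# research data and identities live in separate layers.
import hmac, hashlib

SECRET_KEY = b"kept-in-a-separate-trust-center"  # assumption: key managed externally

def pseudonymize(participant_id: str) -> str:
    """Deterministic keyed pseudonym; the same person always maps to the same ID."""
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"participant": "DOE-1965-04-12", "hba1c": 5.9}
stored = {"participant": pseudonymize(record["participant"]), "hba1c": record["hba1c"]}
print(stored)  # identifying field replaced before it reaches the research database
```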
Development of analog watch with minute repeater
NASA Astrophysics Data System (ADS)
Okigami, Tomio; Aoyama, Shigeru; Osa, Takashi; Igarashi, Kiyotaka; Ikegami, Tomomi
A complementary metal-oxide-semiconductor (CMOS) large-scale integrated circuit was developed for an electronic minute repeater. It is equipped with a synthetic struck-sound circuit to generate the natural struck sound required by a minute repeater. This circuit consists of an envelope-curve drawing circuit, a frequency mixer, a polyphonic mixer, and a booster circuit built using analog circuit technology. The LSI is a single-chip microcomputer with motor drivers and input ports in addition to the synthetic struck-sound circuit, making it possible to build an electronic minute-repeater system at very low cost in comparison with the conventional type.
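In software terms, such a synthetic struck sound amounts to mixing a few decaying partials under an envelope. A rough sketch with invented frequencies and decay constants, not the chip's actual parameters:

```python
# Toy synthesis of a struck sound: exponentially decaying envelope shaping
# a mix of a few sine partials.
import numpy as np

def struck_sound(duration=1.0, rate=8000):
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    envelope = np.exp(-6.0 * t)                               # fast decay of a struck gong
    partials = [(1.0, 880.0), (0.5, 2217.0), (0.25, 3520.0)]  # (amplitude, Hz), illustrative
    signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in partials)
    return envelope * signal / sum(a for a, _ in partials)    # normalize the mix

samples = struck_sound()
print(samples.shape, float(abs(samples).max()))  # 1 s of audio at 8 kHz
```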
XLinkDB 2.0: integrated, large-scale structural analysis of protein crosslinking data
Schweppe, Devin K.; Zheng, Chunxiang; Chavez, Juan D.; Navare, Arti T.; Wu, Xia; Eng, Jimmy K.; Bruce, James E.
2016-01-01
Motivation: Large-scale chemical cross-linking with mass spectrometry (XL-MS) analyses are quickly becoming a powerful means for high-throughput determination of protein structural information and protein–protein interactions. Recent studies have garnered thousands of cross-linked interactions, yet the field lacks an effective tool to compile experimental data or access the network and structural knowledge for these large-scale analyses. We present XLinkDB 2.0, which integrates tools for network analysis, Protein Data Bank queries, modeling of predicted protein structures and modeling of docked protein structures. The novel, integrated approach of XLinkDB 2.0 enables the holistic analysis of XL-MS protein interaction data without limitation to the cross-linker or analytical system used for the analysis. Availability and Implementation: XLinkDB 2.0 can be found here, including documentation and help: http://xlinkdb.gs.washington.edu/. Contact: jimbruce@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153666
VLSI (Very Large Scale Integrated) Design of a 16 Bit Very Fast Pipelined Carry Look Ahead Adder.
1983-09-01
the ability for systems engineers to custom design digital integrated circuits. Until recently, the design of integrated circuits has been...traditionally carried out by a select group of logic designers working in semiconductor laboratories. Systems engineers had to "make do" or "fit in" the...products of these labs to realize their designs. The systems engineers had little participation in the actual design of the chip. The MED and CONWAY design
NASA Astrophysics Data System (ADS)
Ali, Hatamirad; Hasan, Mehrjerdi
The automotive industry and its car production processes are among the most complex and large-scale production operations. Today, information technology (IT) and ERP systems underpin a large portion of production processes; without an integrated system such as ERP, production and supply chain processes become entangled. ERP systems, the latest generation of MRP systems, streamline the production and sales processes of these industries and are a major factor in their development. Today many large-scale companies are developing and deploying ERP systems, which facilitate many organizational processes and increase efficiency. Security is a very important part of an organization's ERP strategy; because ERP systems are integrated and extensive, their security matters more than that of local and legacy systems, and disregarding this point can determine the success or failure of such systems. IRANKHODRO is the biggest automotive factory in the Middle East, with an annual production of over 600,000 cars. This paper presents the ERP security deployment experience at the IRANKHODRO Company, which recently, by launching an ERP system, took a major step toward further development.
Multidimensional quantum entanglement with large-scale integrated optics.
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G
2018-04-20
The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
A 14 × 14 μm² footprint polarization-encoded quantum controlled-NOT gate based on hybrid waveguide
Wang, S. M.; Cheng, Q. Q.; Gong, Y. X.; Xu, P.; Sun, C.; Li, L.; Li, T.; Zhu, S. N.
2016-01-01
Photonic quantum information processing systems have been widely used in communication, metrology and lithography. The recent emphasis on miniaturized photonic platforms is thus motivated by the urgent need for realizing large-scale information processing and computing. Although integrated quantum logic gates and quantum algorithms based on path encoding have been successfully demonstrated, the technology for handling the other commonly used polarization-encoded qubits has yet to be fully developed. Here, we show the implementation of a polarization-dependent beam-splitter in the hybrid waveguide system. With precise design, the polarization-encoded controlled-NOT gate can be implemented using only a single such polarization-dependent beam-splitter, with a significant reduction of the overall device footprint to 14 × 14 μm². The experimental demonstration of the highly integrated controlled-NOT gate sets the stage for developing large-scale quantum information processing systems. Our hybrid design also establishes new capabilities in controlling the polarization modes in integrated photonic circuits. PMID:27142992
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
Avionic Data Bus Integration Technology
1991-12-01
address the hardware-software interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion ...the SCP. In 1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error... MULTIVERSION PROGRAMMING. N-version programming. N-VERSION PROGRAMMING. The independent coding of a number, N, of redundant computer programs that
The Emerging Role of the Data Base Manager. Report No. R-1253-PR.
ERIC Educational Resources Information Center
Sawtelle, Thomas K.
The Air Force Logistics Command (AFLC) is revising and enhancing its data-processing capabilities with the development of a large-scale, multi-site, on-line, integrated data base information system known as the Advanced Logistics System (ALS). A data integrity program is to be built around a Data Base Manager (DBM), an individual or a group of…
Topics in programmable automation. [for materials handling, inspection, and assembly
NASA Technical Reports Server (NTRS)
Rosen, C. A.
1975-01-01
Topics explored in the development of integrated programmable automation systems include: numerically controlled and computer controlled machining; machine intelligence and the emulation of human-like capabilities; large scale semiconductor integration technology applications; and sensor technology for asynchronous local computation without burdening the executive minicomputer which controls the whole system. The role and development of training aids, and the potential application of these aids to augmented teleoperator systems are discussed.
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
New Markets for Solar Photovoltaic Power Systems
NASA Astrophysics Data System (ADS)
Thomas, Chacko; Jennings, Philip; Singh, Dilawar
2007-10-01
Over the past five years, solar photovoltaic (PV) power supply systems have matured and are now being deployed on a much larger scale. The traditional small-scale remote-area power supply systems are still important, and village electrification is also a large and growing market, but large-scale grid-connected systems and building-integrated systems are now being deployed in many countries. This growth has been aided by imaginative government policies in several countries, and the overall result is a growth rate of over 40% per annum in the sales of PV systems. Optimistic forecasts are being made about the future of PV power as a major source of sustainable energy. Plans are now being formulated by the IEA for very large-scale PV installations of more than 100 MW peak output. The Australian Government has announced a subsidy for a large solar photovoltaic power station of 154 MW in Victoria, based on the concentrator technology developed in Australia. In Western Australia, a proposal has been submitted to the State Government for a 2 MW photovoltaic power system to provide fringe-of-grid support at Perenjori. This paper outlines the technologies, designs, management and policies that underpin these exciting developments in solar PV power.
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feed-forward operations.
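The classical-hardness claim for Boson Sampling traces back to matrix permanents, which are believed to require exponential time classically. Ryser's formula below is a standard algorithm, included purely for illustration; it makes the exponential cost in the photon number visible:

```python
# Ryser's formula for the matrix permanent: O(2^n * n^2), versus O(n!) for
# naive expansion. For ~30 photons the 2^n term is already punishing.
from itertools import combinations

def permanent(M):
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

print(permanent([[1, 0], [0, 1]]))  # 1.0 for the 2x2 identity
print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10.0
```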
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
Imaging detectors and electronics—a view of the future
NASA Astrophysics Data System (ADS)
Spieler, Helmuth
2004-09-01
Imaging sensors and readout electronics have made tremendous strides in the past two decades. The application of modern semiconductor fabrication techniques and the introduction of customized monolithic integrated circuits have made large-scale imaging systems routine in high-energy physics. This technology is now finding its way into other areas, such as space missions, synchrotron light sources, and medical imaging. I review current developments and discuss the promise and limits of new technologies. Several detector systems are described as examples of future trends. The discussion emphasizes semiconductor detector systems, but I also include recent developments for large-scale superconducting detector arrays.
RAID-2: Design and implementation of a large scale disk array controller
NASA Technical Reports Server (NTRS)
Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.
1992-01-01
We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
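The XOR calculation engine's role is easy to show in miniature: parity across a stripe's data blocks lets any single lost block be rebuilt from the survivors. A toy sketch (not the RAID-2 firmware; block contents are made up):

```python
# XOR parity over a stripe: reconstruct any one missing block.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

stripe = [b"DATA", b"MORE", b"DISK"]   # data blocks across three drives
parity = xor_blocks(stripe)            # stored on a fourth drive

# Drive holding the second block fails: rebuild it from the rest plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
print(rebuilt)  # b'MORE'
```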
LARGE SCALE DISASTER ANALYSIS AND MANAGEMENT: SYSTEM LEVEL STUDY ON AN INTEGRATED MODEL
The increasing intensity and scale of human activity across the globe leading to severe depletion and deterioration of the Earth's natural resources has meant that sustainability has emerged as a new paradigm of analysis and management. Sustainability, conceptually defined by the...
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
ESRI applications of GIS technology: Mineral resource development
NASA Technical Reports Server (NTRS)
Derrenbacher, W.
1981-01-01
The application of geographic information systems technology to large scale regional assessment related to mineral resource development, identifying candidate sites for related industry, and evaluating sites for waste disposal is discussed. Efforts to develop data bases were conducted at scales ranging from 1:3,000,000 to 1:25,000. In several instances, broad screening was conducted for large areas at a very general scale with more detailed studies subsequently undertaken in promising areas windowed out of the generalized data base. Increasingly, the systems which are developed are structured as the spatial framework for the long-term collection, storage, referencing, and retrieval of vast amounts of data about large regions. Typically, the reconnaissance data base for a large region is structured at 1:250,000 scale, data bases for smaller areas being structured at 1:25,000, 1:50,000 or 1:63,360. An integrated data base for the coterminous US was implemented at a scale of 1:3,000,000 for two separate efforts.
diCenzo, George C; Finan, Turlough M
2018-01-01
The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.
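The final integration step can be pictured as evaluating gene-protein-reaction (GPR) rules against a deletion mutant's surviving gene set. The protocol supplies a Matlab script; the Python sketch below, with made-up genes and reactions, only illustrates the logic:

```python
# Toy check of Biolog-style phenotype calls against a draft reconstruction's
# GPR rules for a large-deletion mutant. All identifiers are hypothetical.

def reaction_active(gpr: str, present: set) -> bool:
    """Evaluate a boolean GPR rule ('and'/'or' of gene IDs) against surviving genes."""
    expr = " ".join(tok if tok in ("and", "or", "(", ")") else str(tok in present)
                    for tok in gpr.replace("(", " ( ").replace(")", " ) ").split())
    return eval(expr)  # acceptable here: rules are trusted, well-formed input

reconstruction = {"SUCCt": "gA or gB", "SUCCdh": "gC and gD"}
deleted_region = {"gB", "gC"}
mutant = {"gA", "gB", "gC", "gD"} - deleted_region

for rxn, gpr in reconstruction.items():
    print(rxn, "predicted active" if reaction_active(gpr, mutant) else "predicted lost")
# Disagreements with the observed Biolog growth phenotype point at GPR rules
# to refine in the reconstruction.
```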
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
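A very reduced caricature of the column-scale ingredients mentioned above (first-order decomposition plus vertical mixing, with illustrative parameters; this is a generic toy, not the ACME land model or any of the cited schemes):

```python
# Depth-resolved soil carbon toy model: litter input at the surface,
# first-order decomposition, explicit vertical diffusion.
import numpy as np

nz, dz, dt = 10, 0.1, 0.01            # 1 m profile in 10 layers; dt in years
C = np.linspace(2.0, 0.2, nz)         # initial carbon stock per layer (kg C m^-3)
k = 0.05                              # decomposition rate (1/yr)
D = 1e-3                              # effective vertical mixing (m^2/yr)
inputs = np.zeros(nz)
inputs[0] = 0.3                       # litter input to the surface layer

for _ in range(1000):                 # integrate 10 years
    lap = np.zeros(nz)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2   # interior diffusion term
    C += dt * (inputs - k * C + D * lap)

print(C.round(3))                     # stocks decline with depth, as observed profiles do
```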
Interactive graphical computer-aided design system
NASA Technical Reports Server (NTRS)
Edge, T. M.
1975-01-01
System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.
NASA Astrophysics Data System (ADS)
Vanclooster, Marnik
2010-05-01
The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e., the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow research needs in the interdisciplinary domain of modelling and monitoring to be identified, and the integration of unsaturated zone science in solving soil and water management issues to be improved. A focus will be given to examples of large-scale soil and water management problems in Europe.
System design and integration of the large-scale advanced prop-fan
NASA Technical Reports Server (NTRS)
Huth, B. P.
1986-01-01
In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low-speed and high-speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized, together with a summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results.
Wafer-scale pixelated detector system
Fahim, Farah; Deptuch, Grzegorz; Zimmerman, Tom
2017-10-17
A large-area, gapless detection system comprises at least one sensor; an interposer operably connected to the at least one sensor; and at least one application-specific integrated circuit operably connected to the sensor via the interposer, wherein the detection system provides high dynamic range while maintaining small pixel area and low power dissipation. The invention thereby provides methods and systems for wafer-scale, gapless and seamless detector systems with small pixels, which have both high dynamic range and low power dissipation.
Teaching Scales in the Climate System: An example of interdisciplinary teaching and learning
NASA Astrophysics Data System (ADS)
Baehr, Johanna; Behrens, Jörn; Brüggemann, Michael; Frisius, Thomas; Glessmer, Mirjam S.; Hartmann, Jens; Hense, Inga; Kaleschke, Lars; Kutzbach, Lars; Rödder, Simone; Scheffran, Jürgen
2016-04-01
Climate change is commonly regarded as one of the 21st century's grand challenges, one that needs to be addressed by conducting integrated research combining the natural and social sciences. To meet this need, how best to train future climate researchers should be reconsidered. Here, we present our experience from a team-taught, semester-long course with students of the international master program "Integrated Climate System Sciences" (ICSS) at the University of Hamburg, Germany. Ten lecturers with different backgrounds in the physical, mathematical, biogeochemical and social sciences, accompanied by a researcher trained in didactics, prepared and regularly participated in a course consisting of weekly classes. The foundation of the course was the use of the concept of 'scales' - climate varying on different temporal and spatial scales - and the development of a joint definition of 'scales in the climate system' that is applicable in both the natural and the social sciences. By applying this interdisciplinary definition of 'scales' to phenomena from all components of the climate system and the socio-economic dimensions, we aimed for an integrated description of the climate system. Following the concept of research-driven teaching and learning and using a variety of teaching techniques, the students designed their own scale diagrams to illustrate climate-related phenomena in different disciplines. The highlight of the course was the presentation of an individually developed scale diagram by every student with all lecturers present. Based on the course as already taught, we are currently redesigning the course concept to be teachable by a similarly large group of lecturers but with alternating presence in class. With further refinement and ongoing documentation of the teaching material, we will continue to use the concept of 'scales' as a vehicle for teaching an integrated view of the climate system.
NASA Astrophysics Data System (ADS)
Darema, F.
2016-12-01
InfoSymbiotics/DDDAS embodies the power of Dynamic Data Driven Applications Systems (DDDAS), a concept whereby an executing application model is dynamically integrated, in a feedback loop, with the real-time data-acquisition and control components, as well as other data sources of the application system. Advanced capabilities can be created through such new computational approaches in modeling and simulations, and in instrumentation methods. These include enhancing the accuracy of the application model; speeding up the computation to allow faster and more comprehensive models of a system, and creating decision support systems with the accuracy of full-scale simulations. In addition, the notion of controlling instrumentation processes by the executing application results in more efficient management of application data and addresses the challenge of how to architect and dynamically manage large sets of heterogeneous sensors and controllers, an advance over the static and ad hoc ways of today - with DDDAS, these sets of resources can be managed adaptively and in optimized ways. Large-Scale-Dynamic-Data encompasses the next wave of Big Data, namely dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems, through multitudes of heterogeneous sensors and controllers instrumenting these systems, where opportunities and challenges at these "large scales" relate not only to data size but to the heterogeneity in data, data collection modalities, fidelities, and timescales, ranging from real-time data to archival data. In tandem with this important dimension of dynamic data, there is an extended view of Big Computing, which includes the collective computing by networked assemblies of multitudes of sensors and controllers, ranging from the high end to the real time, seamlessly integrated and unified, and comprising Large-Scale-Big-Computing. InfoSymbiotics/DDDAS engenders transformative impact in many application domains, ranging from the nano-scale to the terra-scale and to the extra-terra-scale. The talk will address opportunities for new capabilities together with corresponding research challenges, with illustrative examples from several application areas including environmental sciences, geosciences, and space sciences.
Stucky, Brian J; Guralnick, Rob; Deck, John; Denny, Ellen G; Bolmgren, Kjell; Walls, Ramona
2018-01-01
Plant phenology - the timing of plant life-cycle events, such as flowering or leafing out - plays a fundamental role in the functioning of terrestrial ecosystems, including human agricultural systems. Because plant phenology is often linked with climatic variables, there is widespread interest in developing a deeper understanding of global plant phenology patterns and trends. Although phenology data from around the world are currently available, truly global analyses of plant phenology have so far been difficult because the organizations producing large-scale phenology data are using non-standardized terminologies and metrics during data collection and data processing. To address this problem, we have developed the Plant Phenology Ontology (PPO). The PPO provides the standardized vocabulary and semantic framework that is needed for large-scale integration of heterogeneous plant phenology data. Here, we describe the PPO, and we also report preliminary results of using the PPO and a new data processing pipeline to build a large dataset of phenology information from North America and Europe.
2015-07-01
...cloud covered) periods. The demonstration features a large (relative to the overall system power requirements) photovoltaic solar array, whose inverter...microgrid with less expensive power storage instead of large scale energy storage and that the renewable energy with small-scale power storage can
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single-neuron spike times via polynomial interpolation, as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in O(N) operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
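Element (i) can be sketched compactly: freezing the conductance over a step turns the membrane equation into an exactly solvable linear ODE, giving the exponential-Euler update below (an illustration of the integrating-factor idea, not the authors' code; parameters are made up):

```python
# Exponential-Euler update for a stiff leaky I&F neuron:
# dV/dt = -g(t) * (V - V_s(t)) has the exact solution
# V(t+dt) = V_s + (V - V_s) * exp(-g*dt) if g and V_s are frozen over dt,
# which stays stable in high-conductance states even with large dt.
import numpy as np

def exp_euler_step(V, g, V_s, dt):
    """One exponential-Euler update of the membrane potential."""
    return V_s + (V - V_s) * np.exp(-g * dt)

V, threshold, V_reset = -70.0, -55.0, -75.0
for step in range(1000):
    g, V_s = 2.0, -50.0              # stand-ins for total conductance / effective reversal
    V = exp_euler_step(V, g, V_s, dt=0.5)
    if V >= threshold:               # spike: reset (spike-time interpolation omitted)
        V = V_reset
```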
Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.
Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert
2017-10-01
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6×10⁷ synaptic events per second per 16k-neuron node in the hierarchy.
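The nested-bus idea can be caricatured in a few lines: an event climbs the hierarchy only as far as the lowest level that source and destination share, so local traffic never touches the global bus. The addressing below is a made-up fixed-radix scheme, not the actual HiAER format:

```python
# Toy hierarchical address-event routing over a tree of nested buses.
FANOUT = 4  # children per routing node at every level (illustrative)

def route(src: int, dst: int):
    """Return the list of hierarchy levels an event must climb to reach dst."""
    hops, level = [], 0
    while src != dst:        # climb until source and destination share a node
        src //= FANOUT
        dst //= FANOUT
        level += 1
        hops.append(level)
    return hops

print(route(5, 6))    # neighbors under one node: [1], local bus only
print(route(0, 63))   # opposite ends of a 64-neuron tree: [1, 2, 3], up to the root
```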
MacLeod, Miles; Nersessian, Nancy J
2015-02-01
In this paper we draw upon rich ethnographic data of two systems biology labs to explore the roles of explanation and understanding in large-scale systems modeling. We illustrate practices that depart from the goal of dynamic mechanistic explanation for the sake of more limited modeling goals. These processes use abstract mathematical formulations of bio-molecular interactions and data fitting techniques which we call top-down abstraction to trade away accurate mechanistic accounts of large-scale systems for specific information about aspects of those systems. We characterize these practices as pragmatic responses to the constraints many modelers of large-scale systems face, which in turn generate more limited pragmatic non-mechanistic forms of understanding of systems. These forms aim at knowledge of how to predict system responses in order to manipulate and control some aspects of them. We propose that this analysis of understanding provides a way to interpret what many systems biologists are aiming for in practice when they talk about the objective of a "systems-level understanding." Copyright © 2014 Elsevier Ltd. All rights reserved.
Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian
2017-12-04
High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe₂/SnS₂ vertical bilayer p-n junctions on SiO₂/Si substrates, with the lateral sizes reaching up to millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10⁻¹⁴ A and a highest on-off ratio of up to 10⁷. Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all the directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.
Chromatin Landscapes of Retroviral and Transposon Integration Profiles
Badhai, Jitendra; Rust, Alistair G.; Rad, Roland; Hilkens, John; Berns, Anton; van Lohuizen, Maarten; Wessels, Lodewyk F. A.; de Ridder, Jeroen
2014-01-01
The ability of retroviruses and transposons to insert their genetic material into host DNA makes them widely used tools in molecular biology, cancer research and gene therapy. However, these systems have biases that may strongly affect research outcomes. To address this issue, we generated very large datasets of unselected integrations in the mouse genome for the Sleeping Beauty (SB) and piggyBac (PB) transposons, and the Mouse Mammary Tumor Virus (MMTV). We analyzed (epi)genomic features to generate bias maps at both local and genome-wide scales. MMTV showed a remarkably uniform distribution of integrations across the genome. More distinct preferences were observed for the two transposons, with PB showing remarkable resemblance to bias profiles of the Murine Leukemia Virus. Furthermore, we present a model where target site selection is directed at multiple scales. At a large scale, target site selection is similar across systems, and defined by domain-oriented features, namely expression of proximal genes, proximity to CpG islands and to genic features, chromatin compaction and replication timing. Notable differences between the systems are mainly observed at smaller scales, and are directed by a diverse range of features. To study the effect of these biases on integration sites occupied under selective pressure, we turned to insertional mutagenesis (IM) screens. In IM screens, putative cancer genes are identified by finding frequently targeted genomic regions, or Common Integration Sites (CISs). Within three recently completed IM screens, we identified 7%–33% putative false positive CISs, which are likely not the result of the oncogenic selection process. Moreover, results indicate that PB, compared to SB, is more suited to tag oncogenes. PMID:24721906
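A minimal version of the CIS idea in this abstract can be sketched as a windowed Poisson test against a uniform, bias-free background; the published screens use more sophisticated kernel-density statistics, and the function name, window size, and threshold here are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def common_integration_sites(positions, genome_len, window=100_000, alpha=1e-3):
    """Flag windows with significantly more insertions than a uniform
    (unbiased) background, a toy stand-in for CIS detection."""
    n_windows = (genome_len + window - 1) // window
    counts = np.bincount(np.asarray(positions) // window, minlength=n_windows)
    lam = len(positions) / n_windows          # expected count per window
    pvals = poisson.sf(counts - 1, lam)       # P(X >= observed count)
    return np.flatnonzero(pvals < alpha / n_windows)   # Bonferroni correction

# Example: a cluster of insertions near position 5e6 stands out
hits = common_integration_sites([5_000_100, 5_010_000, 5_020_000, 5_030_000,
                                 1_000_000, 90_000_000], genome_len=100_000_000)
print(hits)
```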
Hynes, Denise M.; Perrin, Ruth A.; Rappaport, Steven; Stevens, Joanne M.; Demakis, John G.
2004-01-01
Information systems are increasingly important for measuring and improving health care quality. A number of integrated health care delivery systems use advanced information systems and integrated decision support to carry out quality assurance activities, but none as large as the Veterans Health Administration (VHA). The VHA's Quality Enhancement Research Initiative (QUERI) is a large-scale, multidisciplinary quality improvement initiative designed to ensure excellence in all areas where VHA provides health care services, including inpatient, outpatient, and long-term care settings. In this paper, we describe the role of information systems in the VHA QUERI process, highlight the major information systems critical to this quality improvement process, and discuss issues associated with the use of these systems. PMID:15187063
Large Scale Software Building with CMake in ATLAS
NASA Astrophysics Data System (ADS)
Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration
2017-10-01
The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Photonic content-addressable memory system that uses a parallel-readout optical disk
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.
1995-11-01
We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications
NASA Astrophysics Data System (ADS)
Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.
2003-12-01
The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real-time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well offshore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added or removed from the system; provides for real-time processing and monitoring of data streams--detecting events, and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools which must be developed do not exist, although limited prototype systems are available. This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI - Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network), and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.
NASA Astrophysics Data System (ADS)
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.
The Telecommunications and Data Acquisition Report
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1989-01-01
Deep Space Network advanced systems, very large scale integration architecture for decoders, radar interface and control units, microwave time delays, microwave antenna holography, and a radio frequency interference survey are among the topics discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate a minimum DGPV hosting capacity of approximately 170 GW for the contiguous United States using traditional inverters, without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very high deployment levels envisioned by SunShot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de
2014-06-14
Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations, which allow us to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.
Large-Scale medical image analytics: Recent methodologies, applications and Future directions.
Zhang, Shaoting; Metaxas, Dimitris
2016-10-01
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be significantly increased, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
Mems: Platform for Large-Scale Integrated Vacuum Electronic Circuits
2017-03-20
The objective of the LIVEC advanced study project was to develop a platform for large-scale integrated vacuum electronic circuits (LIVEC). Final Report, 1 Jul 2014 - 30 Jun 2015. Contract No. W911NF-14-C-0093, U.S. Army Research Office (COR: Dr. James Harvey). Distribution Unlimited.
NASA Astrophysics Data System (ADS)
Chan, YinThai
2016-03-01
Colloidal semiconductor nanocrystals are ideal fluorophores for clinical diagnostics, therapeutics, and highly sensitive biochip applications due to their high photostability, size-tunable color of emission and flexible surface chemistry. The relatively recent development of core-seeded semiconductor nanorods showed that the presence of a rod-like shell can confer even more advantageous physicochemical properties than their spherical counterparts, such as large multi-photon absorption cross-sections and facet-specific chemistry that can be exploited to deposit secondary nanoparticles. It may be envisaged that these highly fluorescent nanorods can be integrated with large scale integrated (LSI) microfluidic systems that allow miniaturization and integration of multiple biochemical processes in a single device at the nanoliter scale, resulting in a highly sensitive and automated detection platform. In this talk, I will describe a LSI microfluidic device that integrates RNA extraction, reverse transcription to cDNA, amplification and target pull-down to detect histidine decarboxylase (HDC) gene directly from human white blood cells samples. When anisotropic colloidal semiconductor nanorods (NRs) were used as the fluorescent readout, the detection limit was found to be 0.4 ng of total RNA, which was much lower than that obtained using spherical quantum dots (QDs) or organic dyes. This was attributed to the large action cross-section of NRs and their high probability of target capture in a pull-down detection scheme. The combination of large scale integrated microfluidics with highly fluorescent semiconductor NRs may find widespread utility in point-of-care devices and multi-target diagnostics.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed, computational, data storage, networking, instruments, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing node into distributed environment, remote access to high data rate instruments, and exploratory grid environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choo, Jaegul; Kim, Hannah; Clarkson, Edward; ...
2018-01-31
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
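The retrieve-cluster-project pipeline described above can be approximated with off-the-shelf components. The sketch below is not VisIRR's actual algorithm; it is a stand-in using TF-IDF features, NMF topics, and t-SNE for the 2D layout, and it assumes a corpus large enough for these defaults to be meaningful.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.manifold import TSNE

def topic_map(docs, n_topics=10):
    """Toy version of a retrieve -> cluster -> project pipeline:
    TF-IDF features, NMF topic weights, then a 2D layout for plotting."""
    X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(docs)
    W = NMF(n_components=n_topics, init="nndsvd", random_state=0).fit_transform(X)
    xy = TSNE(n_components=2, perplexity=min(30, len(docs) - 1),
              random_state=0).fit_transform(W)
    return W.argmax(axis=1), xy   # topic label and 2D position per document
```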
KA-SB: from data integration to large scale reasoning
Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F
2009-01-01
Background The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires having access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for final users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information could be further analyzed later (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily, which shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main memory-based reasoners. We propose a process for creating persistent and scalable knowledgebases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402
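As a sketch of what querying such a crystallized set of ontology instances might look like, the snippet below runs a SPARQL query over BioPAX Level 3 instances with rdflib. The file name is hypothetical, and the KA-SB graphical interface would generate a query of this shape rather than require hand-written SPARQL.

```python
from rdflib import Graph

g = Graph()
# Hypothetical export of the integrated instances produced by KOMF
g.parse("integrated_instances.owl", format="xml")

# List proteins and their display names in the BioPAX Level 3 schema
q = """
PREFIX bp: <http://www.biopax.org/release/biopax-level3.owl#>
SELECT ?protein ?name WHERE {
    ?protein a bp:Protein ;
             bp:displayName ?name .
}
"""
for row in g.query(q):
    print(row.protein, row.name)
```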
An integrated probe design for measuring food quality in a microwave environment
NASA Astrophysics Data System (ADS)
O'Farrell, M.; Sheridan, C.; Lewis, E.; Zhao, W. Z.; Sun, T.; Grattan, K. T. V.
2007-07-01
The work presented describes the development of a novel integrated optical sensor system for the simultaneous and online measurement of the colour and temperature of food as it cooks in large-scale microwave and hybrid oven systems. The integrated probe contains two different sensor concepts: one to monitor temperature, based on Fibre Bragg Grating (FBG) technology, and a second for meat quality, based on reflection spectroscopy in the visible wavelength range. The combination of the two sensors into a single probe requires a careful configuration of the sensor approaches in the creation of an integrated probe design.
ERIC Educational Resources Information Center
Kharabe, Amol T.
2012-01-01
Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…
Integrative Systems Biology for Data Driven Knowledge Discovery
Greene, Casey S.; Troyanskaya, Olga G.
2015-01-01
Integrative systems biology is an approach that brings together diverse high throughput experiments and databases to gain new insights into biological processes or systems at molecular through physiological levels. These approaches rely on diverse high-throughput experimental techniques that generate heterogeneous data by assaying varying aspects of complex biological processes. Computational approaches are necessary to provide an integrative view of these experimental results and enable data-driven knowledge discovery. Hypotheses generated from these approaches can direct definitive molecular experiments in a cost effective manner. Using integrative systems biology approaches, we can leverage existing biological knowledge and large-scale data to improve our understanding of yet unknown components of a system of interest and how its malfunction leads to disease. PMID:21044756
VLSI technology for smaller, cheaper, faster return link systems
NASA Technical Reports Server (NTRS)
Nanzetta, Kathy; Ghuman, Parminder; Bennett, Toby; Solomon, Jeff; Dowling, Jason; Welling, John
1994-01-01
Very Large Scale Integration (VLSI) Application-specific Integrated Circuit (ASIC) technology has enabled substantially smaller, cheaper, and more capable telemetry data systems. However, the rapid growth in available ASIC fabrication densities has far outpaced the application of this technology to telemetry systems. Available densities have grown by well over an order of magnitude since NASA's Goddard Space Flight Center (GSFC) first began developing ASIC's for ground telemetry systems in 1985. To take advantage of these higher integration levels, a new generation of ASIC's for return link telemetry processing is under development. These new submicron devices are designed to further reduce the cost and size of NASA return link processing systems while improving performance. This paper describes these highly integrated processing components.
Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector
NASA Astrophysics Data System (ADS)
Kumar, P.; Mishra, T.; Banerjee, R.
2017-12-01
India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent emission intensity reduction (with respect to 2005 levels) along with large scale renewable energy targets (100GW solar, 60GW wind, and 10GW biomass energy by 2022) in the INDCs submitted at the Paris agreement. But large scale integration of renewable energy is a complex process which faces a number of problems, such as capital intensiveness, matching intermittent generation to loads with minimal storage capacity, and maintaining reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze its implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal and gas fired units discretely with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios. The base case scenario (no RE addition), along with an INDC scenario (with 100GW solar, 60GW wind, 10GW biomass) and a low RE scenario (50GW solar, 30GW wind), have been created to analyze the implications of large scale integration of variable renewable energy. The results provide insights on the trade-offs and investment decisions involved in achieving mitigation targets. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
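The trade-offs such a model explores can be illustrated with a toy linear dispatch problem: minimize generation cost while meeting hourly demand, with a minimum-generation constraint on coal and hourly caps on solar availability. This is a deliberately tiny stand-in for the unit commitment and dispatch features of TIMES; all numbers are illustrative.

```python
from scipy.optimize import linprog

# Toy 3-hour dispatch over coal, gas, solar (costs in $/MWh; illustrative)
cost = [25.0, 50.0, 0.0]
demand = [900.0, 1000.0, 950.0]                     # MW per hour
solar_avail = [0.0, 300.0, 150.0]                   # MW available per hour
coal_cap, gas_cap, coal_min = 700.0, 400.0, 300.0   # minimum-generation on coal

c, bounds, A_eq = [], [], []
for h in range(3):
    c += cost
    bounds += [(coal_min, coal_cap), (0.0, gas_cap), (0.0, solar_avail[h])]
    row = [0.0] * 9
    row[3 * h:3 * h + 3] = [1.0, 1.0, 1.0]          # hour-h generation = demand
    A_eq.append(row)

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print(res.x.reshape(3, 3))    # dispatch per hour: [coal, gas, solar]
```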
Integrating complexity into data-driven multi-hazard supply chain network strategies
Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.
2013-01-01
Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build a SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.
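A minimal illustration of the SCN-as-graph view: represent suppliers, distribution centers, and retailers as nodes, knock out a hub to mimic a disaster, then ask which demand nodes lose all supply paths and which surviving nodes matter most for restoration. This toy uses networkx and invented node names; it is not the paper's CAS model.

```python
import networkx as nx

# Toy supply chain network: suppliers -> distribution centers -> retailers
G = nx.DiGraph()
G.add_edges_from([("S1", "DC1"), ("S2", "DC1"), ("S2", "DC2"),
                  ("DC1", "R1"), ("DC1", "R2"), ("DC2", "R2"), ("DC2", "R3")])

damaged = G.copy()
damaged.remove_node("DC1")              # simulate losing a hub in the disaster

unserved = [r for r in ("R1", "R2", "R3")
            if not any(nx.has_path(damaged, s, r) for s in ("S1", "S2"))]
print("retailers cut off:", unserved)   # -> ['R1']

# Rank surviving nodes for restoration priority by how much flow they broker
print(nx.betweenness_centrality(damaged))
```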
WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’
NASA Astrophysics Data System (ADS)
Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.
2009-12-01
The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well-integrated into large-scale earth system analyses. A major hurdle is the lack of accessible, geospatial data of peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data, however, are available at the local scale. Although a comprehensive global database on peatlands probably lags similar data on more economically important ecosystems such as forests, grasslands, and croplands, a large amount of field data have been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but existing data have not been assembled into a single geospatial database that is publicly accessible, or do not depict data with the level of detail that is needed in the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art web server and innovative online mapping technologies and is designed to create such a global database through 'crowd-sourcing'. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial data input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, and facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community, and will help to identify significant data gaps.
Integrated Data Modeling and Simulation on the Joint Polar Satellite System Program
NASA Technical Reports Server (NTRS)
Roberts, Christopher J.; Boyce, Leslye; Smith, Gary; Li, Angela; Barrett, Larry
2012-01-01
The Joint Polar Satellite System is a modern, large-scale, complex, multi-mission aerospace program, and presents a variety of design, testing and operational challenges due to: (1) System Scope: multi-mission coordination, role, responsibility and accountability challenges stemming from porous/ill-defined system and organizational boundaries (including foreign policy interactions); (2) Degree of Concurrency: design, implementation, integration, verification and operation occurring simultaneously, at multiple scales in the system hierarchy; (3) Multi-Decadal Lifecycle: technical obsolescence, reliability and sustainment concerns, including those related to the organizational and industrial base. Additionally, these systems tend to become embedded in the broader societal infrastructure, resulting in new system stakeholders with perhaps different preferences; (4) Barriers to Effective Communications: process and cultural issues that emerge due to geographic dispersion and as one spans boundaries including gov./contractor, NASA/Other USG, and international relationships.
Integrating Bioregenerative Foods into the Exploration Spaceflight Food System
NASA Technical Reports Server (NTRS)
Douglas, Grace L.
2017-01-01
Food, the nutrition it provides, and the eating experiences surrounding it, are central to performance, health, and psychosocial wellbeing on long duration spaceflight missions. Exploration missions will require a spaceflight food system that is safe, nutritious, and acceptable for up to five years, possibly without cold storage. Many of the processed and packaged spaceflight foods currently used on the International Space Station will not retain acceptable quality or required levels of key nutrients under these conditions. The addition of bioregenerative produce to exploration missions may become an important countermeasure to the nutritional gaps and a resource to support psychosocial health. Bioregenerative produce will be central to establishment of Earth-independence as exploration extends deeper into space. However, bioregenerative foods introduce food safety and scarcity risks that must be eliminated prior to crew reliance on these systems. The pathway to Earth independence will require small-scale integration and validation prior to large scale bioregenerative dependence. Near term exploration missions offer the opportunity to establish small scale supplemental salad crop and fruit systems and validate infrastructure reliability, nutritional potential, and the psychosocial benefits necessary to promote further bioregenerative integration.
Finite-size scaling of eigenstate thermalization
NASA Astrophysics Data System (ADS)
Beugeling, W.; Moessner, R.; Haque, Masudul
2014-04-01
According to the eigenstate thermalization hypothesis (ETH), even isolated quantum systems can thermalize because the eigenstate-to-eigenstate fluctuations of typical observables vanish in the limit of large systems. Of course, isolated systems are by nature finite and the main way of computing such quantities is through numerical evaluation for finite-size systems. Therefore, the finite-size scaling of the fluctuations of eigenstate expectation values is a central aspect of the ETH. In this work, we present numerical evidence that for generic nonintegrable systems these fluctuations scale with a universal power law D-1/2 with the dimension D of the Hilbert space. We provide heuristic arguments, in the same spirit as the ETH, to explain this universal result. Our results are based on the analysis of three families of models and several observables for each model. Each family includes integrable members and we show how the system size where the universal power law becomes visible is affected by the proximity to integrability.
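The D^{-1/2} scaling can be reproduced numerically in a few lines using a random-matrix stand-in for a nonintegrable Hamiltonian. The paper itself studies families of lattice models, so this is only an illustration of the claimed universal power law, not a reproduction of its results.

```python
import numpy as np

rng = np.random.default_rng(0)
for D in (200, 400, 800, 1600):
    # GOE-like random Hamiltonian as a stand-in for a nonintegrable system
    H = rng.normal(size=(D, D))
    H = (H + H.T) / np.sqrt(2 * D)
    _, vecs = np.linalg.eigh(H)
    A = np.diag(rng.normal(size=D))                 # a simple test observable
    eev = np.einsum("in,ij,jn->n", vecs, A, vecs)   # <n|A|n> for each eigenstate
    fluct = np.std(np.diff(eev))    # eigenstate-to-eigenstate fluctuation
    print(D, fluct, fluct * np.sqrt(D))   # last column ~ constant => D^(-1/2)
```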
Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.
Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco
2018-06-07
Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation to enhance recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published regarding their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review paper critically analyzes ATPS scale-up strategies to enhance the potential for industrial adoption. In particular, large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity, and economic modeling to predict large-scale costs are discussed. ATPS intensification to increase the amount of sample processed per system, the development of recycling strategies, and the creation of highly efficient predictive models are still areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review work attempts to present the areas of opportunity to increase ATPS attractiveness at industrial levels. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High density circuit technology, part 2
NASA Technical Reports Server (NTRS)
Wade, T. E.
1982-01-01
A multilevel metal interconnection system for very large scale integration (VLSI) systems utilizing polyimides as the interlayer dielectric material is described. A complete characterization of polyimide materials is given as well as experimental methods accomplished using a double level metal test pattern. A low temperature, double exposure polyimide patterning procedure is also presented.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This computational cost limits applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km² watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.
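The core idea, independent per-HRU computations within a time step fanned out across cores, can be sketched in Python with process-based parallelism. The real SWATGP uses OpenMP threads in compiled code; the function and field names here are invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_hru(hru):
    """Stand-in for one grid cell / HRU water-balance computation."""
    # ... the heavy per-cell hydrology would run here ...
    return hru["id"], hru["area"] * hru["runoff_coeff"]

def run_timestep(hrus, workers=15):
    # HRU computations within a routing level are independent, so they
    # can be fanned out across cores, the same idea as an OpenMP
    # parallel-for loop, here with Python worker processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(simulate_hru, hrus))

if __name__ == "__main__":
    hrus = [{"id": i, "area": 1.0, "runoff_coeff": 0.3} for i in range(1000)]
    print(len(run_timestep(hrus)))
```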
Chip-integrated optical power limiter based on an all-passive micro-ring resonator
NASA Astrophysics Data System (ADS)
Yan, Siqi; Dong, Jianji; Zheng, Aoling; Zhang, Xinliang
2014-10-01
Recent progress in silicon nanophotonics has dramatically advanced the possible realization of large-scale on-chip optical interconnect integration. Adopting photons as information carriers can break the performance bottlenecks of electronic integrated circuits, such as serious thermal losses and limited processing rates. However, in integrated photonic circuits, few reported works impose an upper limit on optical power and thereby protect optical devices from damage caused by high power. In this study, we experimentally demonstrate a feasible integrated scheme based on a single all-passive micro-ring resonator to realize optical power limiting, which functions similarly to a current-limiting circuit in electronics. Besides, we analyze the performance of the optical power limiter at various signal bit rates. The results show that the proposed device can limit the signal power effectively at a bit rate up to 20 Gbit/s without deteriorating the signal. Meanwhile, this ultra-compact silicon device is completely compatible with electronic technology (typically complementary metal-oxide semiconductor technology), which may pave the way for very large scale integrated photonic circuits for all-optical information processors and artificial intelligence systems.
Engineering management of large scale systems
NASA Technical Reports Server (NTRS)
Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.
1989-01-01
The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.
MAINTAINING DATA QUALITY IN THE PERFORMANCE OF A LARGE SCALE INTEGRATED MONITORING EFFORT
Macauley, John M. and Linda C. Harwell. In press. Maintaining Data Quality in the Performance of a Large Scale Integrated Monitoring Effort (Abstract). To be presented at EMAP Symposium 2004: Integrated Monitoring and Assessment for Effective Water Quality Management, 3-7 May 200...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-17
... Integrated Circuit Semiconductor Chips and Products Containing the Same; Notice of a Commission Determination... certain large scale integrated circuit semiconductor chips and products containing same by reason of... existence of a domestic industry. The Commission's notice of investigation named several respondents...
Very Large Scale Integration (VLSI).
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...
2017-08-18
The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO₂ emitters, and high-volume, continuous reservoirs such as deep saline formations to function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is to ever achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO₂ utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO₂ storage costs while simultaneously maximizing revenue streams via the utilization of CO₂ as a commodity for enhanced hydrocarbon recovery.
An Overview of NASA Efforts on Zero Boiloff Storage of Cryogenic Propellants
NASA Technical Reports Server (NTRS)
Hastings, Leon J.; Plachta, D. W.; Salerno, L.; Kittel, P.; Haynes, Davy (Technical Monitor)
2001-01-01
Future mission planning within NASA has increasingly motivated consideration of cryogenic propellant storage durations on the order of years as opposed to a few weeks or months. Furthermore, the advancement of cryocooler and passive insulation technologies in recent years has substantially improved the prospects for zero boiloff storage of cryogenics. Accordingly, a cooperative effort by NASA's Ames Research Center (ARC), Glenn Research Center (GRC), and Marshall Space Flight Center (MSFC) has been implemented to develop and demonstrate "zero boiloff" concepts for in-space storage of cryogenic propellants, particularly liquid hydrogen and oxygen. ARC is leading the development of flight-type cryocoolers, GRC the subsystem development and small scale testing, and MSFC the large scale and integrated system level testing. Thermal and fluid modeling involves a combined effort by the three Centers. Recent accomplishments include: 1) development of "zero boiloff" analytical modeling techniques for sizing the storage tankage, passive insulation, cryocooler, power source mass, and radiators; 2) an early subscale demonstration with liquid hydrogen; 3) procurement of a flight-type 10 watt, 95 K pulse tube cryocooler for liquid oxygen storage; and 4) assembly of a large-scale test article for an early demonstration of the integrated operation of passive insulation, destratification/pressure control, and cryocooler (commercial unit) subsystems to achieve zero boiloff storage of liquid hydrogen. Near term plans include the large-scale integrated system demonstration testing this summer, subsystem testing of the flight-type pulse-tube cryocooler with liquid nitrogen (oxygen simulant), and continued development of a flight-type liquid hydrogen pulse tube cryocooler.
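The sizing logic behind zero boiloff is a simple heat balance: the cryocooler's lift must at least match the parasitic heat leak through insulation and penetrations. A back-of-envelope sketch follows, with every number an assumption rather than a value from the test articles described above.

```python
# Zero-boiloff balance: cryocooler lift >= parasitic heat leak into the tank.
mli_flux_w_m2 = 0.5      # assumed heat flux through the MLI blanket, W/m^2
tank_area_m2 = 30.0      # assumed tank surface area, m^2
penetrations_w = 5.0     # assumed strut/plumbing conduction load, W

heat_leak_w = mli_flux_w_m2 * tank_area_m2 + penetrations_w
specific_power = 100.0   # assumed W of input power per W of lift at ~20 K
print(f"required lift: {heat_leak_w:.1f} W -> "
      f"input power ~ {heat_leak_w * specific_power / 1e3:.1f} kW")
```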
Selection and Manufacturing of Membrane Materials for Solar Sails
NASA Technical Reports Server (NTRS)
Bryant, Robert G.; Seaman, Shane T.; Wilkie, W. Keats; Miyaucchi, Masahiko; Working, Dennis C.
2013-01-01
Commercial metallized polyimide or polyester films and hand-assembly techniques are acceptable for small solar sail technology demonstrations, although scaling this approach to large sail areas is impractical. Opportunities now exist to use new polymeric materials specifically designed for solar sailing applications, and take advantage of integrated sail manufacturing to enable large-scale solar sail construction. This approach has, in part, been demonstrated on the JAXA IKAROS solar sail demonstrator, and NASA Langley Research Center is now developing capabilities to produce ultrathin membranes for solar sails by integrating resin synthesis with film forming and sail manufacturing processes. This paper will discuss the selection and development of polymer material systems for space, and these new processes for producing ultrathin high-performance solar sail membrane films.
Materials Integration and Doping of Carbon Nanotube-based Logic Circuits
NASA Astrophysics Data System (ADS)
Geier, Michael
Over the last 20 years, extensive research into the structure and properties of single-walled carbon nanotubes (SWCNTs) has elucidated many of the exceptional qualities possessed by SWCNTs, including record-setting tensile strength, excellent chemical stability, distinctive optoelectronic features, and outstanding electronic transport characteristics. In order to exploit these remarkable qualities, many application-specific hurdles must be overcome before the material can be implemented in commercial products. For electronic applications, recent advances in sorting SWCNTs by electronic type have enabled significant progress towards SWCNT-based integrated circuits. Despite these advances, demonstrations of SWCNT-based devices with suitable characteristics for large-scale integrated circuits have been limited. The processing methodologies, materials integration, and mechanistic understanding of electronic properties developed in this dissertation have enabled unprecedented scales of SWCNT-based transistor fabrication and integrated circuit demonstrations. Innovative materials selection and processing methods are at the core of this work, and these advances have led to transistors with the necessary transport properties required for modern circuit integration. First, extensive collaborations with other research groups allowed for the exploration of SWCNT thin-film transistors (TFTs) using a wide variety of materials and processing methods such as new dielectric materials, hybrid semiconductor materials systems, and solution-based printing of SWCNT TFTs. These materials were integrated into circuit demonstrations such as NOR and NAND logic gates, voltage-controlled ring oscillators, and D-flip-flops using both rigid and flexible substrates. This dissertation explores strategies for implementing complementary SWCNT-based circuits, which were developed by using local metal gate structures that achieve enhancement-mode p-type and n-type SWCNT TFTs with widely separated and symmetric threshold voltages. Additionally, a novel n-type doping procedure for SWCNT TFTs was also developed utilizing a solution-processed organometallic small molecule to demonstrate the first network top-gated n-type SWCNT TFTs. Lastly, new doping and encapsulation layers were incorporated to stabilize both p-type and n-type SWCNT TFT electronic properties, which enabled the fabrication of large-scale memory circuits. Employing these materials and processing advances has addressed many application-specific barriers to commercialization. For instance, the first thin-film SWCNT complementary metal-oxide-semiconductor (CMOS) logic devices are demonstrated with sub-nanowatt static power consumption and full rail-to-rail voltage transfer characteristics. With the introduction of a new n-type Rh-based molecular dopant, the first SWCNT TFTs are fabricated in top-gate geometries over large areas with high yield. Then by utilizing robust encapsulation methods, stable and uniform electronic performance of both p-type and n-type SWCNT TFTs has been achieved. Based on these complementary SWCNT TFTs, it is possible to simulate, design, and fabricate arrays of low-power static random access memory (SRAM) circuits, achieving large-scale integration for the first time based on solution-processed semiconductors. Together, this work provides a direct pathway for solution-processable, large-scale, power-efficient advanced integrated logic circuits and systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jong-Won; Hirao, Kimihiko
Long-range corrected density functional theory (LC-DFT) attracts many chemists' attention as a quantum chemical method applicable to large molecular systems and their property calculations. However, the high computational cost of evaluating the long-range HF exchange is a major obstacle to applying it to large molecular systems and solid-state materials. To address this problem, we propose a linear-scaling method for the HF exchange integration, in particular for the LC-DFT hybrid functional.
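For orientation, a minimal sketch of the range separation that underlies LC-DFT (the standard textbook split, not the authors' specific linear-scaling formulation): the Coulomb operator is partitioned with the error function,

\[
\frac{1}{r_{12}} \;=\; \underbrace{\frac{1-\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{short range: DFT exchange}} \;+\; \underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range: HF exchange}},
\]

where the parameter \(\mu\) controls the partition. The long-range HF exchange generated by the second piece is the costly term that the proposed linear-scaling integration scheme targets.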
NASA Technical Reports Server (NTRS)
Beatty, R.
1971-01-01
Metallization-related failure mechanisms were shown to be a major cause of integrated circuit failures under accelerated stress conditions, as well as in actual use under field operation. The integrated circuit industry is aware of the problem and is attempting to solve it in one of two ways: (1) developing a better understanding of the aluminum system, which is the most widely used metallization material for silicon integrated circuits as both single-level and multilevel metallization, or (2) evaluating alternative metal systems. Aluminum metallization offers many advantages, but also has limitations, particularly at elevated temperatures and high current densities. As an alternative, multilayer systems of the general form silicon device-metal-inorganic insulator-metal are being considered to produce large-scale integrated arrays. The merits and restrictions of metallization systems in current usage and systems under development are defined.
NASA Astrophysics Data System (ADS)
Jiang, Shulan; Shi, Tielin; Liu, Dan; Long, Hu; Xi, Shuang; Wu, Fengshun; Li, Xiaoping; Xia, Qi; Tang, Zirong
2014-09-01
Large-scale three-dimensional (3D) hybrid microelectrodes have been fabricated through a modified carbon microelectromechanical systems (Carbon-MEMS) process and an electrochemical deposition method. Greatly improved electrochemical performance has been shown for the 3D photoresist-derived carbon microelectrodes with the integration of carbon nanotubes (CNTs) and manganese dioxide (MnO2). The electrochemical measurements of the microelectrodes indicate that the specific geometric capacitance can reach up to 238 mF cm⁻² at a current density of 0.5 mA cm⁻². The capacitance loss is less than 18.2% of the original value after 6000 charge-discharge cycles. This study shows that stacking of MnO2 film and integrating of CNTs onto the 3D glassy carbon microelectrodes have great potential for on-chip microcapacitors as energy storage devices, and the presented approach is promising for large-scale and low-cost manufacturing.
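As a pointer to how such figures are typically obtained (a standard relation, assumed here rather than taken from the paper), the areal capacitance follows from galvanostatic charge-discharge as

\[
C_{\text{areal}} \;=\; \frac{I\,\Delta t}{A\,\Delta V},
\]

where \(I\) is the discharge current, \(\Delta t\) the discharge time, \(A\) the electrode footprint area, and \(\Delta V\) the voltage window; the reported 238 mF cm⁻² at 0.5 mA cm⁻² would follow from the measured \(\Delta t\) and \(\Delta V\).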
Large-scale data integration framework provides a comprehensive view on glioblastoma multiforme.
Ovaska, Kristian; Laakso, Marko; Haapa-Paananen, Saija; Louhimo, Riku; Chen, Ping; Aittomäki, Viljami; Valo, Erkka; Núñez-Fontarnau, Javier; Rantanen, Ville; Karinen, Sirkku; Nousiainen, Kari; Lahesmaa-Korpinen, Anna-Maria; Miettinen, Minna; Saarinen, Lilli; Kohonen, Pekka; Wu, Jianmin; Westermarck, Jukka; Hautaniemi, Sampsa
2010-09-07
Coordinated efforts to collect large-scale data sets provide a basis for systems-level understanding of complex diseases. In order to translate these fragmented and heterogeneous data sets into knowledge and medical benefits, advanced computational methods for data analysis, integration and visualization are needed. We introduce a novel data integration framework, Anduril, for translating fragmented large-scale data into testable predictions. The Anduril framework allows rapid integration of heterogeneous data with state-of-the-art computational methods and existing knowledge in bio-databases. Anduril automatically generates thorough summary reports and a website that shows the most relevant features of each gene at a glance, allows sorting of data based on different parameters, and provides direct links to more detailed data on genes, transcripts or genomic regions. Anduril is open-source; all methods and documentation are freely available. We have integrated multidimensional molecular and clinical data from 338 subjects having glioblastoma multiforme, one of the deadliest and most poorly understood cancers, using Anduril. The central objective of our approach is to identify genetic loci and genes that have significant survival effect. Our results suggest several novel genetic alterations linked to glioblastoma multiforme progression and, more specifically, reveal Moesin as a novel glioblastoma multiforme-associated gene that has a strong survival effect and whose depletion in vitro significantly inhibited cell proliferation. All analysis results are available as a comprehensive website. Our results demonstrate that integrated analysis and visualization of multidimensional and heterogeneous data by Anduril enables drawing conclusions on functional consequences of large-scale molecular data. Many of the identified genetic loci and genes having significant survival effect have not been reported earlier in the context of glioblastoma multiforme. Thus, in addition to generally applicable novel methodology, our results provide several glioblastoma multiforme candidate genes for further studies. Anduril is available at http://csbi.ltdk.helsinki.fi/anduril/ and the glioblastoma multiforme analysis results at http://csbi.ltdk.helsinki.fi/anduril/tcga-gbm/
Baby, André Rolim; Santoro, Diego Monegatto; Velasco, Maria Valéria Robles; Dos Reis Serra, Cristina Helena
2008-09-01
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage comprises the consolidation and integration of the previous phases of development. This phase is extremely important, since many process limitations which do not appear on the small scale become significant on the transposition to a large one. Since the scientific literature presents only a few reports on the characterization of emulsified systems during scale-up, this research work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], μm) and rheological profile. Transposition occurred from a batch of 500 g to one of 50,000 g. Semi-industrial manufacturing involved distinct conditions of agitation and homogenization intensity. Comparing the non-ionic and anionic systems, it was observed that anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. Besides that, for the concentrations tested, augmentation of the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with the elevation of the emulsifier concentration, suggesting inadequate stability. The scale-up provoked more significant alterations in the rheological profile and droplet size of the anionic systems than of the non-ionic ones.
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration when making scheduling plans for a regional grid. In this paper, a systematic multi-time scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In terms of time scales, the problem is discussed from monthly, weekly, day-ahead, and within-day down to day-behind scheduling, and the system also contains multiple generator types, including thermal units, hydro plants, wind turbines and pumped-storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operation results further verified its effectiveness.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... Integrated Circuit Semiconductor Chips and Products Containing Same; Notice of Investigation AGENCY: U.S... of certain large scale integrated circuit semiconductor chips and products containing same by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...
Integrated water and renewable energy management: the Acheloos-Peneios region case study
NASA Astrophysics Data System (ADS)
Koukouvinos, Antonios; Nikolopoulos, Dionysis; Efstratiadis, Andreas; Tegos, Aristotelis; Rozos, Evangelos; Papalexiou, Simon-Michael; Dimitriadis, Panayiotis; Markonis, Yiannis; Kossieris, Panayiotis; Tyralis, Christos; Karakatsanis, Georgios; Tzouka, Katerina; Christofides, Antonis; Karavokiros, George; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Within the ongoing research project "Combined Renewable Systems for Sustainable Energy Development" (CRESSENDO), we have developed a novel stochastic simulation framework for optimal planning and management of large-scale hybrid renewable energy systems, in which hydropower plays the dominant role. The methodology and associated computer tools are tested in two major adjacent river basins in Greece (Acheloos, Peneios) extending over 15 500 km2 (12% of Greek territory). River Acheloos is characterized by very high runoff and holds ~40% of the installed hydropower capacity of Greece. On the other hand, the Thessaly plain drained by Peneios - a key agricultural region for the national economy - usually suffers from water scarcity and systematic environmental degradation. The two basins are interconnected through diversion projects, existing and planned, thus formulating a unique large-scale hydrosystem whose future has been the subject of a great controversy. The study area is viewed as a hypothetically closed, energy-autonomous, system, in order to evaluate the perspectives for sustainable development of its water and energy resources. In this context we seek an efficient configuration of the necessary hydraulic and renewable energy projects through integrated modelling of the water and energy balance. We investigate several scenarios of energy demand for domestic, industrial and agricultural use, assuming that part of the demand is fulfilled via wind and solar energy, while the excess or deficit of energy is regulated through large hydroelectric works that are equipped with pumping storage facilities. The overall goal is to examine under which conditions a fully renewable energy system can be technically and economically viable for such large spatial scale.
A fully reconfigurable photonic integrated signal processor
NASA Astrophysics Data System (ADS)
Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping
2016-03-01
Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
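To make the three signal-processing functions concrete, here is a discrete-time electronic analogue in Python (an illustrative sketch only; the paper's processor realizes these operations photonically on an InP-InGaAsP chip, not with the filters below):

```python
import numpy as np
from scipy.signal import lfilter, hilbert

fs = 1000.0                        # sample rate chosen for the sketch
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)      # test waveform

# Temporal integration: running sum, y[n] = y[n-1] + x[n]/fs
y_int = lfilter([1 / fs], [1, -1], x)

# Temporal differentiation: first difference, y[n] = (x[n] - x[n-1]) * fs
y_diff = lfilter([fs, -fs], [1], x)

# Hilbert transformation: imaginary part of the analytic signal
y_hilb = np.imag(hilbert(x))
```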
Transmission system protection screening for integration of offshore wind power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sajadi, A.; Strezoski, L.; Clark, K.
2018-02-21
This paper develops an efficient methodology for protection screening of large-scale transmission systems as part of the planning studies for the integration of offshore wind power plants into the power grid. This methodology helps determine whether any upgrades to the protection system are required. Uncertainty is considered in the form of variability of the power generation by the offshore wind power plant. This paper uses the integration of a 1000 MW offshore wind power plant operating in Lake Erie into the FirstEnergy/PJM service territory as a case study. The study uses a realistic model of a 63,000-bus test system that represents the U.S. Eastern Interconnection.
Bidirectional Neural Interfaces
Greenwald, Elliot; Masters, Matthew R.; Thakor, Nitish V.
2016-01-01
A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very-large-scale integration (VLSI) has advanced the design of complex integrated circuits. System-on-chip (SoC) devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems. PMID:26753776
A Data Management System Integrating Web-Based Training and Randomized Trials
ERIC Educational Resources Information Center
Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D.
2011-01-01
This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance…
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation
Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.
2000-01-01
Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity and assessment effectiveness often can be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways for designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.
NASA Technical Reports Server (NTRS)
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration, while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.
Watkins, David W; de Moraes, Márcia M G Alcoforado; Asbjornsen, Heidi; Mayer, Alex S; Licata, Julian; Lopez, Jose Gutierrez; Pypker, Thomas G; Molina, Vivianna Gamez; Marques, Guilherme Fernandes; Carneiro, Ana Cristina Guimaraes; Nuñez, Hector M; Önal, Hayri; da Nobrega Germano, Bruna
2015-12-01
Large-scale bioenergy production will affect the hydrologic cycle in multiple ways, including changes in canopy interception, evapotranspiration, infiltration, and the quantity and quality of surface runoff and groundwater recharge. As such, the water footprints of bioenergy sources vary significantly by type of feedstock, soil characteristics, cultivation practices, and hydro-climatic regime. Furthermore, water management implications of bioenergy production depend on existing land use, relative water availability, and competing water uses at a watershed scale. This paper reviews previous research on the water resource impacts of bioenergy production-from plot-scale hydrologic and nutrient cycling impacts to watershed and regional scale hydro-economic systems relationships. Primary gaps in knowledge that hinder policy development for integrated management of water-bioenergy systems are highlighted. Four case studies in the Americas are analyzed to illustrate relevant spatial and temporal scales for impact assessment, along with unique aspects of biofuel production compared to other agroforestry systems, such as energy-related conflicts and tradeoffs. Based on the case studies, the potential benefits of integrated resource management are assessed, as is the need for further case-specific research.
Evolution of the Tropical Cyclone Integrated Data Exchange And Analysis System (TC-IDEAS)
NASA Technical Reports Server (NTRS)
Turk, J.; Chao, Y.; Haddad, Z.; Hristova-Veleva, S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Licata, S.; Poulsen, W.; Su, H.;
2010-01-01
The Tropical Cyclone Integrated Data Exchange and Analysis System (TC-IDEAS) is being jointly developed by the Jet Propulsion Laboratory (JPL) and the Marshall Space Flight Center (MSFC) as part of NASA's Hurricane Science Research Program. The long-term goal is to create a comprehensive tropical cyclone database of satellite and airborne observations, in-situ measurements and model simulations containing parameters that pertain to the thermodynamic and microphysical structure of the storms; the air-sea interaction processes; and the large-scale environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauder, C.
This subcontract report was completed under the auspices of the NREL/SCE High-Penetration Photovoltaic (PV) Integration Project, which is co-funded by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and the California Solar Initiative (CSI) Research, Development, Demonstration, and Deployment (RD&D) program funded by the California Public Utility Commission (CPUC) and managed by Itron. This project is focused on modeling, quantifying, and mitigating the impacts of large utility-scale PV systems (generally 1-5 MW in size) that are interconnected to the distribution system. This report discusses the concerns utilities have when interconnecting large PV systems that interconnect using PV inverters (a specific application of frequency converters). Additionally, a number of capabilities of PV inverters are described that could be implemented to mitigate the distribution system-level impacts of high-penetration PV integration. Finally, the main issues that need to be addressed to ease the interconnection of large PV systems to the distribution system are presented.
Computational singular perturbation analysis of stochastic chemical systems with stiffness
NASA Astrophysics Data System (ADS)
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.
2017-04-01
Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
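A schematic of the setting, as a hedged sketch of the generic formulation (the paper's precise SCSP construction is not reproduced here): the reaction system is written as a stochastic differential equation whose drift Jacobian exhibits a spectral gap,

\[
dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad J = \frac{\partial f}{\partial X}, \qquad |\lambda_1| \ge \cdots \ge |\lambda_M| \gg |\lambda_{M+1}| \ge \cdots,
\]

and CSP-type analysis seeks a time-dependent basis in which the \(M\) fast modes are treated as exhausted, so that explicit time integration can proceed at the slow time scale.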
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
HIV scale-up in Mozambique: Exceptionalism, normalisation and global health
Høg, Erling
2014-01-01
The large-scale introduction of HIV and AIDS services in Mozambique from 2000 onwards occurred in the context of deep political commitment to sovereign nation-building and an important transition in the nation's health system. Simultaneously, the international community encountered a willing state partner that recognised the need to take action against the HIV epidemic. This article examines two critical policy shifts: sustained international funding and public health system integration (the move from parallel to integrated HIV services). The Mozambican government struggles to support its national health system against privatisation, NGO competition and internal brain drain. This is a sovereignty issue. However, the dominant discourse on self-determination shows a contradictory twist: it is part of the political rhetoric to keep the sovereignty discourse alive, while the real challenge is coordination, not partnerships. Nevertheless, we need more anthropological studies to understand the political implications of global health funding and governance. Other studies need to examine the consequences of public health system integration for the quality of access to health care. PMID:24499102
HIV scale-up in Mozambique: exceptionalism, normalisation and global health.
Høg, Erling
2014-01-01
The large-scale introduction of HIV and AIDS services in Mozambique from 2000 onwards occurred in the context of deep political commitment to sovereign nation-building and an important transition in the nation's health system. Simultaneously, the international community encountered a willing state partner that recognised the need to take action against the HIV epidemic. This article examines two critical policy shifts: sustained international funding and public health system integration (the move from parallel to integrated HIV services). The Mozambican government struggles to support its national health system against privatisation, NGO competition and internal brain drain. This is a sovereignty issue. However, the dominant discourse on self-determination shows a contradictory twist: it is part of the political rhetoric to keep the sovereignty discourse alive, while the real challenge is coordination, not partnerships. Nevertheless, we need more anthropological studies to understand the political implications of global health funding and governance. Other studies need to examine the consequences of public health system integration for the quality of access to health care.
Costs and cost-effectiveness of vector control in Eritrea using insecticide-treated bed nets.
Yukich, Joshua O; Zerom, Mehari; Ghebremeskel, Tewolde; Tediosi, Fabrizio; Lengeler, Christian
2009-03-30
While insecticide-treated nets (ITNs) are a recognized effective method for preventing malaria, there has been an extensive debate in recent years about the best large-scale implementation strategy. Implementation costs and cost-effectiveness are important elements to consider when planning ITN programmes, but so far little information on these aspects is available from national programmes. This study uses a standardized methodology, as part of a larger comparative study, to collect cost data and cost-effectiveness estimates from a large programme providing ITNs at the community level and ante-natal care facilities in Eritrea. This is a unique model of ITN implementation fully integrated into the public health system. Base case analysis results indicated that the average annual cost of ITN delivery (2005 USD 3.98) was very attractive when compared with past ITN delivery studies at different scales. Financing was largely from donor sources though the Eritrean government and net users also contributed funding. The intervention's cost-effectiveness was in a highly attractive range for sub-Saharan Africa. The cost per DALY averted was USD 13 - 44. The cost per death averted was USD 438-1449. Distribution of nets coincided with significant increases in coverage and usage of nets nationwide, approaching or exceeding international targets in some areas. ITNs can be cost-effectively delivered at a large scale in sub-Saharan Africa through a distribution system that is highly integrated into the health system. Operating and sustaining such a system still requires strong donor funding and support as well as a functional and extensive system of health facilities and community health workers already in place.
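For readers unfamiliar with the metric, the cost-effectiveness ratios quoted above are computed schematically as (a standard health-economics definition, assumed here rather than quoted from the study)

\[
\text{cost per DALY averted} \;=\; \frac{\text{annualized net programme cost}}{\text{DALYs averted per year}},
\]

so the reported range of USD 13-44 per DALY averted reflects the spread of the underlying cost and effectiveness estimates.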
Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs
NASA Astrophysics Data System (ADS)
Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.
2015-03-01
This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.
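The "heuristic linear relationship" referred to above is classically expressed as Rent's rule (standard form, not a result specific to this talk):

\[
T = t\,G^{p} \quad\Longleftrightarrow\quad \log T = \log t + p \log G,
\]

where \(T\) is the number of external connections of a circuit block, \(G\) the number of logic gates it contains, \(t\) the average number of terminals per gate, and \(p\) the Rent exponent; the relationship is linear on a log-log plot.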
Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning
2007-01-01
The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A large share of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access, decentralized at their production source; a connector acting as a proxy between the CIS and the external world; an information mediator as a data access point; and the client side. The proposed design will be implemented at six clinical centers participating in the @neurIST project as part of a larger system on data integration and reuse for aneurysm treatment.
Slow dynamics in translation-invariant quantum lattice models
NASA Astrophysics Data System (ADS)
Michailidis, Alexios A.; Žnidarič, Marko; Medvedyeva, Mariya; Abanin, Dmitry A.; Prosen, Tomaž; Papić, Z.
2018-03-01
Many-body quantum systems typically display fast dynamics and ballistic spreading of information. Here we address the open problem of how slow the dynamics can be after a generic breaking of integrability by local interactions. We develop a method based on degenerate perturbation theory that reveals slow dynamical regimes and delocalization processes in general translation invariant models, along with accurate estimates of their delocalization time scales. Our results shed light on the fundamental questions of the robustness of quantum integrable systems and the possibility of many-body localization without disorder. As an example, we construct a large class of one-dimensional lattice models where, despite the absence of asymptotic localization, the transient dynamics is exceptionally slow, i.e., the dynamics is indistinguishable from that of many-body localized systems for the system sizes and time scales accessible in experiments and numerical simulations.
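As a rough sketch of the kind of estimate degenerate perturbation theory yields here (generic form, assumed for illustration rather than quoted from the paper): writing the Hamiltonian as

\[
H = H_0 + \lambda V,
\]

with degenerate sectors of \(H_0\) first connected by \(V\) at perturbative order \(n\), the effective matrix element scales as \(\lambda_{\mathrm{eff}} \sim \Delta\,(\lambda/\Delta)^{n}\) for a characteristic level spacing \(\Delta\), giving a delocalization time scale \(t_* \sim \lambda_{\mathrm{eff}}^{-1}\) that grows rapidly with \(n\) and can vastly exceed accessible simulation times.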
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey
2001-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and the chemistry module, have been extensively validated; and their parallel performance on large-scale parallel systems has been evaluated and optimized. However the scalar PDF module and the Spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
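To illustrate the storage/recomputation trade-off, here is a deliberately simplified two-level scheme in Python: periodic "disk" checkpoints bound recomputation during the reverse sweep, with finer "memory" checkpoints inside each segment. This is a sketch under simplifying assumptions (periodic rather than binomial placement, placeholder model and adjoint steps), not the paper's algorithm:

```python
# Simplified two-level checkpointing: coarse "disk" checkpoints every DISK
# steps; fine "memory" checkpoints every MEM steps inside the segment
# currently being adjoined. Illustrative only.
DISK, MEM, N = 100, 10, 1000

def step(state):                 # forward model step (placeholder)
    return state + 1

def adjoint_step(state, adj):    # adjoint of one step (placeholder)
    return adj

state, disk = 0, {}
for n in range(N):               # forward sweep: keep disk checkpoints
    if n % DISK == 0:
        disk[n] = state
    state = step(state)

adj = 1.0
for seg in range(N - DISK, -1, -DISK):      # reverse sweep, segment by segment
    mem, s = {}, disk[seg]
    for n in range(seg, seg + DISK):        # recompute segment forward,
        if (n - seg) % MEM == 0:            # storing memory checkpoints
            mem[n] = s
        s = step(s)
    for n in range(seg + DISK - 1, seg - 1, -1):  # adjoin segment backwards
        base = max(k for k in mem if k <= n)
        s = mem[base]
        for _ in range(n - base):           # recompute state at step n
            s = step(s)
        adj = adjoint_step(s, adj)
```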
The large-scale organization of metabolic networks
NASA Astrophysics Data System (ADS)
Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.
2000-10-01
In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
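A minimal illustration of the "scale-free" diagnostic used in such analyses (toy data and a crude fit, purely for exposition): the degree distribution \(P(k)\) of a scale-free network follows a power law \(P(k) \sim k^{-\gamma}\), a straight line of slope \(-\gamma\) on log-log axes.

```python
import numpy as np
from collections import Counter

# Toy edge list standing in for a metabolic network: one hub metabolite
# linked to eight others (a crude star graph).
edges = [(0, i) for i in range(1, 9)]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Empirical degree distribution P(k); real analyses use far larger networks.
counts = Counter(degree.values())
k = np.array(sorted(counts))
pk = np.array([counts[x] for x in k]) / len(degree)
gamma = -np.polyfit(np.log(k), np.log(pk), 1)[0]   # crude log-log slope
print(f"estimated gamma ~ {gamma:.2f}")            # ~1.0 for this toy graph
```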
Integrity of Bolted Angle Connections Subjected to Simulated Column Removal
Weigand, Jonathan M.; Berman, Jeffrey W.
2016-01-01
Large-scale tests of steel gravity framing systems (SGFSs) have shown that the connections are critical to system integrity when a column suffers damage that compromises its ability to carry gravity loads. When supporting columns were removed, the SGFSs redistributed gravity loads through the development of an alternate load path in a sustained tensile configuration resulting from large vertical deflections. The ability of the system to sustain such an alternate load path depends on the capacity of the gravity connections to remain intact after undergoing large rotation and axial extension demands for which they were not designed. This study experimentally evaluates the performance of steel bolted angle connections subjected to loading consistent with an interior column removal. The characteristic connection behaviors are described, and the performance of multiple connection configurations is compared in terms of their peak resistances and deformation capacities. PMID:27110059
Kuster, Diederik W D; Merkus, Daphne; van der Velden, Jolanda; Verhoeven, Adrie J M; Duncker, Dirk J
2011-01-01
Since the completion of the Human Genome Project and the advent of the large-scale unbiased '-omics' techniques, the field of systems biology has emerged. Systems biology aims to move away from the traditional reductionist molecular approach, which focused on understanding the role of single genes or proteins, towards a more holistic approach of studying networks and interactions between individual components of networks. From a conceptual standpoint, systems biology elicits a 'back to the future' experience for any integrative physiologist. However, many of the new techniques and modalities employed by systems biologists yield tremendous potential for integrative physiologists to expand their tool arsenal to (quantitatively) study complex biological processes, such as cardiac remodelling and heart failure, in a truly holistic fashion. We therefore advocate that systems biology should not become/stay a separate discipline with '-omics' as its playing field, but should be integrated into physiology to create 'Integrative Physiology 2.0'. PMID:21224228
NASA Technical Reports Server (NTRS)
Singh, Mrityunjay
2010-01-01
Advanced ceramic integration technologies dramatically impact the energy landscape due to the wide-scale application of ceramics in all aspects of alternative energy production, storage, distribution, conservation, and efficiency. Examples include fuel cells, thermoelectrics, photovoltaics, gas turbine propulsion systems, distribution and transmission systems based on superconductors, and nuclear power generation and waste disposal. Ceramic integration technologies play a key role in the fabrication and manufacturing of large and complex-shaped parts with multifunctional properties. However, the development of robust and reliable integrated systems with optimum performance requires the understanding of many thermochemical and thermomechanical factors, particularly for high temperature applications. In this presentation, various needs, challenges, and opportunities in the design, fabrication, and testing of integrated similar (ceramic-ceramic) and dissimilar (ceramic-metal) material systems are discussed. Experimental results for bonding and integration of SiC-based Micro-Electro-Mechanical-Systems (MEMS) LDI fuel injectors and advanced ceramics and composites for gas turbine applications are presented.
Computer-aided design of large-scale integrated circuits - A concept
NASA Technical Reports Server (NTRS)
Schansman, T. T.
1971-01-01
Circuit design and the mask development sequence are improved by using a general-purpose computer with interactive graphics capability, establishing an efficient two-way communications link between the design engineer and the system. The interactive graphics capability places the design engineer in direct control of circuit development.
NASA Technical Reports Server (NTRS)
Pokhrel, Yadu N.; Hanasaki, Naota; Wada, Yoshihide; Kim, Hyungjun
2016-01-01
The global water cycle has been profoundly affected by human land-water management. As the changes in the water cycle on land can affect the functioning of a wide range of biophysical and biogeochemical processes of the Earth system, it is essential to represent human land-water management in Earth system models (ESMs). During the recent past, noteworthy progress has been made in large-scale modeling of human impacts on the water cycle, but sufficient advancements have not yet been made in integrating the newly developed schemes into ESMs. This study reviews the progress made in incorporating human factors in large-scale hydrological models and their integration into ESMs. The study focuses primarily on the recent advancements and existing challenges in incorporating human impacts in global land surface models (LSMs) as a way forward to the development of ESMs with humans as integral components, but a brief review of global hydrological models (GHMs) is also provided. The study begins with a general overview of human impacts on the water cycle. Then, the algorithms currently employed to represent irrigation, reservoir operation, and groundwater pumping are discussed. Next, methodological deficiencies in current modeling approaches and existing challenges are identified. Furthermore, light is shed on the sources of uncertainties associated with model parameterizations, grid resolution, and datasets used for forcing and validation. Finally, representing human land-water management in LSMs is highlighted as an important research direction toward developing integrated models using ESM frameworks for the holistic study of human-water interactions within the Earth system.
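As a flavor of the reservoir-operation algorithms mentioned above, here is a minimal demand-driven release rule in Python. The rule and all parameters are illustrative assumptions; operational schemes in LSMs (e.g., Hanasaki-type rules) additionally condition releases on inflow regime and reservoir purpose:

```python
# Toy demand-driven reservoir operation rule (hypothetical parameters).
def reservoir_release(storage, capacity, inflow, demand,
                      dead_fraction=0.1, target_fraction=0.85):
    dead = dead_fraction * capacity          # storage kept unreleasable
    available = max(storage + inflow - dead, 0.0)
    release = min(demand, available)         # meet demand if water allows
    storage = storage + inflow - release
    spill = max(storage - target_fraction * capacity, 0.0)
    return storage - spill, release + spill  # new storage, total outflow

storage = 0.5e9  # m^3
for inflow, demand in [(2e7, 1.5e7), (5e6, 2e7), (1e7, 1e7)]:
    storage, outflow = reservoir_release(storage, 1e9, inflow, demand)
    print(f"storage={storage:.3e} m^3, outflow={outflow:.3e} m^3")
```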
eScience for molecular-scale simulations and the eMinerals project.
Salje, E K H; Artacho, E; Austen, K F; Bruin, R P; Calleja, M; Chappell, H F; Chiang, G-T; Dove, M T; Frame, I; Goodwin, A L; Kleese van Dam, K; Marmier, A; Parker, S C; Pruneda, J M; Todorov, I T; Trachenko, K; Tyer, R P; Walker, A M; White, T O H
2009-03-13
We review the work carried out within the eMinerals project to develop eScience solutions that facilitate a new generation of molecular-scale simulation work. Technological developments include integration of compute and data systems, developing of collaborative frameworks and new researcher-friendly tools for grid job submission, XML data representation, information delivery, metadata harvesting and metadata management. A number of diverse science applications will illustrate how these tools are being used for large parameter-sweep studies, an emerging type of study for which the integration of computing, data and collaboration is essential.
NASA Technical Reports Server (NTRS)
Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.
1987-01-01
This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
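One concrete example of the data structures at issue: a uniform-grid spatial index for point retrieval, sketched in Python. This is an illustrative toy (production GIS typically use quadtrees or R-trees, and none of the names below come from the paper):

```python
from collections import defaultdict

class GridIndex:
    """Minimal uniform-grid index for spatially-indexed point data."""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, item):
        self.cells[self._key(x, y)].append((x, y, item))

    def query(self, xmin, ymin, xmax, ymax):
        """Yield items whose coordinates fall in the rectangle."""
        k0, k1 = self._key(xmin, ymin), self._key(xmax, ymax)
        for i in range(k0[0], k1[0] + 1):
            for j in range(k0[1], k1[1] + 1):
                for x, y, item in self.cells[(i, j)]:
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        yield item

idx = GridIndex(cell=10.0)
idx.insert(3.2, 4.1, "well A")
idx.insert(55.0, 40.0, "gauge B")
print(list(idx.query(0, 0, 10, 10)))   # -> ['well A']
```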
Research on unit commitment with large-scale wind power connected power system
NASA Astrophysics Data System (ADS)
Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing
2017-01-01
Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to its stochastic volatility. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics of the approaches are summarized after classifying them according to their objective functions and constraints. Finally, the issues still to be solved and possible directions of future research and development are discussed, which can adapt to the requirements of the electricity market, energy-saving generation dispatch and the smart grid, and provide a reference for researchers and practitioners in this field.
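To ground the problem class, here is a toy single-period unit commitment with wind netted off demand, solved by enumeration in Python. Unit data are invented for illustration; realistic formulations are multi-period mixed-integer programs with ramping, reserve, and stochastic wind scenarios:

```python
from itertools import product

units = [  # (p_min, p_max, marginal cost per MWh, fixed cost)
    (50, 200, 20.0, 500.0),
    (20, 100, 35.0, 200.0),
    (10,  60, 60.0,  50.0),
]
demand, wind = 250.0, 80.0         # MW; wind treated as must-take
net_load = max(demand - wind, 0.0)

best = None
for on in product([0, 1], repeat=len(units)):
    lo = sum(u[0] for u, s in zip(units, on) if s)
    hi = sum(u[1] for u, s in zip(units, on) if s)
    if not (lo <= net_load <= hi):
        continue                    # committed units cannot meet net load
    # Fixed costs plus minimum generation, then fill in merit order.
    rem = net_load - lo
    cost = sum(u[3] + u[2] * u[0] for u, s in zip(units, on) if s)
    for u, s in sorted(zip(units, on), key=lambda z: z[0][2]):
        if s:
            take = min(rem, u[1] - u[0])
            cost += u[2] * take
            rem -= take
    if best is None or cost < best[0]:
        best = (cost, on)

print("commitment:", best[1], "cost:", round(best[0], 1))
```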
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA), with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
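For context, the quantity being computed has the standard imaginary-frequency representation (textbook RI-RPA form; the paper's specific minimax nodes and attenuated metric are not reproduced here):

\[
E_c^{\mathrm{RPA}} \;=\; \frac{1}{2\pi}\int_0^{\infty} \mathrm{Tr}\!\left[\ln\!\big(\mathbf{1}-\boldsymbol{\Pi}_0(i\omega)\big)+\boldsymbol{\Pi}_0(i\omega)\right]\mathrm{d}\omega \;\approx\; \frac{1}{2\pi}\sum_{k} w_k\,\mathrm{Tr}\!\left[\ln\!\big(\mathbf{1}-\boldsymbol{\Pi}_0(i\omega_k)\big)+\boldsymbol{\Pi}_0(i\omega_k)\right],
\]

where \(\boldsymbol{\Pi}_0(i\omega)\) is the non-interacting response represented in the RI auxiliary basis and \((\omega_k, w_k)\) are quadrature nodes and weights; optimized minimax quadratures keep the number of frequency points small at fixed accuracy.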
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
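To convey the procedural-layout idea concretely, here is a sketch in Python of geometry emitted by parameterized functions rather than drawn by hand. Cell and layer names are hypothetical; the project's actual imbedded language is not reproduced:

```python
def inverter(x0, y0, width):
    """Return rectangles (layer, x, y, w, h) for a toy inverter cell."""
    return [
        ("diff",  x0,     y0,     width, 4),   # diffusion region
        ("poly",  x0 + 2, y0 - 2, 2,     8),   # gate polysilicon
        ("metal", x0,     y0 + 6, width, 2),   # output wiring
    ]

def inverter_row(n, pitch=10, width=6):
    """Procedural replication: n inverters placed on a fixed pitch."""
    rects = []
    for i in range(n):
        rects += inverter(i * pitch, 0, width)
    return rects

for rect in inverter_row(3):
    print(rect)
```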
Professionalism, bureaucracy and patriotism: the VA as a health care megasystem.
Rosenheck, R
The Veterans Administration supports the largest integrated psychiatry service in the country. As our oldest and largest "megasystem," this service offers a unique opportunity for examining distinctive features of such large health care delivery systems. Characteristic experiences of mental health professionals in this system are described and the system is analyzed in terms of its organizational tasks, structure and cultures. Psychiatry will be practiced, in the future, in similarly large scale organizations. Understanding the nature and workings of such organizations is likely to become essential to effective and satisfying professional work.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for multidisciplinary optimization and identification problems for nonlinear models of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, as in the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
YF-12 cooperative airframe/propulsion control system program, volume 1
NASA Technical Reports Server (NTRS)
Anderson, D. L.; Connolly, G. F.; Mauro, F. M.; Reukauf, P. J.; Marks, R. (Editor)
1980-01-01
Several YF-12C airplane analog control systems were converted to a digital system. Included were the air data computer, autopilot, inlet control system, and autothrottle systems. This conversion was performed to allow assessment of digital technology applications to supersonic cruise aircraft. The digital system was composed of a digital computer and specialized interface unit. A large scale mathematical simulation of the airplane was used for integration testing and software checkout.
Commercial-scale biotherapeutics manufacturing facility for plant-made pharmaceuticals.
Holtz, Barry R; Berquist, Brian R; Bennett, Lindsay D; Kommineni, Vally J M; Munigunti, Ranjith K; White, Earl L; Wilkerson, Don C; Wong, Kah-Yat I; Ly, Lan H; Marcel, Sylvain
2015-10-01
Rapid, large-scale manufacture of medical countermeasures can be uniquely met by the plant-made-pharmaceutical platform technology. As a participant in the Defense Advanced Research Projects Agency (DARPA) Blue Angel project, the Caliber Biotherapeutics facility was designed, constructed, commissioned and released a therapeutic target (H1N1 influenza subunit vaccine) in <18 months from groundbreaking. As of 2015, this facility was one of the world's largest plant-based manufacturing facilities, with the capacity to process over 3500 kg of plant biomass per week in an automated multilevel growing environment using proprietary LED lighting. The facility can commission additional plant grow rooms that are already built to double this capacity. In addition to the commercial-scale manufacturing facility, a pilot production facility was designed based on the large-scale manufacturing specifications as a way to integrate product development and technology transfer. The primary research, development and manufacturing system employs vacuum-infiltrated Nicotiana benthamiana plants grown in a fully contained, hydroponic system for transient expression of recombinant proteins. This expression platform has been linked to a downstream process system, analytical characterization, and assessment of biological activity. This integrated approach has demonstrated rapid, high-quality production of therapeutic monoclonal antibody targets, including a panel of rituximab biosimilar/biobetter molecules and antiviral antibodies against influenza and dengue fever. © 2015 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of BECCS outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: 1) via capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation, 2) via thermochemical co-conversion of biomass and fossil fuels, particularly coal, and 3) via dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, primarily technical barriers are involved in large-scale biomass logistics, gasification and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway where energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.
Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology.
Fondi, Marco; Liò, Pietro
2015-02-01
Integrated -omics approaches are quickly spreading across microbiology research labs, leading to (i) the possibility of detecting previously hidden features of microbial cells, like multi-scale spatial organization, and (ii) tracing molecular components across multiple cellular functional states. This promises to reduce the knowledge gap between genotype and phenotype and poses new challenges for computational microbiologists. We underline how the capability to unravel the complexity of microbial life will strongly depend on the integration of the huge and diverse amount of information that can be derived today from -omics experiments. In this work, we present opportunities and challenges of multi-omics data integration in current systems biology pipelines. We discuss which layers of biological information are important for biotechnological and clinical purposes, with a special focus on bacterial metabolism and modelling procedures. A general review of the most recent computational tools for performing large-scale dataset integration is also presented, together with a possible framework to guide the design of systems biology experiments by microbiologists. Copyright © 2015. Published by Elsevier GmbH.
Transforming Power Systems; 21st Century Power Partnership
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-05-20
The 21st Century Power Partnership - a multilateral effort of the Clean Energy Ministerial - serves as a platform for public-private collaboration to advance integrated solutions for the large-scale deployment of renewable energy in combination with deep energy efficiency and smart grid solutions.
Energy Systems Integration Partnerships: NREL + Giner
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-22
This fact sheet highlights work done at the ESIF in partnership with Giner. Giner, a developer of proton-exchange membrane (PEM) technologies, has contracted with NREL to validate the performance of its large-scale PEM electrolyzer stacks. PEM electrolyzers work much like fuel cells run in reverse.
Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.
Demchak, Barry; Krüger, Ingolf
2012-07-01
The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
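The decision-point pattern described above can be illustrated with a short sketch. All names here (PolicyRegistry, decision_point, approve_request) are illustrative inventions, not the paper's API; the sketch only shows how independently contributed policies can be composed into an existing workflow at runtime.

# A minimal sketch of runtime policy injection, loosely in the spirit of
# Policy Driven Development. All names are hypothetical.

class PolicyRegistry:
    """Holds policies that can be attached to named decision points at runtime."""
    def __init__(self):
        self._policies = {}  # decision point name -> list of policy callables

    def attach(self, point_name, policy):
        self._policies.setdefault(point_name, []).append(policy)

    def evaluate(self, point_name, context):
        # Every attached policy must permit the action (conjunctive composition).
        return all(policy(context) for policy in self._policies.get(point_name, []))

registry = PolicyRegistry()

def decision_point(name):
    """Decorator that injects a named policy decision point into a workflow."""
    def wrap(workflow):
        def guarded(context):
            if not registry.evaluate(name, context):
                raise PermissionError(f"policy denied at decision point '{name}'")
            return workflow(context)
        return guarded
    return wrap

@decision_point("access_control")
def approve_request(context):
    return f"request {context['id']} approved"

# Stakeholder groups contribute policies independently, after deployment:
registry.attach("access_control", lambda ctx: ctx["role"] == "admin")
print(approve_request({"id": 42, "role": "admin"}))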
Auxiliary basis expansions for large-scale electronic structure calculations.
Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin
2005-05-10
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
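For readers unfamiliar with density fitting, the Coulomb-metric fitting equations the abstract refers to take the standard form (our notation, not reproduced from the paper):

\[
(\mu\nu|\lambda\sigma) \approx \sum_{PQ} (\mu\nu|P)\,[\mathbf{V}^{-1}]_{PQ}\,(Q|\lambda\sigma),
\qquad
V_{PQ} = \iint \chi_P(\mathbf{r}_1)\,\frac{1}{r_{12}}\,\chi_Q(\mathbf{r}_2)\,d\mathbf{r}_1\,d\mathbf{r}_2 .
\]

An attenuated metric replaces the $1/r_{12}$ kernel in $V_{PQ}$ with a short-range form such as $\operatorname{erfc}(\omega r_{12})/r_{12}$, which is what truncates the slow $\sim 1/r$ tail of the fit coefficients discussed above.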
Rapid underway profiling of water quality in Queensland estuaries.
Hodge, Jonathan; Longstaff, Ben; Steven, Andy; Thornton, Phillip; Ellis, Peter; McKelvie, Ian
2005-01-01
We present an overview of a portable underway water quality monitoring system (RUM - Rapid Underway Monitoring), developed by integrating several off-the-shelf water quality instruments to provide rapid, comprehensive, and spatially referenced 'snapshots' of water quality conditions. We demonstrate the utility of the system from studies in the Northern Great Barrier Reef (Daintree River) and the Moreton Bay region. The Brisbane dataset highlights RUM's utility in characterising plumes as well as its ability to identify the smaller-scale structure of large areas. RUM is shown to be particularly useful when measuring indicators with large small-scale variability, such as turbidity and chlorophyll-a. Additionally, the Daintree dataset shows the ability to integrate other technologies, resulting in a more comprehensive analysis, whilst offshore sampling highlights some of the analytical issues encountered when sampling low-concentration waters. RUM is a low-cost, highly flexible solution that can be modified for use in any water type, on most vessels, and is limited only by the available monitoring technologies.
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance the end-users' interactions.
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
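As a concrete illustration of the scaling argument (our notation, not the paper's), the electrostatic case reduces to sums of the form

\[
\phi(\mathbf{r}_i) = \sum_{j=1}^{N} \frac{q_j}{4\pi\varepsilon_0\,\lvert\mathbf{r}_i - \mathbf{r}_j\rvert}, \qquad i = 1,\dots,N,
\]

whose direct evaluation costs O(N²) kernel evaluations; a kernel-independent FMM approximates the contribution of well-separated source clusters through equivalent densities, reducing the total work to O(N).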
A monolithically integrated polarization entangled photon pair source on a silicon chip
Matsuda, Nobuyuki; Le Jeannic, Hanna; Fukuda, Hiroshi; Tsuchizawa, Tai; Munro, William John; Shimizu, Kaoru; Yamada, Koji; Tokura, Yasuhiro; Takesue, Hiroki
2012-01-01
Integrated photonic circuits are one of the most promising platforms for large-scale photonic quantum information systems due to their small physical size and stable interferometers with near-perfect lateral-mode overlaps. Since many quantum information protocols are based on qubits defined by the polarization of photons, we must develop integrated building blocks to generate, manipulate, and measure the polarization-encoded quantum state on a chip. The generation unit is particularly important. Here we show the first integrated polarization-entangled photon pair source on a chip. We have implemented the source as a simple and stable silicon-on-insulator photonic circuit that generates an entangled state with 91 ± 2% fidelity. The source is equipped with versatile interfaces for silica-on-silicon or other types of waveguide platforms that accommodate the polarization manipulation and projection devices as well as pump light sources. Therefore, we are ready for the full-scale implementation of photonic quantum information systems on a chip. PMID:23150781
Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.
Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees
2016-02-01
As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.
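A deliberately crude toy of wave propagation through a chain of binary-state pools is sketched below. This is not the authors' reduced model (which is recurrent and regulated by collateral noise); the coupling rule and parameters are invented purely to show the binary-pool abstraction.

import random

# Toy synfire-wave propagation through binary pools (illustrative only).
n_pools, steps = 20, 30
strength = [random.uniform(0.6, 1.0) for _ in range(n_pools)]  # link strengths
active = [False] * n_pools
active[0] = True  # ignite a wave at the first pool

for t in range(steps):
    nxt = [False] * n_pools
    for i in range(n_pools - 1):
        # The wave advances along a link with probability given by its strength.
        if active[i] and random.random() < strength[i]:
            nxt[i + 1] = True
    active = nxt
    print(f"t={t:2d}", "".join("#" if a else "." for a in active))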
Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu
2016-01-01
Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large-scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large-scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large-scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, lfSSBRs were evoked in the triple-network system and the sensory-motor system, indicating that large-scale networks can be modulated in a frequency tagging way. Furthermore, inter- and intranetwork synchronization and coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much more strongly than they distinguish attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it paves a new way to investigate frequency-specific large-scale brain activities. © 2015 Wiley Periodicals, Inc.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.
Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations
NASA Astrophysics Data System (ADS)
Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean
2017-10-01
Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
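The Schur-complement structure underlying such block preconditioners can be written compactly for a generic 2x2 block system (standard notation, not the specific operators of this work):

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & A^{-1} B \\ 0 & I \end{pmatrix},
\qquad
S = D - C A^{-1} B .
\]

A practical preconditioner replaces $A^{-1}$ and the Schur complement $S$ with inexpensive approximations; here, multilevel methods supply the subsolves on the block diagonal.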
Quantum chaos inside black holes
NASA Astrophysics Data System (ADS)
Addazi, Andrea
2017-06-01
We show how semiclassical black holes can be reinterpreted as an effective geometry, composed of a large ensemble of horizonless naked singularities (possibly smoothed at the Planck scale). We call these new items frizzy-balls, which can be rigorously defined by a Euclidean path integral approach. This leads to interesting implications for information paradoxes. We demonstrate that infalling information will chaotically propagate inside this system before reaching the full quantum gravity regime (Planck scale).
The CELSS breadboard project: Plant production
NASA Technical Reports Server (NTRS)
Knott, William M.
1990-01-01
NASA's Breadboard Project for the Controlled Ecological Life Support System (CELSS) program is described. A simplified schematic of a CELSS is given. A modular approach is taken to building the CELSS Breadboard: each module is researched in order to develop a data set for it prior to its integration into the complete system. The data being obtained from the Biomass Production Module, or Biomass Production Chamber, are examined. The other primary modules, food processing and resource recovery or waste management, are discussed briefly. The crew habitat module is not discussed. The primary goal of the Breadboard Project is to scale up research data to an integrated system capable of supporting one person, in order to establish feasibility for the development and operation of a CELSS. Breadboard is NASA's first attempt at developing a large-scale CELSS.
Organic field effect transistor with ultra high amplification
NASA Astrophysics Data System (ADS)
Torricelli, Fabrizio
2016-09-01
High-gain transistors are essential for large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils enabling unipolar amplifiers with ultra-high gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.
Power generation in random diode arrays
NASA Astrophysics Data System (ADS)
Shvydka, Diana; Karpov, V. G.
2005-03-01
We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across a RDA has only a minor effect on its integral characteristics, which are determined by the RDA parameter statistics. At the same time, the dispersion of integral characteristics vs. system size exhibits a nontrivial scaling dependence. Our theoretical interpretation here remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
The scientific data acquisition system of the GAMMA-400 space project
NASA Astrophysics Data System (ADS)
Bobkov, S. G.; Serdin, O. V.; Gorbunov, M. S.; Arkhangelskiy, A. I.; Topchiev, N. P.
2016-02-01
A description of the scientific data acquisition system (SDAS) designed by SRISA for the GAMMA-400 space project is presented. We consider the problem of unifying electronics at different levels: the set of reliable, fault-tolerant integrated circuits fabricated in Silicon-on-Insulator 0.25 μm CMOS technology, and the high-speed interfaces and reliable modules used in the space instruments. The characteristics of the reliable, fault-tolerant very large scale integration (VLSI) technology designed by SRISA for developing computation systems for space applications are considered. The scalable network structure of the SDAS, based on the Serial RapidIO interface and including the real-time operating system BAGET, is also described.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.
Enterprise PACS and image distribution.
Huang, H K
2003-01-01
Around the world, driven by the need to improve operational efficiency and deliver more cost-effective healthcare, many large-scale healthcare enterprises have been formed. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of using PACS and image distribution as a key technology in cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to provide readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution, its characteristics and ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One system currently under development, an enterprise-level chest tuberculosis (TB) screening system in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.
USDA-ARS?s Scientific Manuscript database
Tomato Functional Genomics Database (TFGD; http://ted.bti.cornell.edu) provides a comprehensive systems biology resource to store, mine, analyze, visualize and integrate large-scale tomato functional genomics datasets. The database is expanded from the previously described Tomato Expression Database...
Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi
2018-06-18
Although activation/deactivation of specific brain regions has been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into periods of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar has high resolution, strong anti-interference ability and good concealment, making it a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's response power. Secondly, a large-scale LRCS measurement system is designed according to this demonstration and estimation. The system mainly consists of a laser shaping and beam emitting device, a laser echo receiving device and an integrated control device. Finally, using the designed lidar cross section measurement system, the scattering cross section of the target is simulated and tested. The simulation results are basically the same as the test results, proving the correctness of the system.
A model for decentralised grey wastewater treatment system in Singapore public housing.
Lim, J; Jern, Ng Wun; Chew, K L; Kallianpur, V
2002-01-01
Global concerns over the sustainable use of natural resources provided the impetus for research into water reclamation from wastewater within the Singapore context. The objective of the research is to study and develop a water infrastructure system as an integral element of architecture and the urbanscape, thereby reducing the large area requirements associated with centralised treatment plants. Decentralised plants were considered so as to break up the large contiguous plot of land otherwise needed into smaller integrated fragments, which can be incorporated within the housing scheme. This liberated more usable space on the ground plane of the urban housing master plan, enabling water-edge and waterscape relationships within both the private and public domains of varying scale.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications); this filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
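A subscription-based event filter of the kind described can be sketched in a few lines. All names below (Event, FilterEngine, the consumer labels) are illustrative, not components of the actual architecture; the point is only that filtering near the source forwards each consumer just the events its predicates match.

# Minimal sketch of a subscription-based event filter for a monitoring pipeline.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    source: str
    kind: str
    payload: dict

@dataclass
class FilterEngine:
    # Each consumer registers predicates; only matching events are forwarded.
    subscriptions: Dict[str, List[Callable[[Event], bool]]] = field(default_factory=dict)

    def subscribe(self, consumer: str, predicate: Callable[[Event], bool]) -> None:
        self.subscriptions.setdefault(consumer, []).append(predicate)

    def dispatch(self, event: Event) -> Dict[str, Event]:
        # Filtering at dispatch time reduces event traffic to each endpoint.
        return {consumer: event
                for consumer, preds in self.subscriptions.items()
                if any(p(event) for p in preds)}

engine = FilterEngine()
engine.subscribe("debugger", lambda e: e.kind == "error")
engine.subscribe("load_monitor", lambda e: e.kind == "load" and e.payload.get("cpu", 0) > 0.9)
print(engine.dispatch(Event("node7", "error", {"msg": "timeout"})))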
A Human Systems Integration Approach to Energy Efficiency in Ground Transportation
2015-12-01
[Extraction fragments only; the recoverable content indicates this report examines Caterpillar Corporation telematics systems as implemented at Granite Construction, a company with annual profits over 250 million dollars that, like the USMC, handles both large- and small-scale projects. Figure-list entries include "Granite Construction Organizational Structure" and "Figure 7. A Comparison of USMC Structure to Granite Construction".]
Interdisciplinary Team Science in Cell Biology.
Horwitz, Rick
2016-11-01
The cell is complex. With its multitude of components, spatial-temporal character, and gene expression diversity, it is challenging to comprehend the cell as an integrated system and to develop models that predict its behaviors. I suggest an approach to address this issue, involving system level data analysis, large scale team science, and philanthropy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Strategies for Validation Testing of Ground Systems
NASA Technical Reports Server (NTRS)
Annis, Tammy; Sowards, Stephanie
2009-01-01
In order to accomplish the full Vision for Space Exploration announced by former President George W. Bush in 2004, NASA will have to develop a new space transportation system and supporting infrastructure. The main portion of this supporting infrastructure will reside at the Kennedy Space Center (KSC) in Florida and will either be newly developed or a modification of existing vehicle processing and launch facilities, including Ground Support Equipment (GSE). This type of large-scale launch site development is unprecedented since the time of the Apollo Program. In order to accomplish this successfully within the limited budget and schedule constraints, a combination of traditional and innovative strategies for Verification and Validation (V&V) has been developed. The core of these strategies consists of a building-block approach to V&V, starting with component V&V and ending with a comprehensive end-to-end validation test of the complete launch site, called a Ground Element Integration Test (GEIT). This paper will outline these strategies and provide the high-level planning for meeting the challenges of implementing V&V on a large-scale development program. KEY WORDS: Systems, Elements, Subsystem, Integration Test, Ground Systems, Ground Support Equipment, Component, End Item, Test and Verification Requirements (TVR), Verification Requirements (VR)
NASA Astrophysics Data System (ADS)
Maxwell, R. M.; Condon, L. E.; Kollet, S. J.
2013-12-01
Groundwater is an important component of the hydrologic cycle, yet it is often overlooked. Aquifers are a critical water resource, particularly for irrigation, but also participate in moderating the land-energy balance over the so-called critical zone of 2-10 m water table depth. Yet the scaling behavior of groundwater is not well known. Here, we present the results of a fully-integrated hydrologic model run over a 6.3 million km² domain that covers much of North America, focused on the continental United States. This model encompasses both the Mississippi and Colorado River watersheds in their entirety at 1 km resolution and is constructed using the fully-integrated groundwater-vadose zone-surface water-land surface model ParFlow. Results from this work are compared to observations (of both surface water flow and groundwater depth) and approaches are presented for observing these integrated systems. Furthermore, results are used to understand the scaling behavior of groundwater over the continent at high resolution. Implications for understanding dominant hydrological processes at large scales will be discussed.
Chip-scale integrated optical interconnects: a key enabler for future high-performance computing
NASA Astrophysics Data System (ADS)
Haney, Michael; Nair, Rohit; Gu, Tian
2012-01-01
High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/bit chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-
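As a quick sanity check on these figures (our arithmetic, not the paper's): an efficiency of 50 GFLOPS/W corresponds to an energy budget of

\[
\frac{1\ \text{W}}{50 \times 10^{9}\ \text{FLOPS}} = 20\ \text{pJ per floating-point operation},
\]

so a photonic link at roughly 1 pJ/bit leaves most of the per-operation energy budget for computation even when several bits of operand traffic accompany each operation.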
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eranki, Pragnya L.; Manowitz, David H.; Bals, Bryan D.
An array of feedstocks is being evaluated as potential raw material for cellulosic biofuel production. Thorough assessments are required in regional landscape settings before these feedstocks can be cultivated and sustainable management practices can be implemented. On the processing side, a potential solution to the logistical challenges of large biorefineries is provided by a network of distributed processing facilities called local biomass processing depots. A large-scale cellulosic ethanol industry is likely to emerge soon in the United States. We have the opportunity to influence the sustainability of this emerging industry. The watershed-scale optimized and rearranged landscape design (WORLD) model estimates land allocations for different cellulosic feedstocks at biorefinery scale without displacing current animal nutrition requirements. This model also incorporates a network of the aforementioned depots. An integrated life cycle assessment is then conducted over the unified system of optimized feedstock production, processing, and associated transport operations to evaluate net energy yields (NEYs) and environmental impacts.
Park, Hyo Seon; Shin, Yunah; Choi, Se Woon; Kim, Yousok
2013-01-01
In this study, a practical and integrative SHM system was developed and applied to a large-scale irregular building under construction, where many challenging issues exist. In the proposed sensor network, customized energy-efficient wireless sensing units (sensor nodes, repeater nodes, and master nodes) were employed, and comprehensive communications from the sensor node to the remote monitoring server were conducted wirelessly. The long-term (13-month) monitoring results recorded from a large number of sensors (75 vibrating wire strain gauges, 10 inclinometers, and three laser displacement sensors) indicated that the construction event exhibiting the largest influence on structural behavior was the removal of bents that had been temporarily installed to support the free end of the cantilevered members during their construction. The safety of each member could be confirmed based on the quantitative evaluation of each response. Furthermore, it was also confirmed that the relation between these responses (i.e., deflection, strain, and inclination) can provide information about the global behavior of structures induced by specific events. Analysis of the measurement results demonstrates that the proposed sensor network system is capable of automatic and real-time monitoring and can be applied and utilized for both the safety evaluation and precise implementation of buildings under construction. PMID:23860317
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms, to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronics-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter and, in addition, the Solid State Transformer (SST) limits the fault current at 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), an electronic circuit breaker, to open. The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.
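Pilot protection, as used here, compares measurements from the two ends of a protected segment over a communication channel. The sketch below shows the underlying current-differential comparison only; the function name, thresholds and structure are invented for illustration, not the thesis design values.

# Minimal sketch of current-differential pilot protection for one line segment.
def pilot_trip(i_sending: float, i_receiving: float,
               pickup: float = 0.2, bias: float = 0.1) -> bool:
    """Trip if the differential current exceeds a biased restraint threshold."""
    i_diff = abs(i_sending - i_receiving)        # current unaccounted for: fault infeed
    i_restraint = bias * (abs(i_sending) + abs(i_receiving)) / 2
    return i_diff > max(pickup, i_restraint)

# Normal load: currents at the two ends agree, no trip.
print(pilot_trip(1.00, 0.99))   # False
# Internal fault: current enters the segment but does not leave -> a trip
# signal is sent (wirelessly, in this design) to open the FIDs and block the SST.
print(pilot_trip(2.00, 0.40))   # True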
Large-scale thermal energy storage using sodium hydroxide /NaOH/
NASA Technical Reports Server (NTRS)
Turner, R. H.; Truscello, V. C.
1977-01-01
A technique employing NaOH phase change material for large-scale thermal energy storage up to 900 F (482 C) is described; the concept consists of a 12-foot diameter by 60-foot long cylindrical steel shell with closely spaced internal tubes, similar to a shell-and-tube heat exchanger. The NaOH heat storage medium fills the space between the tubes and the outer shell. To charge the system, superheated steam flowing through the tubes melts and raises the temperature of the NaOH; for discharge, pressurized water flows through the same tube bundle. A technique for system design and cost estimation is shown. General technical and economic properties of the storage unit integrated into a solar power plant are discussed.
NASA Astrophysics Data System (ADS)
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produced measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus improve the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
Freeman, Mary C.; Pringle, C.M.; Jackson, C.R.
2007-01-01
Cumulatively, headwater streams contribute to maintaining hydrologic connectivity and ecosystem integrity at regional scales. Hydrologic connectivity is the water-mediated transport of matter, energy and organisms within or between elements of the hydrologic cycle. Headwater streams compose over two-thirds of total stream length in a typical river drainage and directly connect the upland and riparian landscape to the rest of the stream ecosystem. Altering headwater streams, e.g., by channelization, diversion through pipes, impoundment and burial, modifies fluxes between uplands and downstream river segments and eliminates distinctive habitats. The large-scale ecological effects of altering headwaters are amplified by land uses that alter runoff and nutrient loads to streams, and by widespread dam construction on larger rivers (which frequently leaves free-flowing upstream portions of river systems essential to sustaining aquatic biodiversity). We discuss three examples of large-scale consequences of cumulative headwater alteration. Downstream eutrophication and coastal hypoxia result, in part, from agricultural practices that alter headwaters and wetlands while increasing nutrient runoff. Extensive headwater alteration is also expected to lower secondary productivity of river systems by reducing stream-system length and trophic subsidies to downstream river segments, affecting aquatic communities and terrestrial wildlife that utilize aquatic resources. Reduced viability of freshwater biota may occur with cumulative headwater alteration, including for species that occupy a range of stream sizes but for which headwater streams diversify the network of interconnected populations or enhance survival for particular life stages. Developing a more predictive understanding of ecological patterns that may emerge on regional scales as a result of headwater alterations will require studies focused on components and pathways that connect headwaters to river, coastal and terrestrial ecosystems. Linkages between headwaters and downstream ecosystems cannot be discounted when addressing large-scale issues such as hypoxia in the Gulf of Mexico and global losses of biodiversity.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
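For reference, the Newmark family of integrators referred to above updates displacement and velocity as (standard form with parameters $\beta$ and $\gamma$; not reproduced from this paper):

\[
u_{n+1} = u_n + \Delta t\,\dot u_n + \Delta t^2\left[\left(\tfrac{1}{2}-\beta\right)\ddot u_n + \beta\,\ddot u_{n+1}\right],
\qquad
\dot u_{n+1} = \dot u_n + \Delta t\left[(1-\gamma)\,\ddot u_n + \gamma\,\ddot u_{n+1}\right].
\]

The implicit dependence on $\ddot u_{n+1}$ is what the fixed-number-of-iterations scheme resolves iteratively, whereas operator-splitting methods avoid iteration by combining an explicit predictor with an implicit corrector.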
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
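The recursive equations in question are the standard integral-image (summed-area table) recurrences; the sketch below shows the textbook serial formulation and constant-time rectangle lookup, not the paper's row-parallel decomposition.

# Standard recursive computation of an integral image:
#   I(x, y) = i(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1)
# The paper's contribution decomposes this serial recurrence so several values
# per row can be computed in parallel; that decomposition is not reproduced here.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                       # running sum along the row
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in [x0..x1] x [y0..y1] using four lookups (constant time)."""
    total = ii[y1][x1]
    if x0: total -= ii[y1][x0 - 1]
    if y0: total -= ii[y0 - 1][x1]
    if x0 and y0: total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28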
Computational singular perturbation analysis of stochastic chemical systems with stiffness
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; ...
2017-01-25
Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used to not only construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
NASA Technical Reports Server (NTRS)
Fisher, Scott S.
1986-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report
NASA Technical Reports Server (NTRS)
Schwartz, R. A.; Carvalho, P.; Cutler, M. J.
1986-01-01
In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction in addition to the evaluation of aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the (LAP) Prop-Fan pitch control system. The pitch control system which controls blade angle and propeller speed consists of two separate assemblies. The first is the control unit which provides the hydraulic supply, speed governing and feather function for the system. The second unit is the hydro-mechanical pitch change actuator which directly changes blade angle (pitch) as scheduled by the control.
RoboPIV: how robotics enable PIV on a large industrial scale
NASA Astrophysics Data System (ADS)
Michaux, F.; Mattern, P.; Kallweit, S.
2018-07-01
This work demonstrates how the interaction between particle image velocimetry (PIV) and robotics can massively increase measurement efficiency. The interdisciplinary approach is shown using the complex example of an automated, large scale, industrial environment: a typical automotive wind tunnel application. Both the high degree of flexibility in choosing the measurement region and the complete automation of stereo PIV measurements are presented. The setup consists of a combination of three robots, individually used as a 6D traversing unit for the laser illumination system as well as for each of the two cameras. Synchronised movements in the same reference frame are realised through a master-slave setup with a single interface to the user. By integrating the interface into the standard wind tunnel management system, a single measurement plane or a predefined sequence of several planes can be requested through a single trigger event, providing the resulting vector fields within minutes. In this paper, a brief overview of the demands of large scale industrial PIV and the existing solutions is given. Afterwards, the concept of RoboPIV is introduced as a new approach. In a first step, the usability of a selection of commercially available robot arms is analysed. The challenges of pose uncertainty and the importance of absolute accuracy are demonstrated through comparative measurements, explaining the individual pros and cons of the analysed systems. Subsequently, the advantage of integrating RoboPIV directly into the existing wind tunnel management system is shown on the basis of a typical measurement sequence. In a final step, a practical measurement procedure, including post-processing, is demonstrated using real data and results. Ultimately, the benefits of high automation are demonstrated, leading to a drastic reduction in necessary measurement time compared to non-automated systems, thus massively increasing the efficiency of PIV measurements.
A Rich Metadata Filesystem for Scientific Data
ERIC Educational Resources Information Center
Bui, Hoang
2012-01-01
As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…
Computers in Electrical Engineering Education at Virginia Polytechnic Institute.
ERIC Educational Resources Information Center
Bennett, A. Wayne
1982-01-01
Discusses use of computers in Electrical Engineering (EE) at Virginia Polytechnic Institute. Topics include: departmental background, level of computing power using large scale systems, mini and microcomputers, use of digital logic trainers and analog/hybrid computers, comments on integrating computers into EE curricula, and computer use in…
Implementing Technology: A Change Process
ERIC Educational Resources Information Center
Atwell, Nedra; Maxwell, Marge; Romero, Elizabeth
2008-01-01
The state of Kentucky has embarked upon a large scale systems change effort to integrate Universal Design for Learning (UDL) principles, including use of digital curriculum and computerized reading supports to improve overall student achievement. A major component of this initiative is the use of Read & Write Gold. As higher expectations are…
These technologies provide the basis for developing landscape composition and pattern indicators as sensitive measures of large-scale environmental change and thus may provide an effective and economical method for evaluating watershed condition related to disturbance from human an...
Integrating legacy data to understand agroecosystem regional dynamics to catastrophic events
USDA-ARS?s Scientific Manuscript database
Multi-year extreme drought events are part of the history of the Earth system. Legacy data on the climate drivers, geomorphic features, and agroecosystem responses across a dynamically changing landscape throughout a region can provide important insights to a future where large-scale catastrophic ev...
ERIC Educational Resources Information Center
Pavlu, Virgil
2008-01-01
Today, search engines are embedded into all aspects of digital world: in addition to Internet search, all operating systems have integrated search engines that respond even as you type, even over the network, even on cell phones; therefore the importance of their efficacy and efficiency cannot be overstated. There are many open possibilities for…
Development Of Autonomous Systems
NASA Astrophysics Data System (ADS)
Kanade, Takeo
1989-03-01
In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: the Navlab for the Autonomous Land Vehicle and the Ambler for the Mars Rover. These two systems are for different purposes: the Navlab is a four-wheeled vehicle (van) for road and open terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.
A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex
Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing
2015-01-01
We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for “temporal receptive windows” that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision-making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or EEG/MEG) by taking into account inter-areal heterogeneity. PMID:26439530
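The core mechanism can be illustrated with a linear rate unit whose recurrent strength sets its integration timescale (Python; the weights below are hypothetical, not the published connectivity data):

    def effective_timescale(tau=0.02, w_rec=0.0):
        # Linear rate unit: tau * dr/dt = -r + w_rec * r + input,
        # so the effective timescale is tau / (1 - w_rec).
        return tau / (1.0 - w_rec)

    for area, w in [("sensory", 0.3), ("association", 0.9), ("prefrontal", 0.99)]:
        print(f"{area}: w_rec={w}, timescale={effective_timescale(w_rec=w) * 1e3:.0f} ms")
    # sensory ~29 ms (brief, transient), association ~200 ms, prefrontal ~2000 ms (persistent)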
de la Torre, Andrea; Metivier, Aisha; Chu, Frances; ...
2015-11-25
Methane-utilizing bacteria (methanotrophs) are capable of growth on methane and are attractive systems for bio-catalysis. However, the application of natural methanotrophic strains to large-scale production of value-added chemicals/biofuels requires a number of physiological and genetic alterations. An accurate metabolic model coupled with flux balance analysis can provide a solid interpretative framework for experimental data analyses and integration.
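The flux balance analysis step reduces to a linear program over the stoichiometric constraints; a toy sketch (Python/SciPy; a three-reaction stand-in network, not the authors' genome-scale model):

    import numpy as np
    from scipy.optimize import linprog

    # Steady state S v = 0 for a chain: uptake (v1) -> conversion (v2) -> biomass (v3)
    S = np.array([[1, -1, 0],    # metabolite A: made by v1, consumed by v2
                  [0, 1, -1]])   # metabolite B: made by v2, consumed by v3
    bounds = [(0, 10), (0, 10), (0, 10)]  # flux capacity limits

    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)  # maximize v3
    print(res.x)  # [10, 10, 10]: all flux routed to biomass at the uptake limit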
Unlocking Flexibility: Energy Systems Integration [Guest Editorial
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Mark; Kroposki, Benjamin
2017-01-01
The articles in this special section focus on energy systems integration (ESI). Electric power systems around the world are experiencing great changes, including the retirement of coal and nuclear plants along with a rapid increase in the use of natural gas turbines and variable renewable technologies such as wind and solar. There is also much more use of information and communications technologies to enhance the visibility and controllability of the grid. Flexibility of operation, the ability of a power system to respond to change in demand and supply, is critical to enable higher levels of variable generation. One way to unlock this potential flexibility is to tap into other energy domains. This concept of interconnecting energy domains is called ESI. ESI is the process of coordinating the operation and planning of energy systems across multiple pathways and/or geographical scales to deliver reliable, cost-effective energy services with minimal impact on the environment. Integrating energy domains adds flexibility to the electrical power system. ESI includes interactions among energy vectors and with other large-scale infrastructures including water, transport, and data and communications networks, which are an enabling technology for ESI.
Linking crop yield anomalies to large-scale atmospheric circulation in Europe.
Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J
2017-06-15
Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: the North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability, especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than that of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
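The statistical models described amount to regressing detrended yield anomalies on seasonal circulation indices; a schematic sketch (Python/scikit-learn; synthetic placeholder data, not the study's):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_years = 30
    # Placeholder seasonal indices for the four modes (NAO, EA, SCA, EA-WR)
    X = rng.standard_normal((n_years, 4))
    # Synthetic yield anomalies driven mainly by the first two modes plus noise
    y = 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.3 * rng.standard_normal(n_years)

    model = LinearRegression().fit(X, y)
    print(f"explained inter-annual variability R^2 = {model.score(X, y):.2f}")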
Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk
2017-09-01
Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology in the agrofood, environment, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains (at least partially) unpredictable when shifting from laboratory-scale to industrial conditions. Robust microbial systems are thus highly needed in this context, as is a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of the cell lifelines in a heterogeneous environment. Ultimately, this framework can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with environmental fluctuations typically found in large-scale bioreactors. However, this framework still needs some refinements, such as a better integration of gas-liquid flows in CFD, and taking into account intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of novel features of the code.
Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts
NASA Astrophysics Data System (ADS)
Ravi, S.; Lobell, D. B.; Field, C. B.
2012-12-01
Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and availability of large areas unusable for crop production, are ideal locations for large scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use the moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements for large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted detailed life cycle analysis for these coupled agave biofuel - solar energy systems to explore the tradeoffs and synergies, in the context of energy input-output, water use and carbon emissions.
NASA Astrophysics Data System (ADS)
Widyaningrum, E.; Gorte, B. G. H.
2017-05-01
LiDAR data acquisition is recognized as one of the fastest solutions for providing base data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analysis. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed in terms of its completeness, correctness, and quality, based on the confusion matrix.
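The completeness, correctness and quality measures derived from a confusion matrix are standard in map accuracy assessment; a minimal sketch (Python):

    def map_quality(tp, fp, fn):
        completeness = tp / (tp + fn)       # share of reference objects detected
        correctness = tp / (tp + fp)        # share of detected objects that are real
        quality = tp / (tp + fp + fn)       # combined measure penalizing both error types
        return completeness, correctness, quality

    # e.g. 90 true positives, 10 false positives, 20 missed reference objects
    print(map_quality(90, 10, 20))  # approximately (0.818, 0.900, 0.750)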
Lightweight computational steering of very large scale molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beazley, D.M.; Lomdahl, P.S.
1996-09-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
A fast low-power optical memory based on coupled micro-ring lasers
NASA Astrophysics Data System (ADS)
Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.
2004-11-01
The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.
Path-integral Monte Carlo method for Rényi entanglement entropies.
Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
NASA Astrophysics Data System (ADS)
Terminanto, A.; Swantoro, H. A.; Hidayanto, A. N.
2017-12-01
Enterprise Resource Planning (ERP) is an integrated information system for managing the business processes of companies of various scales. Because of the high cost of ERP investment, ERP implementation is usually undertaken in large-scale enterprises. Due to the complexity of implementation problems, the success rate of ERP implementation is still low. Open Source System (OSS) ERP has become an alternative choice of ERP application for SME companies in terms of cost and customization. This study aims to identify the characteristics and configuration of an OSS ERP Payroll module implementation at KKPS (Employee Cooperative PT SRI) using the OSS ERP Odoo and the ASAP method. This study is classified as both case study research and action research. Implementation of the OSS ERP Payroll module is undertaken because the HR section of KKPS has not been integrated with other parts. The results of this study are the characteristics and configuration of the OSS ERP Payroll module at KKPS.
Integration experiences and performance studies of A COTS parallel archive systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
Integration experiments and performance studies of a COTS parallel archive system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-06-16
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.
Beyond Widgets -- Systems Incentive Programs for Utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Regnier, Cindy; Mathew, Paul; Robinson, Alastair
Utility incentive programs remain one of the most significant means of deploying commercialized, but underutilized building technologies to scale. However, these programs have been largely limited to component-based products (e.g., lamps, RTUs). While some utilities do provide ‘custom’ incentive programs with whole building and system level technical assistance, these programs require deeper levels of analysis, resulting in higher program costs. This results in custom programs being restricted to utilities with greater resources and typically being applied mainly to large or energy-intensive facilities, leaving much of the market without cost-effective access and incentives for these solutions. In addition, with increasingly stringent energy codes, cost-effective component-based solutions that achieve significant savings are dwindling. Building systems (e.g., integrated façade, HVAC and/or lighting solutions) can deliver higher savings that translate into large sector-wide savings if deployed at the scale of these programs. However, systems application poses a number of challenges – baseline energy use must be defined and measured; the metrics for energy and performance must be defined and tested against; in addition, system savings must be validated under well understood conditions. This paper presents a sample of findings of a project to develop validated utility incentive program packages for three specific integrated building systems, in collaboration with Xcel Energy (CO, MN), ComEd, and a consortium of California Public Owned Utilities (CA POUs) (Northern California Power Agency (NCPA) and the Southern California Public Power Authority (SCPPA)). Furthermore, these program packages consist of system specifications, system performance, M&V protocols, streamlined assessment methods, market assessment and implementation guidance.
Structural design of the Large Deployable Reflector (LDR)
NASA Technical Reports Server (NTRS)
Satter, Celeste M.; Lou, Michael C.
1991-01-01
An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.
Structural design of the Large Deployable Reflector (LDR)
NASA Astrophysics Data System (ADS)
Satter, Celeste M.; Lou, Michael C.
1991-09-01
An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.
Supporting Knowledge Transfer in IS Deployment Projects
NASA Astrophysics Data System (ADS)
Schönström, Mikael
To deploy new information systems is an expensive and complex task, and seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems and technology is needed in a large-scale deployment of a corporate IS. These deployments can therefore, to a large extent, be argued to be a knowledge management (KM) problem. An effective deployment requires that knowledge about the system is effectively transferred to the target organization (Ko et al. 2005).
LASSIM-A network inference toolbox for genome-wide mechanistic modeling.
Magnusson, Rasmus; Mariotti, Guido Pio; Köpsén, Mattias; Lövfors, William; Gawel, Danuta R; Jörnsten, Rebecka; Linde, Jörg; Nordling, Torbjörn E M; Nyman, Elin; Schulze, Sylvie; Nestor, Colm E; Zhang, Huan; Cedersund, Gunnar; Benson, Mikael; Tjärnberg, Andreas; Gustafsson, Mika
2017-06-01
Recent technological advancements have made time-resolved, quantitative, multi-omics data available for many model systems, which could be integrated for systems pharmacokinetic use. Here, we present large-scale simulation modeling (LASSIM), which is a novel mathematical tool for performing large-scale inference using mechanistically defined ordinary differential equations (ODE) for gene regulatory networks (GRNs). LASSIM integrates structural knowledge about regulatory interactions and non-linear equations with multiple steady state and dynamic response expression datasets. The rationale behind LASSIM is that biological GRNs can be simplified using a limited subset of core genes that are assumed to regulate all other gene transcription events in the network. The LASSIM method is implemented as a general-purpose toolbox using the PyGMO Python package to make the most of multicore computers and high performance clusters, and is available at https://gitlab.com/Gustafsson-lab/lassim. As a method, LASSIM works in two steps, where it first infers a non-linear ODE system of the pre-specified core gene expression. Second, LASSIM in parallel optimizes the parameters that model the regulation of peripheral genes by core system genes. We showed the usefulness of this method by applying LASSIM to infer a large-scale non-linear model of naïve Th2 cell differentiation, made possible by integrating Th2-specific bindings and time-series data together with six public and six novel siRNA-mediated knock-down experiments. ChIP-seq showed significant overlap for all tested transcription factors. Next, we performed novel time-series measurements of total T-cells during differentiation towards Th2 and verified that our LASSIM model could monitor those data significantly better than comparable models that used the same Th2 bindings. In summary, the LASSIM toolbox opens the door to a new type of model-based data analysis that combines the strengths of reliable mechanistic models with truly systems-level data. We demonstrate the power of this approach by inferring a mechanistically motivated, genome-wide model of the Th2 transcription regulatory system, which plays an important role in several immune related diseases.
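The first LASSIM step, fitting a small core ODE system to expression time series, can be sketched as a least-squares parameter estimation (Python/SciPy; a two-gene toy network, not the toolbox itself):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, x, k1, k2, d):
        # Toy core GRN: gene 1 is constitutively expressed; gene 1 represses gene 2
        return [k1 - d * x[0], k2 / (1.0 + x[0] ** 2) - d * x[1]]

    t_obs = np.linspace(0, 10, 20)
    true_params = (1.0, 2.0, 0.5)
    data = solve_ivp(rhs, (0, 10), [0, 0], t_eval=t_obs, args=true_params).y

    def residuals(p):
        sim = solve_ivp(rhs, (0, 10), [0, 0], t_eval=t_obs, args=tuple(p)).y
        return (sim - data).ravel()

    fit = least_squares(residuals, x0=[0.5, 1.0, 0.2], bounds=(0, 5))
    print(fit.x)  # recovers approximately (1.0, 2.0, 0.5)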
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richter, Tim; Slezak, Lee; Johnson, Chris
2008-12-31
The objective of this project is to reduce the fuel consumption of off-highway vehicles, specifically large tonnage mine haul trucks. A hybrid energy storage and management system will be added to a conventional diesel-electric truck that will allow capture of braking energy normally dissipated in grid resistors as heat. The captured energy will be used during acceleration and motoring, reducing the diesel engine load, thus conserving fuel. The project will work towards a system validation of the hybrid system by first selecting an energy storage subsystem and energy management subsystem. Laboratory testing at a subscale level will evaluate these selections, and then a full-scale laboratory test will be performed. After the subsystems have been proven at the full-scale lab, equipment will be mounted on a mine haul truck and integrated with the vehicle systems. The integrated hybrid components will be exercised to show functionality, capability, and fuel economy impacts in a mine setting.
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
Concepts for a global resources information system
NASA Technical Reports Server (NTRS)
Billingsley, F. C.; Urena, J. L.
1984-01-01
The objective of the Global Resources Information System (GRIS) is to establish an effective and efficient information management system to meet the data access requirements of NASA and NASA-related scientists conducting large-scale, multi-disciplinary, multi-mission scientific investigations. Using standard interfaces and operating guidelines, diverse data systems can be integrated to provide the capabilities to access and process multiple geographically dispersed data sets and to develop the necessary procedures and algorithms to derive global resource information.
Integrating scales of seagrass monitoring to meet conservation needs
Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.
2012-01-01
We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
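The resampling logic is simple to reproduce; a sketch (Python/NumPy; synthetic percent-cover data, not the study's measurements):

    import numpy as np

    rng = np.random.default_rng(1)
    cover = rng.uniform(0, 60, size=1000)   # synthetic % cover across candidate stations

    def bootstrap_se(n_stations, n_boot=2000):
        # Standard error of the bay-wide mean cover for a given number of stations
        means = [rng.choice(cover, size=n_stations, replace=False).mean()
                 for _ in range(n_boot)]
        return np.std(means)

    for n in (25, 55, 198):
        print(f"n={n}: s.e. of mean cover = {bootstrap_se(n):.2f}")
    # The standard error shrinks with n, setting the smallest detectable change in % cover.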
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
OUT Success Stories: Solar Hot Water Technology
DOE R&D Accomplishments Database
Clyne, R.
2000-08-01
Solar hot water technology has made great strides in the past two decades. Every home, commercial building, and industrial facility requires hot water. DOE has helped to develop reliable and durable solar hot water systems. For industrial applications, the growth potential lies in large-scale systems, using flat-plate and trough-type collectors. Flat-plate collectors are commonly used in residential hot water systems and can be integrated into the architectural design of the building.
Towards the computation of time-periodic inertial range dynamics
NASA Astrophysics Data System (ADS)
van Veen, L.; Vela-Martín, A.; Kawahara, G.
2018-04-01
We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.
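Periodic orbits of this kind are typically found by Newton-based shooting; a toy version on a small ODE with a known limit cycle (Python/SciPy; the paper's LES computation is vastly larger and GPU-bound):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    def rhs(t, u):
        x, y = u
        r2 = x * x + y * y
        return [x - y - x * r2, x + y - y * r2]  # Hopf normal form: cycle r=1, T=2*pi

    def shooting_residual(p):
        x0, T = p
        sol = solve_ivp(rhs, (0.0, T), [x0, 0.0], rtol=1e-10, atol=1e-12)
        xT, yT = sol.y[:, -1]
        return [xT - x0, yT]   # periodicity; the phase is fixed by y(0) = 0

    x0, T = fsolve(shooting_residual, [0.8, 6.0])
    print(x0, T)  # -> approximately 1.0 and 2*pi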
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
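A minimal implicit, iterative integrator of the general kind described (backward Euler with Newton iteration; Python/NumPy, illustrative only, not the HYPELA implementation):

    import numpy as np

    def backward_euler(f, jac, y0, t0, t1, n_steps, newton_iters=10, tol=1e-10):
        # Solve y_{k+1} = y_k + h * f(y_{k+1}) by Newton iteration at each step;
        # the implicit form stays stable for stiff systems even with large increments.
        h = (t1 - t0) / n_steps
        y = np.array(y0, dtype=float)
        for _ in range(n_steps):
            y_new = y.copy()
            for _ in range(newton_iters):
                r = y_new - y - h * f(y_new)             # nonlinear residual
                J = np.eye(len(y)) - h * jac(y_new)      # residual Jacobian
                dy = np.linalg.solve(J, -r)
                y_new += dy
                if np.linalg.norm(dy) < tol:
                    break
            y = y_new
        return y

    # Stiff test problem dy/dt = -1000*(y - 1), integrated with only 20 steps
    f = lambda y: -1000.0 * (y - 1.0)
    jac = lambda y: np.array([[-1000.0]])
    print(backward_euler(f, jac, [0.0], 0.0, 1.0, 20))  # -> approx [1.0]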
Consortium biology in immunology: the perspective from the Immunological Genome Project.
Benoist, Christophe; Lanier, Lewis; Merad, Miriam; Mathis, Diane
2012-10-01
Although the field has a long collaborative tradition, immunology has made less use than genetics of 'consortium biology', wherein groups of investigators together tackle large integrated questions or problems. However, immunology is naturally suited to large-scale integrative and systems-level approaches, owing to the multicellular and adaptive nature of the cells it encompasses. Here, we discuss the value and drawbacks of this organization of research, in the context of the long-running 'big science' debate, and consider the opportunities that may exist for the immunology community. We position this analysis in light of our own experience, both positive and negative, as participants of the Immunological Genome Project.
NASA Technical Reports Server (NTRS)
Chulya, A.; Walker, K. P.
1989-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large Scientific Equipment is controlled by Computer Systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose Multi-layer System.
NASA Technical Reports Server (NTRS)
Van Vonno, N. W.
1972-01-01
Development of an alternate approach to the conventional methods of reliability assurance for large-scale integrated circuits. The product treated is a large-scale T squared L array designed for space applications. The concept used is that of qualification of product by evaluation of the basic processing used in fabricating the product, providing an insight into its potential reliability. Test vehicles are described which enable evaluation of device characteristics, surface condition, and various parameters of the two-level metallization system used. Evaluation of these test vehicles is performed on a lot qualification basis, with the lot consisting of one wafer. Assembled test vehicles are evaluated by high temperature stress at 300 C for short time durations. Stressing at these temperatures provides a rapid method of evaluation and permits a go/no go decision to be made on the wafer lot in a timely fashion.
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems which can optimize both the product and the process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in the paper. Dendrite morphology of magnesium and aluminum alloys during solidification, simulated using phase field and cellular automaton methods; mathematical models of segregation in large steel ingots; and microstructure models of unidirectionally solidified turbine blade castings are studied and discussed. In addition, some engineering case studies, including microstructure simulation of aluminum castings for the automobile industry, segregation of large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry, are discussed.
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.
Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas
2017-01-01
Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
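The underlying building blocks are standard; a minimal topic-modeling sketch with nonnegative matrix factorization (Python/scikit-learn; toy corpus, not the TopicLens system, whose contribution is making such updates interactive):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    docs = ["solar power grid storage", "wind power grid integration",
            "neural network training data", "deep learning training models"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)                       # term-document matrix
    nmf = NMF(n_components=2, init="nndsvda", random_state=0)
    doc_topic = nmf.fit_transform(X)                  # documents in topic space

    terms = vec.get_feature_names_out()
    for k, comp in enumerate(nmf.components_):
        print(f"topic {k}:", [terms[i] for i in comp.argsort()[-3:]])
    # A 2D embedding (e.g. t-SNE) of doc_topic would then give the lens layout.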
Auxiliary basis expansions for large-scale electronic structure calculations
Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin
2005-01-01
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767
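Schematically, the fit reduces to a linear solve in the chosen metric; a toy sketch (Python/NumPy; random stand-in matrices rather than real two- and three-center integrals):

    import numpy as np

    rng = np.random.default_rng(0)
    n_aux = 6
    A = rng.standard_normal((n_aux, n_aux))
    J = A @ A.T + n_aux * np.eye(n_aux)   # stand-in metric matrix (P|Q), SPD
    b = rng.standard_normal(n_aux)        # stand-in three-center column (P|mu nu)

    c = np.linalg.solve(J, b)             # variationally optimal expansion coefficients
    # Four-center integrals are then approximated through these fitted coefficients;
    # an attenuated (short-range) metric makes J sparse and c fast-decaying.
    print(c)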
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
An Integrated Scale for Measuring an Organizational Learning System
ERIC Educational Resources Information Center
Jyothibabu, C.; Farooq, Ayesha; Pradhan, Bibhuti Bhusan
2010-01-01
Purpose: The purpose of this paper is to develop an integrated measurement scale for an organizational learning system by capturing the learning enablers, learning results and performance outcome in an organization. Design/methodology/approach: A new measurement scale was developed by integrating and modifying two existing scales, identified…
Knowledge-based reusable software synthesis system
NASA Technical Reports Server (NTRS)
Donaldson, Cammie
1989-01-01
The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.
Chip-scale sensor system integration for portable health monitoring.
Jokerst, Nan M; Brooke, Martin A; Cho, Sang-Yeon; Shang, Allan B
2007-12-01
The revolution in integrated circuits over the past 50 yr has produced inexpensive computing and communications systems that are powerful and portable. The technologies for these integrated chip-scale sensing systems, which will be miniature, lightweight, and portable, are emerging with the integration of sensors with electronics, optical systems, micromachines, microfluidics, and the integration of chemical and biological materials (soft/wet material integration with traditional dry/hard semiconductor materials). Hence, we stand at a threshold for health monitoring technology that promises to provide wearable biochemical sensing systems that are comfortable, inconspicuous, wireless, and battery-operated, yet that continuously monitor health status and can transmit compressed data signals at regular intervals, or alarm conditions immediately. In this paper, we explore recent results in chip-scale sensor integration technology for health monitoring. The development of inexpensive chip-scale biochemical optical sensors, such as microresonators, that are customizable for high sensitivity coupled with rapid prototyping will be discussed. Ground-breaking work in the integration of chip-scale optical systems to support these optical sensors will be highlighted, and the development of inexpensive Si complementary metal-oxide semiconductor circuitry (which makes up the vast majority of computational systems today) for signal processing and wireless communication with local receivers that lie directly on the chip-scale sensor head itself will be examined.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture... radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed... distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
Western Wind and Solar Integration Study Phase 3A: Low Levels of Synchronous Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Nicholas W.; Leonardi, Bruno; D'Aquila, Robert
The stability of the North American electric power grids under conditions of high penetrations of wind and solar is a significant concern and possible impediment to reaching renewable energy goals. The 33% wind and solar annual energy penetration considered in this study results in substantial changes to the characteristics of the bulk power system. This includes different power flow patterns, different commitment and dispatch of existing synchronous generation, and different dynamic behavior from wind and solar generation. The Western Wind and Solar Integration Study (WWSIS), sponsored by the U.S. Department of Energy, is one of the largest regional solar and wind integration studies to date. In multiple phases, it has explored different aspects of the question: Can we integrate large amounts of wind and solar energy into the electric power system of the West? The work reported here focused on the impact of low levels of synchronous generation on the transient stability performance in one part of the region in which wind generation has displaced synchronous thermal generation under highly stressed, weak system conditions. It is essentially an extension of WWSIS-3. Transient stability, the ability of the power system to maintain synchronism among all elements following disturbances, is a major constraint on operations in many grids, including the western U.S. and Texas systems. These constraints primarily concern the performance of the large-scale bulk power system. But grid-wide stability concerns with high penetrations of wind and solar are still not thoroughly understood. This work focuses on 'traditional' fundamental frequency stability issues, such as maintaining synchronism, frequency, and voltage. The objectives of this study are to better understand the implications of low levels of synchronous generation and a weak grid on overall system performance by: 1) Investigating the Western Interconnection under conditions of both high renewable generation (e.g., wind and solar) and low synchronous generation (e.g., significant coal power plant decommitment or retirement); and 2) Analyzing both the large-scale stability of the Western Interconnection and regional stability issues driven by more geographically dispersed renewable generation interacting with a transmission grid that evolved with large, central station plants at key nodes. As noted above, the work reported here is an extension of the research performed in WWSIS-3.
Sale, Martin V.; Lord, Anton; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B.
2015-01-01
Normal brain function depends on a dynamic balance between local specialization and large-scale integration. It remains unclear, however, how local changes in functionally specialized areas can influence integrated activity across larger brain networks. By combining transcranial magnetic stimulation with resting-state functional magnetic resonance imaging, we tested for changes in large-scale integration following the application of excitatory or inhibitory stimulation on the human motor cortex. After local inhibitory stimulation, regions encompassing the sensorimotor module concurrently increased their internal integration and decreased their communication with other modules of the brain. There were no such changes in modular dynamics following excitatory stimulation of the same area of motor cortex nor were there changes in the configuration and interactions between core brain hubs after excitatory or inhibitory stimulation of the same area. These results suggest the existence of selective mechanisms that integrate local changes in neural activity, while preserving ongoing communication between brain hubs. PMID:25717162
ERIC Educational Resources Information Center
Najm, Majdi R. Abou; Mohtar, Rabi H.; Cherkauer, Keith A.; French, Brian F.
2010-01-01
Proper understanding of scaling and large-scale hydrologic processes is often not explicitly incorporated in the teaching curriculum. This makes it difficult for students to connect the effect of small scale processes and properties (like soil texture and structure, aggregation, shrinkage, and cracking) on large scale hydrologic responses (like…
NASA Astrophysics Data System (ADS)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next generation of ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT-scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT-scale single-conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached, the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
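The quoted fourth-power growth in RTC cost is easy to quantify. The diameters below are illustrative stand-ins (the abstract names no specific pair of telescopes):

```python
# Back-of-envelope for the D^4 scaling of AO real-time control cost.
d_existing, d_elt = 8.0, 39.0          # metres; example values, not from the paper
print((d_elt / d_existing) ** 4)       # ~565x more RTC compute
```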
Fabrication of the HIAD Large-Scale Demonstration Assembly and Upcoming Mission Applications
NASA Technical Reports Server (NTRS)
Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; Dinonno, J. M.; Cheatwood, F. M.
2017-01-01
Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design that was constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated, and then integrated in early 2017. The design includes provisions to add the remaining four tori necessary to complete the assembly of the 12 m Human-Scale Pathfinder HIAD in the event future project funding becomes available. This presentation will discuss the HIAD large-scale demonstration assembly design and fabrication performed in the last year, including the precursor tori development and the partial-stack fabrication. Potential near-term and future 10-15 m HIAD applications will also be discussed.
ERIC Educational Resources Information Center
Richardson, Jayson W.; Sales, Gregory; Sentocnik, Sonja
2015-01-01
Integrating ICTs into international development projects is common. However, focusing on how ICTs support leading, teaching, and learning is often overlooked. This article describes a team's approach to technology integration into the design of a large-scale, five year, teacher and leader professional development project in the country of Georgia.…
On-chip synthesis of circularly polarized emission of light with integrated photonic circuits.
He, Li; Li, Mo
2014-05-01
The helicity of circularly polarized (CP) light plays an important role in the light-matter interaction in magnetic and quantum material systems. Exploiting CP light in integrated photonic circuits could lead to on-chip integration of novel optical helicity-dependent devices for applications ranging from spintronics to quantum optics. In this Letter, we demonstrate a silicon photonic circuit coupled with a 2D grating emitter operating at a telecom wavelength to synthesize vertically emitting CP light from a quasi-TE waveguide mode. The handedness of the emitted circularly polarized light can be thermally controlled with an integrated microheater. The compact device footprint enables a small beam diameter, which is desirable for large-scale integration.
NASA Astrophysics Data System (ADS)
Castro, Ayoze; Memè, Simone; Quevedo, Eduardo; Waldmann, Christoph; Pearlman, Jay; Delory, Eric; Llinás, Octavio
2017-04-01
NeXOS is a cross-functional and multidisciplinary project funded under the EU FP7 Program, which involves 21 organizations from six different European countries. They all have different backgrounds, interests, business models and perspectives. To be successful, NeXOS applied an internationally recognized management methodology tailored to the specific project's environment and conditions, with an explicit structure based on defined roles and responsibilities for the people involved in the project and a means for effective communication between them (Fig.1). The project, divided into four stages (requirements; design; integration; validation and demonstration), allows clearer monitoring of its progress, comparison of the level of achievement against the plan, and earlier detection of problems and issues, leading to implementation of less disruptive, but still effective, corrective actions. NeXOS is following an ambitious plan to develop innovative sensor systems with a high degree of modularity and interoperability, from requirements definition through the validation and demonstration phase. To make this integrative approach possible, a management development strategy incorporating systems engineering methods has been used (Fig.2). Although this is standard practice in software development and in large-scale systems such as aircraft production, it is still new in the ocean hardware business, and NeXOS was therefore a test case for this development concept. The question is one of scale, as ocean observation systems are typically built in small numbers by co-located teams. With a system of diverse technologies (optical, acoustic, platform interfaces), there are cultural differences that must be bridged. The greatest challenge is in the implementation and the willingness of different teams to work with an engineering process, which may help ultimate system integration but may place additional burdens on individual participants. This presentation will address approaches for effective operations in this environment.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent
2015-03-01
Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, prior multicenter stroke rehabilitation trials lacked any significant neuroimaging analysis infrastructure. In stroke-related clinical trials, identification of stroke lesion characteristics can be meaningful, as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled in large stroke rehabilitation trials. To enhance the system's capability for data analysis and data reporting, we have integrated new features into the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from a user-defined contour. The digital case report form stores user input in a database, then displays the contents in the interface to allow for reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily accessible web-based tools to identify the vascular territory involved, estimate lesion area, and store these results in a web-based digital format.
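The planimetric area computation such a tool performs can be as simple as the shoelace formula applied to the user-drawn contour. This sketch is an illustration of that idea, not the ICARE system's code; the pixel-spacing parameter is an assumption for unit conversion.

```python
# Shoelace-formula area of a closed contour given as (x, y) pixel vertices.
import numpy as np

def contour_area(pts, mm_per_pixel=1.0):
    x, y = np.asarray(pts, dtype=float).T
    raw = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return raw * mm_per_pixel ** 2   # convert pixel^2 to mm^2

print(contour_area([(0, 0), (10, 0), (10, 10), (0, 10)]))  # 100.0
```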
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.
2014-12-01
The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within a HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.
Joint IRIS/PASSCAL UNAVCO Seismic and GPS Installations, Testing, and Development
NASA Astrophysics Data System (ADS)
Fowler, J.; Alvarez, M.; Beaudoin, B.; Jackson, M.; Feaux, K.; Ruud, O.; Andreatta, V.; Meertens, C.; Ingate, S.
2002-12-01
Future large-scale deformation initiatives such as EarthScope (http://www.earthscope.org/) will provide an opportunity for collocation and integration of GPS receivers and broadband and short period seismic instruments. Example integration targets include PBO backbone and cluster sites with USArray Transportable (Bigfoot) and Permanent Array. A GPS seismic integration and testing facility at the IRIS/PASSCAL Instrument Center in Socorro, NM is currently performing side-by-side testing of different seismometers, GPS receivers, communications hardware, power systems and data streaming software. One configuration tested uses an integrated VSAT data communications system and a broadband seismometer collocated with a geodetic quality GPS system. Data are routed through a VSAT hub and distributed to the UNAVCO Data Archive in Boulder and the IRIS Data Management Center in Seattle. Preliminary results indicate data availability approaching 100% with a maximum latency of 5 sec.
Mansoori, Bahar; Erhard, Karen K; Sunshine, Jeffrey L
2012-02-01
The availability of the Picture Archiving and Communication System (PACS) has revolutionized the practice of radiology in the past two decades and has been shown to increase productivity in radiology and medicine. PACS implementation and integration may bring along numerous unexpected issues, particularly in a large-scale enterprise. To achieve a successful PACS implementation, identifying the critical success and failure factors is essential. This article provides an overview of the process of implementing and integrating PACS in a comprehensive health system comprising an academic core hospital and numerous community hospitals. Important issues are addressed, touching all stages from planning to operation and training. The impact of an enterprise-wide radiology information system and PACS at the academic medical center (four specialty hospitals), in six additional community hospitals, and in all associated outpatient clinics, as well as the implications for the productivity and efficiency of the entire enterprise, are presented. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
Assessment of the Study of Army Logistics 1981. Volume II. Analysis of Recommendations.
1983-02-01
conceived. This third generation equipment, because of its size, cost and processing characteristics, demands large scale integrated processing with a... generated by DS4. Three systems changes to SAILS ABX have been implemented which reduce the volume of supply status provided to the DS4 system. ...generated by the wholesale system by 50 percent or nearly 1,000,000 transactions per month. Additional reductions will be generated by selected status
NASA Astrophysics Data System (ADS)
Tijerina, D.; Gochis, D.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Development of integrated hydrology modeling systems that couple atmospheric, land surface, and subsurface flow is a growing trend in hydrologic modeling. Using an integrated modeling framework, subsurface hydrologic processes, such as lateral flow and soil moisture redistribution, are represented in a single cohesive framework together with surface processes like overland flow and evapotranspiration. There is a need for these more intricate models in comprehensive hydrologic forecasting and water management over large spatial areas, specifically the Continental US (CONUS). Currently, two high-resolution, coupled hydrologic modeling applications have been developed for this domain: CONUS-ParFlow, built using the integrated hydrologic model ParFlow, and the National Water Model, which uses the NCAR Weather Research and Forecasting hydrological extension package (WRF-Hydro). Both ParFlow and WRF-Hydro include land surface models and overland flow and take advantage of parallelization and high-performance computing (HPC) capabilities; however, they take different approaches to representing overland and subsurface flow and groundwater-surface water interactions. Accurately representing large domains remains a challenge given the difficult task of representing complex hydrologic processes, the computational expense, and extensive data needs; both models have accomplished this, but they differ in approach and remain difficult to validate. Further exploration of effective methodologies for accurately representing large-scale hydrology with integrated models is needed to advance this growing field. Here we compare the outputs of CONUS-ParFlow and the National Water Model to each other and to observations to study the performance of hyper-resolution models over large domains. Models were compared over a range of scales for major watersheds within the CONUS, with a specific focus on the Mississippi, Ohio, and Colorado River basins. We use a novel set of approaches and analyses for this comparison to better understand differences in process representation and bias. This intercomparison is a step toward better understanding how much water we have and how surface and subsurface waters interact. Our goal is to advance our understanding and simulation of the hydrologic system and ultimately improve hydrologic forecasts.
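The abstract does not name its comparison metrics; as one plausible example of the skill scores commonly used in such model-observation intercomparisons, here is a sketch of Nash-Sutcliffe efficiency and percent bias. The streamflow values are invented for illustration.

```python
# Hedged sketch: standard hydrologic skill metrics, not the paper's exact analysis.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the obs mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    """Percent bias: positive means the model overestimates on average."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim - obs).sum() / obs.sum()

obs = [120.0, 150.0, 90.0, 200.0]   # observed streamflow (illustrative)
sim = [110.0, 160.0, 95.0, 210.0]   # one model's output at the same gauge
print(nse(sim, obs), pbias(sim, obs))
```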
Integrated Mid-Continent Carbon Capture, Sequestration & Enhanced Oil Recovery Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian McPherson
2010-08-31
A consortium of research partners led by the Southwest Regional Partnership on Carbon Sequestration and industry partners, including CAP CO2 LLC, Blue Source LLC, Coffeyville Resources, Nitrogen Fertilizers LLC, Ash Grove Cement Company, Kansas Ethanol LLC, Headwaters Clean Carbon Services, Black & Veatch, and Schlumberger Carbon Services, conducted a feasibility study of a large-scale CCS commercialization project that included large-scale CO2 sources. The overall objective of this project, entitled the 'Integrated Mid-Continent Carbon Capture, Sequestration and Enhanced Oil Recovery Project', was to design an integrated system of US mid-continent industrial CO2 sources with CO2 capture, and geologic sequestration in deep saline formations and in oil field reservoirs with concomitant EOR. Findings of this project suggest that deep saline sequestration in the mid-continent region is not feasible without major financial incentives, such as tax credits or otherwise, that do not exist at this time. However, results of the analysis suggest that enhanced oil recovery with carbon sequestration is indeed feasible and practical for specific types of geologic settings in the Midwestern U.S.
Optical systems integrated modeling
NASA Technical Reports Server (NTRS)
Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck
1992-01-01
An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.
Chapter 1: Biomedical Knowledge Integration
Payne, Philip R. O.
2012-01-01
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems. PMID:23300416
Computer-aided engineering of semiconductor integrated circuits
NASA Astrophysics Data System (ADS)
Meindl, J. D.; Dutton, R. W.; Gibbons, J. F.; Helms, C. R.; Plummer, J. D.; Tiller, W. A.; Ho, C. P.; Saraswat, K. C.; Deal, B. E.; Kamins, T. I.
1980-07-01
Economical procurement of small quantities of high performance custom integrated circuits for military systems is impeded by inadequate process, device and circuit models that handicap low cost computer aided design. The principal objective of this program is to formulate physical models of fabrication processes, devices and circuits to allow total computer-aided design of custom large-scale integrated circuits. The basic areas under investigation are (1) thermal oxidation, (2) ion implantation and diffusion, (3) chemical vapor deposition of silicon and refractory metal silicides, (4) device simulation and analytic measurements. This report discusses the fourth year of the program.
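Of the process models named, thermal oxidation has a classic closed form, the Deal-Grove relation x^2 + A*x = B*(t + tau). The sketch below solves it for oxide thickness versus time; the coefficients are textbook-style placeholders (roughly dry oxidation near 1000 °C), not the program's fitted values.

```python
# Deal-Grove oxidation model: x^2 + A*x = B*(t + tau)
#   => x(t) = (A/2) * (sqrt(1 + (t + tau)/(A^2/(4*B))) - 1)
import math

def oxide_thickness(t_hours, A=0.165, B=0.0117, tau=0.37):
    """Oxide thickness in microns; A (um), B (um^2/h), tau (h) are illustrative."""
    return 0.5 * A * (math.sqrt(1.0 + (t_hours + tau) / (A * A / (4.0 * B))) - 1.0)

print(oxide_thickness(1.0))   # oxide grown after one hour, ~0.07 um here
```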
2014-09-30
repeating pulse-like signals were investigated. Software prototypes were developed and integrated into distinct streams of research; projects... to study complex sound archives spanning large spatial and temporal scales. A new post-processing method for detection and classification was also... false positive rates. HK-ANN was successfully tested for a large minke whale dataset, but could easily be used on other signal types. Various
2014-03-01
wind turbines from General Electric. China recognizes the issues with IPR but it is something that will take time to fix. It will be a significant... Large aircraft; large-scale oil and gas exploration; manned space, including lunar exploration; next-generation broadband wireless... circuits, and building an innovation system for China’s integrated circuit (IC) manufacturing industry. 3. New generation broadband wireless mobile
Social-ecological resilience and geomorphic systems
NASA Astrophysics Data System (ADS)
Chaffin, Brian C.; Scown, Murray
2018-03-01
Governance of coupled social-ecological systems (SESs) and the underlying geomorphic processes that structure and alter Earth's surface is a key challenge for global sustainability amid the increasing uncertainty and change that defines the Anthropocene. Social-ecological resilience as a concept of scientific inquiry has contributed to new understandings of the dynamics of change in SESs, increasing our ability to contextualize and implement governance in these systems. Often, however, the importance of geomorphic change and geomorphological knowledge is somewhat missing from processes employed to inform SES governance. In this contribution, we argue that geomorphology and social-ecological resilience research should be integrated to improve governance toward sustainability. We first provide definitions of engineering, ecological, community, and social-ecological resilience and then explore the use of these concepts within and alongside geomorphology in the literature. While ecological studies often consider geomorphology as an important factor influencing the resilience of ecosystems and geomorphological studies often consider the engineering resilience of geomorphic systems of interest, very few studies define and employ a social-ecological resilience framing and explicitly link the concept to geomorphic systems. We present five key concepts (scale, feedbacks, state or regime, thresholds and regime shifts, and humans as part of the system) which we believe can help explicitly link important aspects of social-ecological resilience inquiry and geomorphological inquiry in order to strengthen the impact of both lines of research. Finally, we discuss how these five concepts might be used to integrate social-ecological resilience and geomorphology to better understand change in, and inform governance of, SESs. To compound these dynamics of resilience, complex systems are nested and cross-scale interactions from smaller and larger scales relative to the system of interest can play formative roles during periods of collapse and reorganization. Large- and small-scale disturbances as well as large-scale system memory/capacity and small-scale innovation can have significant impacts on the trajectory of a reorganizing system (Gunderson and Holling, 2002; Chaffin and Gunderson, 2016). Attempts to measure the property of ecological resilience across complex systems amounts to attempts to measure the persistence of system-controlling variables, including processes, parameters, and important feedbacks, when the system is exposed to varying degrees of disturbance (Folke, 2016).
Real-Time Large-Scale Dense Mapping with Surfels
Fu, Xingyin; Zhu, Feng; Wu, Qingxiao; Sun, Yunlei; Lu, Rongrong; Yang, Ruigang
2018-01-01
Real-time dense mapping systems have been developed since the birth of consumer RGB-D cameras. Currently, there are two commonly used models in dense mapping systems: the truncated signed distance function (TSDF) and the surfel. State-of-the-art dense mapping systems usually work fine within small-sized regions, but the generated dense surface may be unsatisfactory around loop closures when the system's tracking drift grows large. In addition, the efficiency of a surfel-based system slows down as the number of model points in the map becomes large. In this paper, we propose to use two maps in the dense mapping system. The RGB-D images are integrated into a local surfel map. Old surfels that were reconstructed earlier and lie far away from the camera frustum are moved from the local map to the global map. The number of surfels updated in the local map as each frame arrives is therefore kept bounded. As a result, the scene that can be reconstructed by our system is very large, and its frame rate remains high. We detect loop closures and optimize the pose graph to distribute system tracking drift. The positions and normals of the surfels in the map are also corrected using an embedded deformation graph so that they are consistent with the updated poses. In order to deal with large surface deformations, we propose a new method for constructing constraints with system trajectories and loop closure keyframes. The proposed method stabilizes large-scale surface deformation. Experimental results show that our system outperforms prior state-of-the-art dense mapping systems. PMID:29747450
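The local/global two-map split can be sketched in a few lines. In this illustration a simple distance test stands in for the paper's camera-frustum test, and the data layout and threshold are my own inventions.

```python
# Sketch of migrating far-away surfels from a bounded local map to a global map.
import numpy as np

def split_maps(local_pts, cam_pos, radius=5.0):
    """Move surfel positions farther than `radius` from the camera to the global map."""
    d = np.linalg.norm(local_pts - cam_pos, axis=1)
    keep = d <= radius
    return local_pts[keep], local_pts[~keep]   # (new local map, moved-to-global)

local = np.random.default_rng(0).random((1000, 3)) * 20.0
local, moved = split_maps(local, cam_pos=np.array([10.0, 10.0, 10.0]))
print(len(local), len(moved))   # local stays bounded; the rest goes global
```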
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.
NASA Technical Reports Server (NTRS)
Brooks, Rodney Allen; Stein, Lynn Andrea
1994-01-01
We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We will build an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to 'think' by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.
The Integrated Hazard Analysis Integrator
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2009-01-01
Hazard analysis addresses hazards that arise in the design, development, manufacturing, construction, facilities, transportation, operations and disposal activities associated with hardware, software, maintenance, operations and environments. An integrated hazard is an event or condition that is caused by or controlled by multiple systems, elements, or subsystems. Integrated hazard analysis (IHA) is especially daunting and ambitious for large, complex systems such as NASA's Constellation program, which incorporates program, systems and element components that impact others (International Space Station, public, International Partners, etc.). An appropriate IHA should identify all hazards, causes, controls and verifications used to mitigate the risk of catastrophic loss of crew, vehicle and/or mission. Unfortunately, in the current age of increased technology dependence, there is the tendency to sometimes overlook the necessary and sufficient qualifications of the integrator, that is, the person/team that identifies the parts, analyzes the architectural structure, aligns the analysis with the program plan and then communicates/coordinates with large and small components, each contributing necessary hardware, software and/or information to prevent catastrophic loss. As viewed from both Challenger and Columbia accidents, lack of appropriate communication, management errors and lack of resources dedicated to safety were cited as major contributors to these fatalities. From the accident reports, it would appear that the organizational impact of managers, integrators and safety personnel contributes more significantly to mission success and mission failure than purely technological components. If this is so, then organizations who sincerely desire mission success must put as much effort in selecting managers and integrators as they do when designing the hardware, writing the software code and analyzing competitive proposals. This paper will discuss the necessary and sufficient requirements of one of the significant contributors to mission success, the IHA integrator. Discussions will be provided to describe both the mindset required as well as deleterious assumptions/behaviors to avoid when integrating within a large scale system.
Geometric quantification of features in large flow fields.
Kendall, Wesley; Huang, Jian; Peterka, Tom
2012-01-01
Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
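As one concrete example of the neighborhood-based geometric attributes such a system computes along traced field lines, here is a discrete curvature estimate for a polyline. This is my illustration of the general idea, not the system's code.

```python
# Discrete curvature along a traced field line, from consecutive unit tangents.
import numpy as np

def polyline_curvature(pts):
    pts = np.asarray(pts, float)
    seg = np.diff(pts, axis=0)
    t = seg / np.linalg.norm(seg, axis=1, keepdims=True)    # unit tangents
    # turning angle per unit arc length at each interior vertex
    cosang = np.clip(np.einsum("ij,ij->i", t[:-1], t[1:]), -1.0, 1.0)
    ds = 0.5 * (np.linalg.norm(seg[:-1], axis=1) + np.linalg.norm(seg[1:], axis=1))
    return np.arccos(cosang) / ds

theta = np.linspace(0.0, np.pi, 50)
arc = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(50)])
print(polyline_curvature(arc).mean())   # ~1.0 for a unit-circle arc
```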
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N^3) operations and O(N^2) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
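The quantity being evaluated is the standard RPA correlation-energy integral over imaginary frequency, Ec = (1/2π) ∫₀^∞ Tr[ln(1 − χ₀(iω)v) + χ₀(iω)v] dω. The toy below shows the structure of that quadrature with random stand-in matrices; the fake response function and all sizes are assumptions, not the paper's quantities.

```python
# Toy RPA correlation-energy quadrature; matrices are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 6
v = rng.random((n, n)); v = 0.05 * (v + v.T)     # small symmetric "Coulomb" matrix

def chi0(w):                                      # fake response, decays with frequency
    return -np.eye(n) / (1.0 + w * w)

def integrand(w):
    m = chi0(w) @ v
    sign, logdet = np.linalg.slogdet(np.eye(n) - m)   # Tr ln(1 - m) = ln det(1 - m)
    return sign * logdet + np.trace(m)

# Gauss-Legendre on [0, inf) via the substitution w = x / (1 - x)
x, wts = np.polynomial.legendre.leggauss(40)
x = 0.5 * (x + 1.0); wts = 0.5 * wts              # map nodes/weights to [0, 1]
Ec = sum(wt / (1 - xi) ** 2 * integrand(xi / (1 - xi)) for xi, wt in zip(x, wts))
print(Ec / (2.0 * np.pi))
```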
Cortico-hippocampal systems involved in memory and cognition: the PMAT framework.
Ritchey, Maureen; Libby, Laura A; Ranganath, Charan
2015-01-01
In this chapter, we review evidence that the cortical pathways to the hippocampus appear to extend from two large-scale cortical systems: a posterior medial (PM) system that includes the parahippocampal cortex and retrosplenial cortex, and an anterior temporal (AT) system that includes the perirhinal cortex. This "PMAT" framework accounts for differences in the anatomical and functional connectivity of the medial temporal lobes, which may underpin differences in cognitive function between the systems. The PM and AT systems make distinct contributions to memory and to other cognitive domains, and convergent findings suggest that they are involved in processing information about contexts and items, respectively. In order to support the full complement of memory-guided behavior, the two systems must interact, and the hippocampus and ventromedial prefrontal cortex may serve as sites of integration between the two systems. We conclude that when considering the "connected hippocampus," inquiry should extend beyond the medial temporal lobes to include the large-scale cortical systems of which they are a part. © 2015 Elsevier B.V. All rights reserved.
Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay
2018-01-01
The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data, to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, LINCS Data Registry (LDR). LINCS data served on the LDP contains extensive metadata and curated annotations. We highlight the features of the LDP user interface that is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462
An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons per year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.
NASA Astrophysics Data System (ADS)
Suzuki, Ryosuke; Nishimura, Motoki; Yuan, Lee Chang; Kamahara, Hirotsugu; Atsuta, Yoichi; Daimon, Hiroyuki
2017-10-01
Utilization of sewage sludge using anaerobic digestion has been promoted for decades. However, it is still relatively uncommon, especially in Japan. As an approach to promote the utilization of sewage sludge using anaerobic digestion, an integrated system that combines anaerobic digestion with a greenhouse, composting and seaweed cultivation was proposed. Based on the concept of the integrated system, not only can sewage sludge be treated using anaerobic digestion to create green energy, but by-products such as CO2 and heat produced during the process can also be utilized for crop production. In this study, the potential of such an integrated system was discussed through the estimation of a possible commercial scale as well as a comparison of energy consumption with the conventional approach to sewage sludge treatment, which is incineration. The estimation of the possible commercial scale was calculated based on the carbon flow of the system. Results showed that 25% of the current total electricity of the wastewater treatment plant can be covered by the energy produced using anaerobic digestion of sewage sludge. It was estimated that the total energy consumption of the integrated system was 14% lower than that of the incineration approach. In addition to the large amount of crops that can be produced, this study aims to serve as a showcase of the potential of sewage sludge as a biomass resource through the proposed integrated system. The extra value of producing crops through the utilization of CO2 and heat can serve as a stimulus to the public, leading to greater interest in utilizing sewage sludge through anaerobic digestion.
Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies
NASA Astrophysics Data System (ADS)
Xie, S.; Zhang, Y.
2011-12-01
The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), the two key modeling frameworks widely used to link field data to climate model development. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign, conducted during the period 22 April 2011 to 06 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy, so that the final analysis data are dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems at various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
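The "smallest possible adjustment" step of constrained variational analysis reduces, for linear constraints A x = b, to a minimum-norm least-squares update. The toy below shows only that core step; real analyses adjust full sounding profiles of several state variables under multiple weighted budget constraints.

```python
# Minimum-norm adjustment so that the adjusted state satisfies A @ x' = b.
import numpy as np

def min_norm_adjust(x, A, b):
    """Return x' closest to x in the 2-norm with A @ x' = b."""
    r = b - A @ x                                  # constraint residual
    return x + A.T @ np.linalg.solve(A @ A.T, r)

x = np.array([1.0, 2.0, 3.0, 4.0])                # e.g. column moisture values (toy)
A = np.array([[1.0, 1.0, 1.0, 1.0]])              # a column-integrated budget
b = np.array([11.0])                               # observed budget constraint
x_adj = min_norm_adjust(x, A, b)
print(x_adj, A @ x_adj)                            # sums to 11 with minimal change
```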
An Integrated Unix-based CAD System for the Design and Testing of Custom VLSI Chips
NASA Technical Reports Server (NTRS)
Deutsch, L. J.
1985-01-01
A computer aided design (CAD) system that is being used at the Jet Propulsion Laboratory for the design of custom and semicustom very large scale integrated (VLSI) chips is described. The system consists of a Digital Equipment Corporation VAX computer with the UNIX operating system and a collection of software tools for the layout, simulation, and verification of microcircuits. Most of these tools were written by the academic community and are, therefore, available to JPL at little or no cost. Some small pieces of software have been written in-house in order to make all the tools interact with each other with a minimal amount of effort on the part of the designer.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
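As a rough illustration of the library-based tool chain described above, a PyNN-style script can mix LIF and Izhikevich populations in a single heterogeneous simulation (a sketch assuming a PyNN 0.8+ compatible SpiNNaker backend; population sizes, parameters and connectivity are invented for the example):

```python
import pyNN.spiNNaker as sim  # assumed backend module; pyNN.nest etc. also work

sim.setup(timestep=1.0)  # ms

# heterogeneous neural types coexisting in one network
lif = sim.Population(100, sim.IF_curr_exp(tau_m=20.0), label="LIF")
izh = sim.Population(100, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0),
                     label="Izhikevich")
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))

sim.Projection(noise, lif, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(lif, izh, sim.FixedProbabilityConnector(p_connect=0.1),
               synapse_type=sim.StaticSynapse(weight=0.2, delay=1.0))

lif.record("spikes")
izh.record("spikes")
sim.run(1000.0)          # ms of simulated time
spikes = izh.get_data()  # Neo Block holding the recorded spike trains
sim.end()
```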
ERIC Educational Resources Information Center
Badilescu-Buga, Emil
2012-01-01
Learning Activity Management System (LAMS) has been trialled and used by users from many countries around the globe, but despite the positive attitude towards its potential benefits to pedagogical processes, its adoption in practice has been uneven, reflecting how difficult it is to make a new technology-based concept an integral part of the…
ERIC Educational Resources Information Center
McGann, Sean T.; Frost, Raymond D.; Matta, Vic; Huang, Wayne
2007-01-01
Information Systems (IS) departments are facing challenging times as enrollments decline and the field evolves, thus necessitating large-scale curriculum changes. Our experience shows that many IS departments are in such a predicament because they have not evolved content quickly enough to keep it relevant and they do a poor job coordinating curriculum…
A Circuit Extraction System and Graphical Display for VLSI (Very Large Scale Integrated) Design.
1989-12-01
understandable as a net-list. The file contains information on the different physical layers of a polysilicon chip, not how these layers combine to form...
NASA Astrophysics Data System (ADS)
Turner, Sean W. D.; Marlow, David; Ekström, Marie; Rhodes, Bruce G.; Kularathna, Udaya; Jeffrey, Paul J.
2014-04-01
Despite a decade of research into climate change impacts on water resources, the scientific community has delivered relatively few practical methodological developments for integrating uncertainty into water resources system design. This paper presents an application of the "decision scaling" methodology for assessing climate change impacts on water resources system performance and asks how such an approach might inform planning decisions. The decision scaling method reverses the conventional ethos of climate impact assessment by first establishing the climate conditions that would compel planners to intervene. Climate model projections are introduced at the end of the process to characterize climate risk in such a way that avoids the process of propagating those projections through hydrological models. Here we simulated 1000 multisite synthetic monthly streamflow traces in a model of the Melbourne bulk supply system to test the sensitivity of system performance to variations in streamflow statistics. An empirical relation was derived to convert decision-critical flow statistics to climatic units, against which 138 alternative climate projections were plotted and compared. We defined the decision threshold in terms of a system yield metric constrained by multiple performance criteria. Our approach allows for fast and simple incorporation of demand forecast uncertainty and demonstrates the reach of the decision scaling method through successful execution in a large and complex water resources system. Scope for wider application in urban water resources planning is discussed.
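The decision-scaling logic, stress-testing the system first and only afterwards overlaying climate projections, can be caricatured in a few lines (a toy mass-balance storage model with invented numbers, not the Melbourne bulk supply system model):

```python
import numpy as np

def reliability(inflows, demand, capacity):
    """Monthly mass balance for a toy storage system; returns the
    fraction of months in which demand is fully met."""
    storage, months_met = capacity / 2.0, 0
    for q in inflows:
        storage = min(storage + q - demand, capacity)
        if storage >= 0.0:
            months_met += 1
        storage = max(storage, 0.0)   # an empty reservoir floors at zero
    return months_met / len(inflows)

rng = np.random.default_rng(1)
historical = rng.gamma(shape=2.0, scale=50.0, size=1200)  # synthetic flows

# scan mean-flow multipliers to locate the decision-critical climate state,
# then (in the full method) plot climate projections against that threshold
for factor in np.linspace(1.0, 0.5, 11):
    r = reliability(factor * historical, demand=80.0, capacity=2000.0)
    if r < 0.95:                       # planning threshold breached
        print(f"critical mean-flow scaling ~{factor:.2f} (reliability {r:.3f})")
        break
```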
Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.
Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A
2010-05-25
Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework that is the most natural fit for the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but it is exceptionally time-consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Analysis and Visualization (MIPAV), Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).
NASA Astrophysics Data System (ADS)
Wu, Bin; Zheng, Yi; Wu, Xin; Tian, Yong; Han, Feng; Liu, Jie; Zheng, Chunmiao
2015-04-01
Integrated surface water-groundwater modeling can provide a comprehensive and coherent understanding on basin-scale water cycle, but its high computational cost has impeded its application in real-world management. This study developed a new surrogate-based approach, SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), to incorporate the integrated modeling into water management optimization. Its applicability and advantages were evaluated and validated through an optimization research on the conjunctive use of surface water (SW) and groundwater (GW) for irrigation in a semiarid region in northwest China. GSFLOW, an integrated SW-GW model developed by USGS, was employed. The study results show that, due to the strong and complicated SW-GW interactions, basin-scale water saving could be achieved by spatially optimizing the ratios of groundwater use in different irrigation districts. The water-saving potential essentially stems from the reduction of nonbeneficial evapotranspiration from the aqueduct system and shallow groundwater, and its magnitude largely depends on both water management schemes and hydrological conditions. Important implications for water resources management in general include: first, environmental flow regulation needs to take into account interannual variation of hydrological conditions, as well as spatial complexity of SW-GW interactions; and second, to resolve water use conflicts between upper stream and lower stream, a system approach is highly desired to reflect ecological, economic, and social concerns in water management decisions. Overall, this study highlights that surrogate-based approaches like SOIM represent a promising solution to filling the gap between complex environmental modeling and real-world management decision-making.
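A generic surrogate-based optimization loop of the kind SOIM exemplifies can be sketched with a Gaussian-process surrogate and an upper-confidence-bound acquisition rule (the objective function, dimensions and settings below are placeholders standing in for an expensive GSFLOW run, not the study's actual configuration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_model(x):
    """Stand-in for an integrated SW-GW simulation: returns a
    water-saving objective for a vector of GW-use ratios."""
    return -np.sum((x - 0.4) ** 2) + 0.05 * np.sin(10 * x).sum()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 3))           # initial designs (3 districts)
y = np.array([expensive_model(x) for x in X])

for _ in range(15):                           # surrogate-assisted search
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(2000, 3))  # cheap candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(mu + 1.0 * sd)]    # upper-confidence-bound pick
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))  # one expensive run per iteration

print("best GW-use ratios:", X[np.argmax(y)])
```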
Development of mpi_EPIC model for global agroecosystem modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...
2014-12-31
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
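The message-passing pattern behind mpi_EPIC, distributing independent grid-cell simulations across ranks and reducing the results, might look like the following with mpi4py (the cell model is a dummy stand-in for EPIC, and the decomposition is a simplified cyclic split):

```python
from mpi4py import MPI  # message passing interface, as used in mpi_EPIC

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_cells = 100_000                   # global grid cells to simulate
local = range(rank, n_cells, size)  # cyclic decomposition balances load

def simulate_cell(cell_id):
    """Stand-in for a 30-year EPIC point run on one grid cell."""
    return (cell_id * 2654435761 % 97) / 97.0  # dummy yield value

local_sum = sum(simulate_cell(c) for c in local)
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"mean simulated yield: {total / n_cells:.4f}")
```

Run with, e.g., `mpiexec -n 8 python run_model.py`; each rank simulates its own subset of cells concurrently.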
Mathieson, William; Guljar, Nafia; Sanchez, Ignacio; Sroya, Manveer; Thomas, Gerry A
2018-05-03
DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tissue blocks is amenable to analytical techniques, including sequencing. DNA extraction protocols are typically long and complex, often involving an overnight proteinase K digest. Automated platforms that shorten and simplify the process are therefore an attractive proposition for users wanting a faster turn-around or to process large numbers of biospecimens. It is, however, unclear whether automated extraction systems return poorer DNA yields or quality than manual extractions performed by experienced technicians. We extracted DNA from 42 FFPE clinical tissue biospecimens using the QiaCube (Qiagen) and ExScale (ExScale Biospecimen Solutions) automated platforms, comparing DNA yields and integrities with those from manual extractions. The QIAamp DNA FFPE Spin Column Kit was used for manual and QiaCube DNA extractions and the ExScale extractions were performed using two of the manufacturer's magnetic bead kits: one extracting DNA only and the other simultaneously extracting DNA and RNA. In all automated extraction methods, DNA yields and integrities (assayed using DNA Integrity Numbers from a 4200 TapeStation and the qPCR-based Illumina FFPE QC Assay) were poorer than in the manual method, with the QiaCube system performing better than the ExScale system. However, ExScale was fastest, offered the highest reproducibility when extracting DNA only, and required the least intervention or technician experience. Thus, the extraction methods have different strengths and weaknesses, would appeal to different users with different requirements, and therefore, we cannot recommend one method over another.
Discrete elements for 3D microfluidics.
Bhargava, Krisna C; Thompson, Bryant; Malmstadt, Noah
2014-10-21
Microfluidic systems are rapidly becoming commonplace tools for high-precision materials synthesis, biochemical sample preparation, and biophysical analysis. Typically, microfluidic systems are constructed in monolithic form by means of microfabrication and, increasingly, by additive techniques. These methods restrict the design and assembly of truly complex systems by placing unnecessary emphasis on complete functional integration of operational elements in a planar environment. Here, we present a solution based on discrete elements that liberates designers to build large-scale microfluidic systems in three dimensions that are modular, diverse, and predictable by simple network analysis techniques. We develop a sample library of standardized components and connectors manufactured using stereolithography. We predict and validate the flow characteristics of these individual components to design and construct a tunable concentration gradient generator with a scalable number of parallel outputs. We show that these systems are rapidly reconfigurable by constructing three variations of a device for generating monodisperse microdroplets in two distinct size regimes and in a high-throughput mode by simple replacement of emulsifier subcircuits. Finally, we demonstrate the capability for active process monitoring by constructing an optical sensing element for detecting water droplets in a fluorocarbon stream and quantifying their size and frequency. By moving away from large-scale integration toward standardized discrete elements, we demonstrate the potential to reduce the practice of designing and assembling complex 3D microfluidic circuits to a methodology comparable to that found in the electronics industry.
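The "simple network analysis techniques" referenced above treat channels like resistors in an electronic circuit; a minimal sketch using the Hagen-Poiseuille relation (channel dimensions and pressures are illustrative, not the paper's component library):

```python
# Hydraulic analogy for discrete microfluidic elements:
# channel ~ resistor, R = 128*mu*L/(pi*d^4), and flow Q = dP / R.
import math

def channel_resistance(length_m, diameter_m, mu=1e-3):
    """Hagen-Poiseuille resistance of a circular channel (Pa*s/m^3);
    mu defaults to water at ~20 C."""
    return 128 * mu * length_m / (math.pi * diameter_m ** 4)

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# two discrete channel elements feeding a shared outlet
r1 = channel_resistance(10e-3, 500e-6)
r2 = channel_resistance(20e-3, 500e-6)
r_net = parallel(r1, r2)
dp = 10e3                                          # 10 kPa applied pressure
print(f"total flow: {dp / r_net * 1e9:.2f} uL/s")  # Q = dP / R
```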
Integrated monitoring of wind plant systems
NASA Astrophysics Data System (ADS)
Whelan, Matthew J.; Janoyan, Kerop D.; Qiu, Tong
2008-03-01
Wind power is a renewable source of energy that is quickly gaining acceptance. To date, advanced sensor technologies have focused chiefly on improving wind turbine rotor aerodynamics and increasing the efficiency of blade design. Alternatively, potential improvements in wind plant efficiency may be realized through reduction of reactionary losses of kinetic energy to the structural and substructural systems supporting the turbine mechanics. Investigation of the complete dynamic structural response of the wind plant is proposed using a large-scale, high-rate wireless sensor network. The wireless network enables sensors to be placed across the sizable structure, including the rotating blades, without the cabling issues and economic burden associated with large spools of measurement cable. A large array of multi-axis accelerometers is used to evaluate the modal properties of the system as well as of individual members, and would also enable long-term structural condition monitoring of the wind turbine. Additionally, environmental parameters, including wind speed, temperature, and humidity, are collected wirelessly for correlation. Such a wireless system could be integrated with electrical monitoring sensors and actuators and incorporated into a remote, multi-turbine, centralized plant monitoring and control system.
A CCD experimental platform for large telescope in Antarctica based on FPGA
NASA Astrophysics Data System (ADS)
Zhu, Yuhua; Qi, Yongjun
2014-07-01
The CCD detector is one of the important components of astronomical telescopes. A large telescope in Antarctica requires a CCD detector system with large format, high sensitivity and low noise. Because the site is extremely cold and unattended, system maintenance and software and hardware upgrades become hard problems. This paper introduces a general CCD controller experimental platform based on a field-programmable gate array (FPGA), which is in effect a large-scale field-reconfigurable array. By taking advantage of how easily such a system can be modified, the driving circuit, digital signal processing module, network communication interface, control algorithm validation and remote reconfiguration module can all be realized on the platform. With the concept of integrated hardware and software, the paper discusses the key technologies for building a scientific CCD system suited to the special working environment in Antarctica, focusing on a method of remote reconfiguration of the controller via the network, and then offers a feasible hardware and software solution.
Microfluidic large-scale integration: the evolution of design rules for biological automation.
Melin, Jessica; Quake, Stephen R
2007-01-01
Microfluidic large-scale integration (mLSI) refers to the development of microfluidic chips with thousands of integrated micromechanical valves and control components. This technology is utilized in many areas of biology and chemistry and is a candidate to replace today's conventional automation paradigm, which consists of fluid-handling robots. We review the basic development of mLSI and then discuss design principles of mLSI to assess the capabilities and limitations of the current state of the art and to facilitate the application of mLSI to areas of biology. Many design and practical issues, including economies of scale, parallelization strategies, multiplexing, and multistep biochemical processing, are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly integrated microfluidic networks are also highlighted.
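One widely cited mLSI design rule is the combinatorial multiplexer, in which n parallel flow channels are addressed using on the order of 2·log2(n) control channels (each address bit requires a complementary pair of valve lines). A one-function sketch of that scaling:

```python
import math

def mux_control_lines(n_flow):
    """Control lines needed by a binary-tree microfluidic multiplexer:
    each address bit needs a complementary pair of valve lines."""
    return 2 * math.ceil(math.log2(n_flow))

for n in (8, 64, 1024):
    print(n, "flow channels ->", mux_control_lines(n), "control lines")
```

This logarithmic scaling is what lets a chip with thousands of valves be driven by a modest number of external pressure lines.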
Virtual interface environment workstations
NASA Technical Reports Server (NTRS)
Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.
1988-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
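pcircle itself implements distributed work stealing; as a simplified illustration of MPI-based parallel file-system tooling, the sketch below statically partitions a file list across ranks and gathers per-file checksums (an illustration of the general pattern only, not pcircle's actual code):

```python
import hashlib
import os
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# rank 0 walks the namespace once, then shares the list with all ranks
files = None
if rank == 0:
    files = [os.path.join(d, f) for d, _, fs in os.walk(".") for f in fs]
files = comm.bcast(files, root=0)

local_digests = {}
for path in files[rank::size]:   # cyclic split of the file list
    h = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    local_digests[path] = h.hexdigest()

all_digests = comm.gather(local_digests, root=0)
if rank == 0:
    merged = {k: v for d in all_digests for k, v in d.items()}
    print(f"checksummed {len(merged)} files")
```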
Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures
NASA Astrophysics Data System (ADS)
Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A.; Park, Jiwoong
2017-10-01
High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides--which represent one- and three-atom-thick two-dimensional building blocks, respectively--have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.
Hu, Valerie W.
2012-01-01
Autism spectrum disorders (ASD) are pervasive neurodevelopmental disorders that affect an estimated 1 in 110 individuals. Although there is a strong genetic component associated with these disorders, this review focuses on the multi-factorial nature of ASD and how different genome-wide (genomic) approaches contribute to our understanding of autism. Emphasis is placed on the need to study defined ASD phenotypes as well as to integrate large-scale ‘omics’ data in order to develop a “systems level” perspective of ASD which, in turn, is necessary to allow predictions regarding responses to specific perturbations and interventions. PMID:22497667
Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis
NASA Astrophysics Data System (ADS)
Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro
2018-05-01
A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH is indeed true even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that a large deviation analysis would serve as a powerful method to investigate thermalization in the presence of large finite-size effects.
Integrated analysis of the effects of agricultural management on nitrogen fluxes at landscape scale.
Kros, J; Frumau, K F A; Hensen, A; de Vries, W
2011-11-01
The integrated modelling system INITIATOR was applied to a landscape in the northern part of the Netherlands to assess current nitrogen fluxes to air and water and the impact of various agricultural measures on these fluxes, using spatially explicit input data on animal numbers, land use, agricultural management, meteorology and soil. Average model results on NH(3) deposition and N concentrations in surface water appear to be comparable to observations, but the deviation can be large at local scale, despite the use of high resolution data. Evaluated measures include: air scrubbers reducing NH(3) emissions from poultry and pig housing systems, low protein feeding, reduced fertilizer amounts and low-emission stables for cattle. Low protein feeding and restrictive fertilizer application had the largest effect on both N inputs and N losses, resulting in N deposition reductions on Natura 2000 sites of 10% and 12%, respectively. Copyright © 2011 Elsevier Ltd. All rights reserved.
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J; Inzé, Dirk; Van de Peer, Yves
2013-03-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular, using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present an extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein-protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, thereby stimulating the application of text mining data in future plant biology studies.
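The "guilt by association" step, transferring annotations from characterized genes to tightly connected neighbors in the integrated network, can be sketched with networkx (gene names, labels and edges are placeholders, not curated Arabidopsis data):

```python
import networkx as nx
from networkx.algorithms import community

# toy integration of text-mined and experimental interactions; the
# "source" edge attribute records where each interaction came from
G = nx.Graph()
G.add_edges_from([("geneA", "geneB"), ("geneB", "geneC")],
                 source="text_mining")
G.add_edges_from([("geneB", "geneD"), ("geneC", "geneD")],
                 source="experimental")

# delineate tightly connected clusters, then transfer annotations
# from characterized genes to uncharacterized cluster members
modules = community.greedy_modularity_communities(G)
known = {"geneA": "stress response"}
for module in modules:
    labels = {known[g] for g in module if g in known}
    for g in module:
        if g not in known and labels:
            print(f"{g}: candidate function(s) {labels} (guilt by association)")
```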
Single-user MIMO system, Painlevé transcendents, and double scaling
NASA Astrophysics Data System (ADS)
Chen, Hongmei; Chen, Min; Blower, Gordon; Chen, Yang
2017-12-01
In this paper, we study a particular Painlevé V (denoted PV) that arises from multi-input multi-output (MIMO) wireless communication systems. Such a PV appears through its intimate relation with the Hankel determinant that describes the moment generating function (MGF) of the Shannon capacity. This originates through the multiplication of the Laguerre weight, or gamma density, x^α e^{−x}, x > 0, for α > −1, by (1 + x/t)^λ with t > 0 a scaling parameter. Here the λ parameter "generates" the Shannon capacity; see Chen, Y. and McKay, M. R. [IEEE Trans. Inf. Theory 58, 4594-4634 (2012)]. It was found that the MGF has an integral representation as a functional of y(t) and y′(t), where y(t) satisfies the "classical form" of PV. In this paper, we consider the situation where n, the number of transmit antennas (or the size of the random matrix), tends to infinity and the signal-to-noise ratio P tends to infinity such that s = 4n²/P is finite. Under such double scaling, the MGF, effectively an infinite determinant, has an integral representation in terms of a "lesser" Painlevé III (PIII). We also consider the situations where α = k + 1/2, k ∈ ℕ, and α ∈ {0, 1, 2, …}, λ ∈ {1, 2, …}, linking the relevant quantity to a solution of the two-dimensional sine-Gordon equation in radial coordinates and a certain discrete Painlevé II. From the large-n asymptotics of the orthogonal polynomials, which appear naturally, we obtain the double-scaled MGF for small and large s, together with the constant term in the large-s expansion. With the aid of these, we derive a number of cumulants and find that the capacity distribution function is non-Gaussian.
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88) while the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
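The NSE values quoted above compare simulated and observed hydrographs; for reference, the metric is essentially one line of numpy (the sample series below are invented, not the study's hydrographs):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations;
    1.0 is a perfect fit, 0.0 is no better than the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# hypothetical hydrograph comparison
obs = np.array([10.0, 40.0, 120.0, 80.0, 30.0])
print(f"NSE = {nse([12, 38, 110, 85, 28], obs):.3f}")  # close to 1 for a good fit
```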
Large-scale PACS implementation.
Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G
1998-08-01
The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a Picture Archiving and Communications System (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, the hospital information system (HIS)/radiology information system (RIS) interface, user approval, networking, workstation deployment and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas such as the operating rooms and trauma rooms where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduce the number of dedicated PACS review workstations. This session will focus on effective strategies for a seamless transition. Critical issues involve maintaining a good working relationship with the vendor, cultivating personnel readiness and instituting well-defined support systems. Success depends on the ability to integrate the institutional directives, user expectations and available technologies. A team approach is mandatory for success.
Kushniruk, Andre; Karson, Tom; Moore, Carlton; Kannry, Joseph
2003-01-01
Approaches to the development of information systems in large health care institutions range from prototyping to conventional development of large-scale production systems. This paper discusses the development of the SignOut System at Mount Sinai Medical Center, which was designed in 1997 to capture vital resident information. Local need quickly outstripped the proposed timeline for building a production system, and the prototype quickly became a production system. By the end of 2002 the New SignOut System was built as an integrated application that was a true production system. In this paper we discuss the design and implementation issues in moving from a prototype to a production system. The production system had a number of advantages, including increased organizational visibility, integration into enterprise resource planning and full-time staff for support. However, the prototype allowed for more rapid design and subsequent changes, required less training, and offered equal or superior help desk support. It is argued that healthcare IT systems may need characteristics of both prototype and production system development to rapidly meet the changing and different needs of healthcare user populations.
Report on phase 1 of the Microprocessor Seminar. [and associated large scale integration
NASA Technical Reports Server (NTRS)
1977-01-01
Proceedings of a seminar on microprocessors and associated large scale integrated (LSI) circuits are presented. The potential for commonality of device requirements, candidate processes and mechanisms for qualifying candidate LSI technologies for high reliability applications, and specifications for testing and testability were among the topics discussed. Various programs and tentative plans of the participating organizations in the development of high reliability LSI circuits are given.
Do large-scale assessments measure students' ability to integrate scientific knowledge?
NASA Astrophysics Data System (ADS)
Lee, Hee-Sun
2010-03-01
Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are used pervasively in large-scale assessments such as the Trends in International Mathematics and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool to assess students' knowledge integration ability.
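For reference, the Rasch Partial Credit Model used in the analysis assigns category probabilities proportional to exp(Σ_{j≤k}(θ − δ_j)); a minimal numpy sketch (the step difficulties are invented for a hypothetical 0/1/2 knowledge-integration rubric):

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Rasch Partial Credit Model category probabilities for one item.

    theta: examinee ability; deltas: step difficulties d_1..d_K.
    P(X=k) is proportional to exp(sum_{j<=k} (theta - d_j)), with the
    empty sum for k=0 taken as 0.
    """
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    expd = np.exp(steps - steps.max())   # numerically stabilized softmax
    return expd / expd.sum()

# a 3-category open-ended item scored 0/1/2: the rubric makes the second
# step (linking multiple ideas) harder than the first (one normative idea)
print(pcm_probs(theta=0.5, deltas=[-0.5, 1.2]))
```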
Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen
2008-07-29
This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km². A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distribution within the Haihe basin is different. The remote-sensing-derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
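The extended Kalman filter analysis step combines the model forecast with the remote-sensing retrieval according to their error covariances; a minimal sketch (the observation operator, state and numbers are hypothetical illustrations, not the WEP-L configuration):

```python
import numpy as np

def ekf_update(x_f, P_f, z, R, h, H):
    """One extended Kalman filter analysis step.

    x_f, P_f : forecast state and its error covariance
    z, R     : observation (e.g., retrieved evapotranspiration) and its variance
    h, H     : nonlinear observation operator and its Jacobian at x_f
    """
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)  # Kalman gain
    x_a = x_f + K @ (z - h(x_f))                      # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f            # analysis covariance
    return x_a, P_a

# toy: a soil-moisture state observed indirectly through a nonlinear ET model
h = lambda x: 2.0 * np.sqrt(x)           # hypothetical ET(x) relation
x_f, P_f = np.array([0.25]), np.array([[0.01]])
H = np.array([[1.0 / np.sqrt(x_f[0])]])  # dh/dx evaluated at x_f
x_a, P_a = ekf_update(x_f, P_f, np.array([1.1]), np.array([[0.04]]), h, H)
```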
2017-03-07
Integrating multiple sources of pharmacovigilance evidence has the potential to advance the science of safety signal detection and evaluation. In this regard, there is a need for more research on how to integrate multiple disparate evidence sources while making the evidence computable from a knowledge representation perspective (i.e., semantic enrichment). Existing frameworks suggest promising outcomes for such integration but employ a rather limited number of sources. In particular, none have been specifically designed to support both regulatory and clinical use cases, nor have any been designed to add new resources and use cases through an open architecture. This paper discusses the architecture and functionality of a system called Large-scale Adverse Effects Related to Treatment Evidence Standardization (LAERTES) that aims to address these shortcomings. LAERTES provides a standardized, open, and scalable architecture for linking evidence sources relevant to the association of drugs with health outcomes of interest (HOIs). Standard terminologies are used to represent different entities; for example, drugs and HOIs are represented in RxNorm and the Systematized Nomenclature of Medicine -- Clinical Terms, respectively. At the time of this writing, six evidence sources have been loaded into the LAERTES evidence base and are accessible through a prototype evidence-exploration user interface and a set of Web application programming interface services. This system operates within a larger software stack provided by the Observational Health Data Sciences and Informatics clinical research framework, including the relational Common Data Model for observational patient data created by the Observational Medical Outcomes Partnership. Elements of the Linked Data paradigm facilitate the systematic and scalable integration of relevant evidence sources. The prototype LAERTES system provides useful functionality while creating opportunities for further research. Future work will involve improving the method for normalizing drug and HOI concepts across the integrated sources, aggregating evidence at different levels of a hierarchy of HOI concepts, and developing a more advanced user interface for drug-HOI investigations.
Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.
Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G
2018-04-03
Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.
Gray, Kathleen
2016-01-01
Health informatics has a major role to play in optimising the management and use of data, information and knowledge in health systems. As health systems undergo digital transformation, it is important to consider informatics approaches not only to curriculum content but also to the design of learning environments and learning activities for health professional learning and development. An example of such an informatics approach is the use of large-scale, integrated public health platforms on the Internet as part of health professional learning and development. This article describes selected examples of such platforms, with a focus on how they may influence the direction of health professional learning and development. Significance for public health: The landscape of healthcare systems, public health systems, health research systems and professional education systems is fragmented, with many gaps and silos. More sophistication in the management of health data, information, and knowledge, based on public health informatics expertise, is needed to tackle key issues of prevention, promotion and policy-making. Platform technologies represent an emerging large-scale, highly integrated informatics approach to public health, combining the technologies of the Internet, the web, the cloud, social technologies, remote sensing and/or mobile apps into an online infrastructure that can allow more synergies in work within and across these systems. Health professional curricula need updating so that the health workforce has a deep and critical understanding of the way that platform technologies are becoming the foundation of the health sector. PMID:27190977
A systems-based partnership learning model for strengthening primary healthcare
2013-01-01
Background: Strengthening primary healthcare systems is vital to improving health outcomes and reducing inequity. However, there are few tools and models available in published literature showing how primary care system strengthening can be achieved on a large scale. Challenges to strengthening primary healthcare (PHC) systems include the dispersion, diversity and relative independence of primary care providers; the scope and complexity of PHC; limited infrastructure available to support population health approaches; and the generally poor and fragmented state of PHC information systems. Drawing on concepts of comprehensive PHC, integrated quality improvement (IQI) methods, system-based research networks, and system-based participatory action research, we describe a learning model for strengthening PHC that addresses these challenges. We describe the evolution of this model within the Australian Aboriginal and Torres Strait Islander primary healthcare context, successes and challenges in its application, and key issues for further research. Discussion: IQI approaches combined with system-based participatory action research and system-based research networks offer potential to support program implementation and ongoing learning across a wide scope of primary healthcare practice and on a large scale. The Partnership Learning Model (PLM) can be seen as an integrated model for large-scale knowledge translation across the scope of priority aspects of PHC. With appropriate engagement of relevant stakeholders, the model may be applicable to a wide range of settings. In IQI, and in the PLM specifically, there is a clear role for research in contributing to refining and evaluating existing tools and processes, and in developing and trialling innovations. Achieving an appropriate balance between funding IQI activity as part of routine service delivery and funding IQI related research will be vital to developing and sustaining this type of PLM. Summary: This paper draws together several different previously described concepts and extends the understanding of how PHC systems can be strengthened through systematic and partnership-based approaches. We describe a model developed from these concepts and its application in the Australian Indigenous primary healthcare context, and raise questions about sustainability and wider relevance of the model. PMID:24344640
Entropy, pumped-storage and energy system finance
NASA Astrophysics Data System (ADS)
Karakatsanis, Georgios
2015-04-01
Pumped-storage holds a key role in integrating renewable energy units and non-renewable fuel plants into large-scale electricity systems. An emerging issue is the development of financial engineering models with a physical basis to systematically fund energy system efficiency improvements across its operation. A fundamental physically based economic concept is the scarcity rent, which concerns the pricing of a natural resource's scarcity. Specifically, the scarcity rent comprises a fraction of a depleting resource's full price and accumulates to fund its more efficient future use. In an integrated energy system, scarcity rents derive from various resources and can be deposited into a pooled fund to finance the energy system's overall efficiency increase, allowing it to benefit from economies of scale. With pumped-storage incorporated into the system, water is upgraded to a hub resource in which the scarcity rents of all connected energy sources are denominated. However, as the water available for electricity generation or storage is also limited, a scarcity rent is imposed upon it as well. It is suggested that scarcity rent generation is reducible to three main factors incorporating uncertainty: (1) water's natural renewability, (2) the energy system's intermittent components and (3) deviations of base-load predictions from actual loads. For that purpose, the concept of entropy is used to measure the energy system's overall uncertainty, and hence pumped-storage intensity requirements and the water scarcity rents generated. Keywords: pumped-storage, integration, energy systems, financial engineering, physical basis, Scarcity Rent, pooled fund, economies of scale, hub resource, uncertainty, entropy Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
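The paper's use of entropy as an overall uncertainty measure can be illustrated by estimating the empirical Shannon entropy of forecast-deviation samples (the distributions below are invented stand-ins for the intermittent and base-load components; linking these entropies to scarcity rents is the paper's proposal, not shown here):

```python
import numpy as np

def shannon_entropy(samples, bins=20):
    """Empirical Shannon entropy (in nats) of a sample, e.g. base-load
    prediction deviations; higher entropy suggests more pumped-storage
    flexibility is needed to absorb the uncertainty."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(42)
wind_errors = rng.normal(0.0, 0.20, 10_000)  # intermittent component
load_errors = rng.normal(0.0, 0.05, 10_000)  # base-load forecast error
print(shannon_entropy(wind_errors), shannon_entropy(load_errors))
```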
R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove
2016-01-01
The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...
An integrative neural model of social perception, action observation, and theory of mind.
Yang, Daniel Y-J; Rosenblau, Gabriela; Keifer, Cara; Pelphrey, Kevin A
2015-04-01
In the field of social neuroscience, major branches of research have been instrumental in describing independent components of typical and aberrant social information processing, but the field as a whole lacks a comprehensive model that integrates different branches. We review existing research related to the neural basis of three key neural systems underlying social information processing: social perception, action observation, and theory of mind. We propose an integrative model that unites these three processes and highlights the posterior superior temporal sulcus (pSTS), which plays a central role in all three systems. Furthermore, we integrate these neural systems with the dual system account of implicit and explicit social information processing. Large-scale meta-analyses based on Neurosynth confirmed that the pSTS is at the intersection of the three neural systems. Resting-state functional connectivity analysis with 1000 subjects confirmed that the pSTS is connected to all other regions in these systems. The findings presented in this review are specifically relevant for psychiatric research especially disorders characterized by social deficits such as autism spectrum disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.
Development of a 3D printer using scanning projection stereolithography
Lee, Michael P.; Cooper, Geoffrey J. T.; Hinkley, Trevor; Gibson, Graham M.; Padgett, Miles J.; Cronin, Leroy
2015-01-01
We have developed a system for the rapid fabrication of low-cost 3D devices and systems in the laboratory with micro-scale features yet cm-scale objects. Our system is inspired by maskless lithography, where a digital micromirror device (DMD) is used to project patterns with resolution up to 10 µm onto a layer of photoresist. Large-area objects can be fabricated by stitching projected images over a 5 cm² area. The addition of a z-stage allows multiple layers to be stacked to create 3D objects, removing the need for any developing or etching steps but at the same time leading to true 3D devices which are robust, configurable and scalable. We demonstrate the applications of the system by printing a range of micro-scale objects as well as a fully functioning microfluidic droplet device and test its integrity by pumping dye through the channels. PMID:25906401
Drive to miniaturization: integrated optical networks on mobile platforms
NASA Astrophysics Data System (ADS)
Salour, Michael M.; Batayneh, Marwan; Figueroa, Luis
2011-11-01
With the rapid growth of the Internet, bandwidth demand for data traffic is continuing to explode. In addition, emerging and future applications are becoming more and more network-centric. With the proliferation of data communication platforms and data-intensive applications (e.g., cloud computing), high-bandwidth content such as video clips dominating the Internet, and social networking tools, a networking technology that can scale the Internet's capability (particularly its bandwidth) by two to three orders of magnitude is highly desirable. As the limits of Moore's law are approached, optical mesh networks based on wavelength-division multiplexing (WDM) have the ability to satisfy the large- and scalable-bandwidth requirements of our future backbone telecommunication networks. In addition, this trend is also affecting other special-purpose systems in applications such as mobile platforms, automobiles, aircraft, ships, tanks, and micro unmanned air vehicles (UAVs), which are becoming independent systems roaming the sky while sensing data, processing, making decisions, and even communicating and networking with other heterogeneous systems. Recently, WDM optical technologies have seen advances in transmission speeds, switching technologies, routing protocols, and control systems. Such advances have made WDM optical technology an appealing choice for the design of future Internet architectures. Along these lines, scientists across the entire spectrum of network architectures, from the physical layer to applications, have been working on developing devices and communication protocols that can take full advantage of the rapid advances in WDM technology. Nevertheless, the focus has always been on large-scale telecommunication networks that span hundreds and even thousands of miles. Given these advances, we investigate the vision and applicability of integrating traditionally large-scale WDM optical networks into miniaturized mobile platforms such as UAVs. We explain the benefits of WDM optical technology for these applications. We also describe some of the limitations of WDM optical networks as the size of a vehicle gets smaller, such as in micro-UAVs, and study the miniaturization and communication system limitations in such environments.
Regional crop yield forecasting: a probabilistic approach
NASA Astrophysics Data System (ADS)
de Wit, A.; van Diepen, K.; Boogaard, H.
2009-04-01
Information on the outlook on yield and production of crops over large regions is essential for government services dealing with import and export of food crops, for agencies with a role in food relief, for international organizations with a mandate in monitoring the world food production and trade, and for commodity traders. Process-based mechanistic crop models are an important tool for providing such information, because they can integrate the effect of crop management, weather and soil on crop growth. When properly integrated in a yield forecasting system, the aggregated model output can be used to predict crop yield and production at regional, national and continental scales. Nevertheless, given the scales at which these models operate, the results are subject to large uncertainties due to poorly known weather conditions and crop management. Current yield forecasting systems are generally deterministic in nature and provide no information about the uncertainty bounds on their output. To improve on this situation we present an ensemble-based approach where uncertainty bounds can be derived from the dispersion of results in the ensemble. The probabilistic information provided by this ensemble-based system can be used to quantify uncertainties (risk) on regional crop yield forecasts and can therefore be an important support to quantitative risk analysis in a decision making process.
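To make the ensemble idea concrete, the sketch below derives uncertainty bounds from the dispersion of ensemble members; the yield values, ensemble size, and percentile choices are invented for illustration and are not from the forecasting system described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble: each member would be one crop-model run driven
# by a perturbed weather/management realization; here we fake the runs
# with random draws around a nominal yield (t/ha).
ensemble_yields = 6.0 + 0.8 * rng.standard_normal(100)

median = np.percentile(ensemble_yields, 50)
lo, hi = np.percentile(ensemble_yields, [5, 95])
print(f"forecast: {median:.2f} t/ha (90% band: {lo:.2f}-{hi:.2f} t/ha)")
```

The width of the percentile band is the risk measure: a wide band signals poorly constrained weather or management inputs for that region.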
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
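A minimal sketch of coarse projective integration, with a toy logistic-growth law standing in for the expensive microscopic simulator (all functions and parameters here are hypothetical, chosen only to show the burst-then-extrapolate pattern):

```python
DT = 1e-3  # microscopic time step

def micro_step(x, dt=DT):
    # Stand-in for an expensive microscopic simulator (e.g., a cellular
    # automaton); here a simple logistic growth law.
    return x + dt * x * (1.0 - x)

def coarse_projective_integration(x0, burst_steps=50, project_dt=0.5, n_cycles=20):
    """Alternate short microscopic bursts with linear extrapolation of
    the coarse variable (here, x itself)."""
    x, t, history = x0, 0.0, [(0.0, x0)]
    for _ in range(n_cycles):
        # 1. Short burst of microscopic simulation.
        xs = [x]
        for _ in range(burst_steps):
            xs.append(micro_step(xs[-1]))
        # 2. Estimate the coarse time derivative from the burst tail.
        slope = (xs[-1] - xs[-2]) / DT
        # 3. Project the coarse variable forward over a long interval.
        x = xs[-1] + project_dt * slope
        t += burst_steps * DT + project_dt
        history.append((t, x))
    return history

print(coarse_projective_integration(0.05)[-1])  # approaches the fixed point x = 1
```

Increasing `project_dt` relative to the burst length is exactly the ratio the abstract identifies as the source of the computational savings.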
NASA Technical Reports Server (NTRS)
Billingsley, F.
1982-01-01
Concerns are expressed about the data handling aspects of system design and about enabling technology for data handling and data analysis. The status, contributing factors, critical issues, and recommendations for investigations are listed for data handling, rectification and registration, and information extraction. Potential support is identified for individual principal investigators' research tasks, systematic data system design, and system operation. The need for an airborne spectrometer-class instrument for fundamental research in high spectral and spatial resolution is indicated. Geographic information system formatting and labelling techniques, very large scale integration, and methods for providing multitype data sets must also be developed.
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
The Saskatchewan River Basin - a large scale observatory for water security research (Invited)
NASA Astrophysics Data System (ADS)
Wheater, H. S.
2013-12-01
The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and multiple jurisdictions. The SaskRB has therefore been developed as a large scale observatory, now a Regional Hydroclimate Project of the World Climate Research Programme's GEWEX project, and is available to contribute to the emerging North American Water Program. State-of-the-art hydro-ecological experimental sites have been developed for the key biomes, and a river and lake biogeochemical research facility, focussed on impacts of nutrients and exotic chemicals. Data are integrated at SaskRB scale to support the development of improved large scale climate and hydrological modelling products, the development of DSS systems for local, provincial and basin-scale management, and the development of related social science research, engaging stakeholders in the research and exploring their values and priorities for water security. The observatory provides multiple scales of observation and modelling required to develop: a) new climate, hydrological and ecological science and modelling tools to address environmental change in key environments, and their integrated effects and feedbacks at large catchment scale, b) new tools needed to support river basin management under uncertainty, including anthropogenic controls on land and water management and c) the place-based focus for the development of new transdisciplinary science.
Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2013-01-01
Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm², showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
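As an illustration of the "reduce then sample" strategy, the sketch below runs a plain random-walk Metropolis sampler against a cheap surrogate forward model; the surrogate, observation, and noise level are invented for illustration and are not taken from the SAGUARO project:

```python
import numpy as np

rng = np.random.default_rng(0)

def reduced_forward(theta):
    # Hypothetical reduced-order model standing in for an expensive PDE
    # solve; in "reduce then sample", every MCMC step calls this cheap
    # surrogate instead of the full simulation.
    return theta + 0.1 * theta**2

y_obs, sigma = 1.2, 0.05  # synthetic observation and noise level

def log_post(theta):
    # Gaussian likelihood around the surrogate prediction, flat prior.
    return -0.5 * ((reduced_forward(theta) - y_obs) / sigma) ** 2

theta, lp, samples = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean ~", np.mean(samples[1000:]))
```

"Sample then reduce" would instead keep the full model inside `log_post` but use adjoint-based gradient and Hessian information to build a smarter proposal than the random walk used here.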
Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few provide an integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
Integrated situational awareness for cyber attack detection, analysis, and mitigation
NASA Astrophysics Data System (ADS)
Cheng, Yi; Sagduyu, Yalin; Deng, Julia; Li, Jason; Liu, Peng
2012-06-01
Real-time cyberspace situational awareness is critical for securing and protecting today's enterprise networks from various cyber threats. When a security incident occurs, network administrators and security analysts need to know what exactly has happened in the network, why it happened, and what actions or countermeasures should be taken to quickly mitigate the potential impacts. In this paper, we propose an integrated cyberspace situational awareness system for efficient cyber attack detection, analysis and mitigation in large-scale enterprise networks. Essentially, a cyberspace common operational picture will be developed, which is a multi-layer graphical model and can efficiently capture and represent the statuses, relationships, and interdependencies of various entities and elements within and among different levels of a network. Once shared among authorized users, this cyberspace common operational picture can provide an integrated view of the logical, physical, and cyber domains, and a unique visualization of disparate data sets to support decision makers. In addition, advanced analyses, such as Bayesian Network analysis, will be explored to address the information uncertainty, dynamic and complex cyber attack detection, and optimal impact mitigation issues. All the developed technologies will be further integrated into an automatic software toolkit to achieve near real-time cyberspace situational awareness and impact mitigation in large-scale computer networks.
vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM
ERIC Educational Resources Information Center
Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn
2014-01-01
This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…
Architecting the Safety Assessment of Large-scale Systems Integration
2009-12-01
Hazards of Electromagnetic Radiation to Ordnance (HERO); Hazards of Electromagnetic Radiation to Fuel (HERF). The main reason that this particular safety study... radiation, high-voltage electric shocks and explosives safety. 1. Radiation Hazards (RADHAZ): RADHAZ describes the hazards of electromagnetic radiation... OP 3565/NAVAIR 16-1-529 [19 and 20], these hazards are segregated as follows: Hazards of Electromagnetic...
James B. McCarter; Sean Healey
2015-01-01
The Forest Carbon Management Framework (ForCaMF) integrates Forest Inventory and Analysis (FIA) plot inventory data, disturbance histories, and carbon response trajectories to develop estimates of disturbance and management effects on carbon pools for the National Forest System. All appropriate FIA inventory plots are simulated using the Forest Vegetation Simulator (...
VLSI Microsystem for Rapid Bioinformatic Pattern Recognition
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Lue, Jaw-Chyng
2009-01-01
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Pattern Analysis and Decision Support for Cancer through Clinico-Genomic Profiles
NASA Astrophysics Data System (ADS)
Exarchos, Themis P.; Giannakeas, Nikolaos; Goletsis, Yorgos; Papaloukas, Costas; Fotiadis, Dimitrios I.
Advances in genome technology are playing a growing role in medicine and healthcare. With the development of new technologies and opportunities for large-scale analysis of the genome, genomic data have a clear impact on medicine. Cancer prognostics and therapeutics are among the first major test cases for genomic medicine, given that all types of cancer are related to genomic instability. In this paper we present a novel system for pattern analysis and decision support in cancer. The system integrates clinical data from electronic health records and genomic data. Pattern analysis and data mining methods are applied to these integrated data, and the discovered knowledge is used for cancer decision support. Through this integration, conclusions can be drawn for early diagnosis, staging and cancer treatment.
Design integration and noise studies for jet STOL aircraft. Volume 1: Program summary
NASA Technical Reports Server (NTRS)
Okeefe, V. O.; Kelley, G. S.
1972-01-01
This program was undertaken to develop, through analysis, design, experimental static testing, wind tunnel testing, and design integration studies, an augmentor wing jet flap configuration for a jet STOL transport aircraft having maximum propulsion and aerodynamic performance with minimum noise generation. The program had three basic elements: (1) static testing of a scale wing section to demonstrate augmentor performance and noise characteristics; (2) two-dimensional wind tunnel testing to determine flight speed effects on performance; and (3) system design and evaluation which integrated the augmentor information obtained into a complete system and ensured that the design was compatible with the requirements for a large STOL transport having a 500-ft sideline noise of 95 PNdB or less. This objective has been achieved.
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
Spatial Distribution of Fate and Transport Parameters Using Cxtfit in a Karstified Limestone Model
NASA Astrophysics Data System (ADS)
Toro, J.; Padilla, I. Y.
2017-12-01
Karst environments have a high capacity to transport and store large amounts of water. This makes karst aquifers a productive resource for human consumption and ecological integrity, but also makes them vulnerable to potential contamination by hazardous chemical substances. The high heterogeneity and anisotropy of karst aquifer properties make them very difficult to characterize for accurate prediction of contaminant mobility and persistence in groundwater. Current technologies to characterize and quantify flow and transport processes at field scale are limited by the low resolution of spatiotemporal data. To enhance this resolution and provide the essential knowledge of karst groundwater systems, studies at laboratory scale can be conducted. This work uses an intermediate karstified lab-scale physical model (IKLPM) to study fate and transport processes and assess viable tools to characterize heterogeneities in karst systems. Transport experiments are conducted in the IKLPM using step injections of calcium chloride, uranine, and rhodamine WT tracers. Temporal concentration distributions (TCDs) obtained from the experiments are analyzed using the method of moments and CXTFIT to quantify fate and transport parameters in the system at various flow rates. The spatial distribution of the estimated fate and transport parameters for the tracers revealed high variability related to preferential flow heterogeneities and scale dependence. Results are integrated to define spatially-variable transport regions within the system and assess their fate and transport characteristics.
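For readers unfamiliar with the method of moments used here, the sketch below estimates a mean velocity and a dispersion coefficient from the temporal moments of a breakthrough curve; the Gaussian pulse, time grid, and travel distance are synthetic stand-ins, not data from this study, and the dispersion formula is the standard small-dispersion approximation:

```python
import numpy as np

# Synthetic breakthrough curve from a pulse tracer injection:
# times in minutes, concentrations in arbitrary units.
dt = 0.5
t = np.arange(0.0, 120.0, dt)
c = np.exp(-((t - 40.0) ** 2) / (2.0 * 8.0**2))

m0 = (c * dt).sum()                        # zeroth temporal moment (mass)
mu = (t * c * dt).sum() / m0               # first moment: mean arrival time
var = ((t - mu) ** 2 * c * dt).sum() / m0  # second central moment

L = 1.5                          # assumed travel distance to the port (m)
v = L / mu                       # mean velocity
D = v * L * var / (2.0 * mu**2)  # dispersion coefficient (small-dispersion approx.)
print(f"v = {v:.4f} m/min, D = {D:.6f} m^2/min")
```

CXTFIT fits the full advection-dispersion solution to the curve instead, but moment estimates like these are useful starting values and sanity checks.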
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
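A toy illustration of the partitioning idea (tile the space, replicate objects that cross tile borders into every tile they touch, then de-duplicate after per-tile processing); the data and tile size are invented, and this is not the Hadoop-GIS API:

```python
from collections import defaultdict

TILE = 10.0  # tile edge length

def tiles_for(box):
    """Yield the ids (ix, iy) of every tile overlapped by a bounding box."""
    x0, y0, x1, y1 = box
    for ix in range(int(x0 // TILE), int(x1 // TILE) + 1):
        for iy in range(int(y0 // TILE), int(y1 // TILE) + 1):
            yield (ix, iy)

# Object id -> bounding box (x0, y0, x1, y1); "b" straddles tile borders.
objects = {"a": (1, 1, 3, 3), "b": (8, 8, 12, 11), "c": (25, 5, 27, 6)}

partitions = defaultdict(list)
for oid, box in objects.items():
    for tile in tiles_for(box):       # boundary objects are replicated
        partitions[tile].append(oid)

# Per-tile work would run as parallel map tasks; merged results must be
# de-duplicated, since replicated objects like "b" appear in several tiles.
seen = set()
for tile, oids in sorted(partitions.items()):
    for oid in oids:
        if oid not in seen:
            seen.add(oid)
            print(f"process {oid} (first seen in tile {tile})")
```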
Critical Problems in Very Large Scale Computer Systems
1990-03-31
Investigators: Srinivas Devadas, Thomas F. Knight, Jr., F. Thomson Leighton, Charles E. Leiserson, Jacob K...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komomua, C.; Kroposki, B.; Mooney, D.
2009-01-01
On October 9, 2008, NREL hosted a workshop to provide an opportunity for external stakeholders to offer insights and recommendations on the design and functionality of DOE's planned Energy Systems Infrastructure Facility (ESIF). The goal was to ensure that the planning for the ESIF effectively addresses the most critical barriers to large-scale energy efficiency (EE) and renewable energy (RE) deployment. This technical report documents the ESIF workshop proceedings.
In-orbit assembly mission for the Space Solar Power Station
NASA Astrophysics Data System (ADS)
Cheng, ZhengAi; Hou, Xinbin; Zhang, Xinghua; Zhou, Lu; Guo, Jifeng; Song, Chunlin
2016-12-01
The Space Solar Power Station (SSPS) is a large spacecraft that utilizes solar power in space to supply power to an electric grid on Earth. A large symmetrical integrated concept has been proposed by the China Academy of Space Technology (CAST). Considering its large scale, the SSPS requires a modular design and unitized general interfaces so that it can be assembled in orbit. The facilities system supporting assembly procedures, which includes a Reusable Heavy Lift Launch Vehicle, orbital transfer and space robots, is introduced. An integrated assembly scheme utilizing space robots to realize this platform SSPS concept is presented. This paper gives a preliminary discussion of the minimized time and energy cost of the assembly mission under the best sequence and route. This optimized assembly mission planning allows the SSPS to be built in orbit rapidly, effectively and reliably.
Integrated bioprocess for conversion of gaseous substrates to liquids
Hu, Peng; Chakraborty, Sagar; Kumar, Amit; Woolston, Benjamin; Liu, Hongjuan; Emerson, David; Stephanopoulos, Gregory
2016-01-01
In the quest for inexpensive feedstocks for the cost-effective production of liquid fuels, we have examined gaseous substrates that could be made available at low cost and sufficiently large scale for industrial fuel production. Here we introduce a new bioconversion scheme that effectively converts syngas, generated from gasification of coal, natural gas, or biomass, into lipids that can be used for biodiesel production. We present an integrated conversion method comprising a two-stage system. In the first stage, an anaerobic bioreactor converts mixtures of gases of CO2 and CO or H2 to acetic acid, using the anaerobic acetogen Moorella thermoacetica. The acetic acid product is fed as a substrate to a second bioreactor, where it is converted aerobically into lipids by an engineered oleaginous yeast, Yarrowia lipolytica. We first describe the process carried out in each reactor and then present an integrated system that produces microbial oil, using synthesis gas as input. The integrated continuous bench-scale reactor system produced 18 g/L of C16-C18 triacylglycerides directly from synthesis gas, with an overall productivity of 0.19 g·L⁻¹·h⁻¹ and a lipid content of 36%. Although suboptimal relative to the performance of the individual reactor components, the presented integrated system demonstrates the feasibility of substantial net fixation of carbon dioxide and conversion of gaseous feedstocks to lipids for biodiesel production. The system can be further optimized to approach the performance of its individual units so that it can be used for the economical conversion of waste gases from steel mills to valuable liquid fuels for transportation. PMID:26951649
Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor
2012-07-16
The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
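A minimal sketch of the SAX transform itself (z-normalize, reduce with piecewise aggregate approximation, then discretize against N(0,1) breakpoints); this is a generic illustration of the indexing idea, not the PostgreSQL data type described above:

```python
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Convert a numeric time series into a short SAX word."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)   # z-normalization
    # Piecewise aggregate approximation: mean of each of n_segments chunks.
    paa = x[: len(x) - len(x) % n_segments].reshape(n_segments, -1).mean(axis=1)
    # Breakpoints splitting N(0,1) into 4 equiprobable regions.
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

print(sax(np.sin(np.linspace(0, 2 * np.pi, 64))))  # an 8-letter word, e.g. 'cddcbaab'
```

Because many series map to the same short word, the words act as index keys: equality or prefix matches on words cheaply prune candidates before any exact comparison against the stored series.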
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that simply cannot complete the required computations. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-11-24
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.
NASA Astrophysics Data System (ADS)
Solichin
The importance of accurate measurement of forest biomass in Indonesia has been growing ever since climate change mitigation schemes, particularly the reduction of emissions from deforestation and forest degradation (REDD+) scheme, were constitutionally accepted by the government of Indonesia. The need for an accurate system of historical and actual forest monitoring has also become more pronounced, as such a system would afford a better understanding of the role of forests in climate change and allow for the quantification of the impact of activities implemented to reduce greenhouse gas emissions. The aim of this study was to enhance the accuracy of estimations of carbon stocks and to monitor emissions in tropical forests. The research encompassed various scales (from trees and stands to landscape scales) and a wide range of aspects, from evaluation and development of allometric equations to exploration of the potential of existing forest inventory databases and evaluation of cutting-edge technology for non-destructive sampling and accurate forest biomass mapping over large areas. In this study, I explored whether accuracy--especially regarding the identification and reduction of bias--of forest aboveground biomass (AGB) estimates in Indonesia could be improved through (1) development and refinement of allometric equations for major forest types, (2) integration of existing large forest inventory datasets, (3) assessment of non-destructive sampling techniques for tree AGB measurement, and (4) landscape-scale mapping of AGB and forest cover using lidar. This thesis provides essential foundations to improve the estimation of forest AGB at tree scale through the development of new AGB equations for several major forest types in Indonesia. I successfully developed new allometric equations using large datasets from various forest types that enable us to estimate tree aboveground biomass with both forest-type-specific and generic equations. My models outperformed the existing local equations, with lower bias and higher precision of the AGB estimates. This study also highlights the potential advantages and challenges of using terrestrial lidar and the acoustic velocity tool for non-destructive sampling of tree biomass, enabling more sample collection without the felling of trees. Further, I explored whether existing forest inventories and permanent sample plot datasets can be integrated into Indonesia's existing carbon accounting system. My investigation found that, subject to quality assurance tests, these datasets should be integrated into national and provincial forest monitoring and carbon accounting systems. Integration of this information would eventually improve the accuracy of the estimates of forest carbon stocks, biomass growth, mortality, and emission factors from deforestation and forest degradation. At landscape scale, this study demonstrates the capability of airborne lidar for forest monitoring and forest cover classification in tropical peat swamp ecosystems. The mapping application using airborne lidar showed a more accurate and precise classification of land and forest cover when compared with mapping using optical and active sensors. To reduce the cost of lidar acquisition, this study assessed the optimum lidar return density for forest monitoring and found that the density could be reduced to as little as 1 return per 4 m².
Overall, this study provides essential scientific background to improve the accuracy of forest AGB estimates. Therefore, the described results and techniques should be integrated into the existing monitoring systems to assess emission reduction targets and the impact of REDD+ implementation.
Goldman, Alyssa W.; Burmeister, Yvonne; Cesnulevicius, Konstantin; Herbert, Martha; Kane, Mary; Lescheid, David; McCaffrey, Timothy; Schultz, Myron; Seilheimer, Bernd; Smit, Alta; St. Laurent, Georges; Berman, Brian
2015-01-01
Bioregulatory systems medicine (BrSM) is a paradigm that aims to advance current medical practices. The basic scientific and clinical tenets of this approach embrace an interconnected picture of human health, supported largely by recent advances in systems biology and genomics, and focus on the implications of multi-scale interconnectivity for improving therapeutic approaches to disease. This article introduces the formal incorporation of these scientific and clinical elements into a cohesive theoretical model of the BrSM approach. The authors review this integrated body of knowledge and discuss how the emergent conceptual model offers the medical field a new avenue for extending the armamentarium of current treatment and healthcare, with the ultimate goal of improving population health. PMID:26347656
A Network Model of the Emotional Brain.
Pessoa, Luiz
2017-05-01
Emotion is often understood in terms of a circumscribed set of cortical and subcortical brain regions. I propose, instead, that emotion should be understood in terms of large-scale network interactions spanning the entire neuroaxis. I describe multiple anatomical and functional principles of brain organization that lead to the concept of 'functionally integrated systems', cortical-subcortical systems that anchor the organization of emotion in the brain. The proposal is illustrated by describing the cortex-amygdala integrated system and how it intersects with systems involving the ventral striatum/accumbens, septum, hippocampus, hypothalamus, and brainstem. The important role of the thalamus is also highlighted. Overall, the model clarifies why the impact of emotion is wide-ranging, and how emotion is interlocked with perception, cognition, motivation, and action. Copyright © 2017 Elsevier Ltd. All rights reserved.
Intelligent systems engineering methodology
NASA Technical Reports Server (NTRS)
Fouse, Scott
1990-01-01
An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.
Modular microfluidic systems using reversibly attached PDMS fluid control modules
NASA Astrophysics Data System (ADS)
Skafte-Pedersen, Peder; Sip, Christopher G.; Folch, Albert; Dufva, Martin
2013-05-01
The use of soft lithography-based poly(dimethylsiloxane) (PDMS) valve systems is the dominating approach for high-density microscale fluidic control. Integrated systems enable complex flow control and large-scale integration, but lack modularity. In contrast, modular systems are attractive alternatives to integration because they can be tailored for different applications piecewise and without redesigning every element of the system. We present a method for reversibly coupling hard materials to soft lithography defined systems through self-aligning O-ring features thereby enabling easy interfacing of complex-valve-based systems with simpler detachable units. Using this scheme, we demonstrate the seamless interfacing of a PDMS-based fluid control module with hard polymer chips. In our system, 32 self-aligning O-ring features protruding from the PDMS fluid control module form chip-to-control module interconnections which are sealed by tightening four screws. The interconnection method is robust and supports complex fluidic operations in the reversibly attached passive chip. In addition, we developed a double-sided molding method for fabricating PDMS devices with integrated through-holes. The versatile system facilitates a wide range of applications due to the modular approach, where application specific passive chips can be readily attached to the flow control module.
NASA Technical Reports Server (NTRS)
Modesitt, Kenneth L.
1987-01-01
Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.
NASA Technical Reports Server (NTRS)
Shiva, S. G.; Shah, A. M.
1980-01-01
The details of digital systems can be conveniently input into the design automation system by means of hardware description language (HDL). The computer aided design and test (CADAT) system at NASA MSFC is used for the LSI design. The digital design language (DDL) was selected as HDL for the CADAT System. DDL translator output can be used for the hardware implementation of the digital design. Problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system are addressed.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, and use them toward running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
Maurer, S A; Kussmann, J; Ochsenfeld, C
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm that has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.
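The key identity behind such Laplace-transform reformulations is the quadrature representation of the orbital-energy denominator; the notation below is the standard textbook form, restated for context rather than quoted from the paper:

```latex
E_{\text{SOS-MP2}}
  = -\,c_{\text{os}} \sum_{ijab}
    \frac{(ia|jb)^{2}}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j},
\qquad
\frac{1}{\Delta}
  = \int_{0}^{\infty} e^{-\Delta t}\,\mathrm{d}t
  \;\approx\; \sum_{\alpha=1}^{\tau} w_\alpha\, e^{-\Delta t_\alpha},
\quad
\Delta = \varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j > 0 .
```

Because each quadrature exponential factorizes over the four orbital indices, the summations decouple and can be evaluated in the atomic-orbital basis with sparse and RI intermediates, which is what removes two powers of N from the formal scaling.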
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, meaning their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries spanning several million square kilometers. This paper presents the research and technological development to support national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The work required many data sources and their integration into a high-accuracy, quality-checked product, which in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic locations across the country.
Advanced spacecraft: What will they look like and why
NASA Technical Reports Server (NTRS)
Price, Humphrey W.
1990-01-01
The next century of spaceflight will witness an expansion in the physical scale of spacecraft, from the extreme of the microspacecraft to the very large megaspacecraft. This will respectively spawn advances in highly integrated and miniaturized components, and also advances in lightweight structures, space fabrication, and exotic control systems. Challenges are also presented by the advent of advanced propulsion systems, many of which require controlling and directing hot plasma, dissipating large amounts of waste heat, and handling very high radiation sources. Vehicle configuration studies for a number of these types of advanced spacecraft were performed, and some of them are presented along with the rationale for their physical layouts.
Nakazato, Kazuo
2014-03-28
By integrating chemical reactions on a large-scale integration (LSI) chip, new types of device can be created. For biomedical applications, monolithically integrated sensor arrays for potentiometric, amperometric and impedimetric sensing of biomolecules have been developed. The potentiometric sensor array detects pH and redox reaction as a statistical distribution of fluctuations in time and space. For the amperometric sensor array, a microelectrode structure for measuring multiple currents at high speed has been proposed. The impedimetric sensor array is designed to measure impedance up to 10 MHz. The multimodal sensor array will enable synthetic analysis and make it possible to standardize biosensor chips. Another approach is to create new functional devices by integrating molecular systems with LSI chips, for example image sensors that incorporate biological materials with a sensor array. The quantum yield of the photoelectric conversion of photosynthesis is 100%, which is extremely difficult to achieve by artificial means. In a recently developed process, a molecular wire is plugged directly into a biological photosynthetic system to efficiently conduct electrons to a gold electrode. A single photon can be detected at room temperature using such a system combined with a molecular single-electron transistor.
Visualization and Analysis of Multi-scale Land Surface Products via Giovanni Portals
NASA Technical Reports Server (NTRS)
Shen, Suhung; Kempler, Steven J.; Gerasimov, Irina V.
2013-01-01
Large volumes of MODIS land data products at multiple spatial resolutions have been integrated into the Giovanni online analysis system to support studies on land cover and land use changes, focused on the Northern Eurasia and Monsoon Asia regions through the LCLUC program. Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), providing a simple and intuitive way to visualize, analyze, and access Earth science remotely-sensed and modeled data. Customized Giovanni Web portals (Giovanni-NEESPI and Giovanni-MAIRS) have been created to integrate land, atmospheric, cryospheric, and societal products, enabling researchers to do quick exploration and basic analyses of land surface changes, and their relationships to climate, at global and regional scales. This presentation shows a sample Giovanni portal page, lists selected data products in the system, and illustrates potential analyses with images and time series at global and regional scales, focusing on climatology and anomaly analysis. More information is available at the GES DISC MAIRS data support project portal: http://disc.sci.gsfc.nasa.gov/mairs.
Study on data model of large-scale urban and rural integrated cadastre
NASA Astrophysics Data System (ADS)
Peng, Liangyong; Huang, Quanyi; Gao, Dequan
2008-10-01
Urban and Rural Integrated Cadastre (URIC) has been the subject of great interest for modern cadastre management. It is highly desirable to develop a rational data model for establishing an information system of URIC. In this paper, firstly, the old cadastral management mode in China is introduced, its limitations are analyzed, and the conception of URIC and its development course in China are described. Afterwards, based on the requirements of cadastre management in developed regions, the goal of URIC and two key ideas for realizing it are proposed. Then, the conceptual management mode is studied and a data model of URIC is designed. At last, based on the raw data of a land use survey at a scale of 1:1000 and an urban conventional cadastral survey at a scale of 1:500 in Jiangyin city, a well-defined information system of URIC was established according to the data model, and a uniform management of land use, use rights, and landownership in urban and rural areas was successfully realized. Its feasibility and practicability were well proven.
Lean Big Data integration in systems biology and systems pharmacology.
Ma'ayan, Avi; Rouillard, Andrew D; Clark, Neil R; Wang, Zichen; Duan, Qiaonan; Kou, Yan
2014-09-01
Data sets from recent large-scale projects can be integrated into one unified puzzle that can provide new insights into how drugs and genetic perturbations applied to human cells are linked to whole-organism phenotypes. Data that report how drugs affect the phenotype of human cell lines and how drugs induce changes in gene and protein expression in human cell lines can be combined with knowledge about human disease, side effects induced by drugs, and mouse phenotypes. Such data integration efforts can be achieved through the conversion of data from the various resources into single-node-type networks, gene-set libraries, or multipartite graphs. This approach can lead us to the identification of more relationships between genes, drugs, and phenotypes as well as benchmark computational and experimental methods. Overall, this lean 'Big Data' integration strategy will bring us closer toward the goal of realizing personalized medicine. Copyright © 2014 Elsevier Ltd. All rights reserved.
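The gene-set conversion step described above lends itself to a compact illustration. The following sketch links a drug signature to a phenotype gene set by overlap enrichment (a one-sided hypergeometric test); the gene sets, names, and the 20,000-gene universe are hypothetical stand-ins, not data from the cited resources.

```python
# Sketch: converting heterogeneous annotations into gene-set libraries and
# linking drugs to phenotypes by gene-set overlap. Toy data, illustrative only.
from itertools import product
from math import comb

def fisher_p(overlap, n_a, n_b, universe):
    """One-sided hypergeometric p-value for the overlap of two gene sets."""
    p = 0.0
    for k in range(overlap, min(n_a, n_b) + 1):
        p += comb(n_a, k) * comb(universe - n_a, n_b - k) / comb(universe, n_b)
    return p

# Gene-set libraries: term -> set of genes (hypothetical toy data)
drug_signatures = {"drugX_up": {"TP53", "EGFR", "MYC"}}
phenotype_sets  = {"phenoY":   {"EGFR", "MYC", "KRAS", "BRAF"}}

UNIVERSE = 20000  # approximate number of human genes
for (drug, dg), (pheno, pg) in product(drug_signatures.items(), phenotype_sets.items()):
    k = len(dg & pg)
    print(drug, pheno, k, fisher_p(k, len(dg), len(pg), UNIVERSE))
```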
Stepwise Connectivity of the Modal Cortex Reveals the Multimodal Organization of the Human Brain
Sepulcre, Jorge; Sabuncu, Mert R.; Yeo, Thomas B.; Liu, Hesheng; Johnson, Keith A.
2012-01-01
How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological and neuroimaging research in multimodal integration have stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipito-temporal, prefrontal and posterior parietal cortices have been associated with sensory multimodal processing. Even though this, rather patchy, organization of brain regions gives us a glimpse of the perceptual convergence, the articulation of the flow of information from modality-related to the more parallel cognitive processing systems remains elusive. Using a method called Stepwise Functional Connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain. PMID:22855814
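As a rough illustration of the idea behind stepwise connectivity (counting paths of increasing length from a set of seed regions, via the adjacency-matrix power identity), here is a toy sketch; the actual pipeline thresholds fMRI correlation matrices and normalizes the resulting maps, which is not reproduced here.

```python
# Sketch of the core idea: the number of paths of length n between nodes
# i and j equals (A**n)[i, j] for a binary adjacency matrix A.
import numpy as np

def stepwise_degree(A, seeds, n_steps):
    """For each step n, sum paths of length n from seed nodes to every node."""
    A = (A > 0).astype(float)
    An, maps = np.eye(len(A)), []
    for _ in range(n_steps):
        An = An @ A                      # A**n: path counts of length n
        maps.append(An[seeds].sum(axis=0))
    return np.array(maps)                # shape: (n_steps, n_nodes)

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(stepwise_degree(A, seeds=[0], n_steps=3))
```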
Musical expertise is related to altered functional connectivity during audiovisual integration
Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo
2015-01-01
The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305
Feasibility of Integrated Menu Recommendation and Self-Order System for Small-Scale Restaurants
NASA Astrophysics Data System (ADS)
Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki
2010-10-01
In recent years, point-of-sale (POS) systems with an order function have been developed for restaurants. Since expensive apparatus and systems are required to install POS systems, usually only large-scale restaurant chains can afford to introduce them. In this research, we consider POS management in a restaurant that incorporates an automatic order function using personal digital devices, aiming at food safety, better service, and further operational efficiency improvements in tasks such as food management, accounting, and ordering. Traditional POS systems do not take information recommendation technology into consideration. By applying information recommendation technology, we realize menu recommendation according to the user's preferences using rough sets, and menu planning based on stock status (see the sketch below). We therefore believe that this system can be used with confidence with regard to freshness of foods, allergies, diabetes, etc. Furthermore, because such operational efficiency improvements reduce personnel expenses, the technology becomes feasible even for small-scale stores.
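To make the rough-set recommendation idea concrete, a minimal sketch follows: past orders are grouped by indiscernible user attributes, and a menu is "certainly" recommended where its attribute block is unambiguous (lower approximation) and "possibly" recommended where it merely appears (upper approximation). All data and names are hypothetical.

```python
# Sketch: rough-set lower/upper approximations over an indiscernibility
# relation on user attributes. Toy records, illustrative only.
records = [  # (user attribute tuple, menu the user liked)
    (("spicy", "fish"), "menuA"), (("spicy", "fish"), "menuC"),
    (("mild", "veg"), "menuA"), (("mild", "veg"), "menuA"),
    (("mild", "meat"), "menuB"),
]
target = "menuA"
blocks = {}
for attrs, menu in records:
    blocks.setdefault(attrs, set()).add(menu)  # group by identical attributes

lower = {a for a, m in blocks.items() if m == {target}}  # certainly menuA
upper = {a for a, m in blocks.items() if target in m}    # possibly menuA
print("certainly recommend to:", lower)
print("possibly recommend to:", upper)
```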
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice, and the secondary use of medical records has become increasingly important. It relies on the ability to retrieve complete information about desired patient populations, and how to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a major challenge. We therefore propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index building method and build a distributed search cluster: the former improves the performance of index building, and the latter provides highly concurrent online TCMRs retrieval. Second, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method that enhances availability and universality by displaying medical reports via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
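A minimal sketch of two of the components named above, semantics-based query expansion and multi-factor ranking, is given below; the synonym table, weighting scheme, and records are hypothetical illustrations, not the paper's actual models, which run on a distributed search cluster.

```python
# Sketch: expand query terms via a synonym map, then rank records by a
# weighted combination of text match and recency. All data hypothetical.
SYNONYMS = {"headache": ["cephalalgia"], "ginseng": ["renshen"]}

def expand(query_terms):
    expanded = set(query_terms)
    for t in query_terms:
        expanded.update(SYNONYMS.get(t, []))
    return expanded

def rank(records, terms, w_text=0.7, w_recency=0.3):
    """Combine text-match score and recency into a single ranking score."""
    def score(rec):
        text = sum(rec["text"].count(t) for t in terms) / (len(rec["text"]) + 1)
        recency = 1.0 / (1 + 2018 - rec["year"])  # newer records score higher
        return w_text * text + w_recency * recency
    return sorted(records, key=score, reverse=True)

records = [{"text": "renshen decoction for cephalalgia", "year": 2016},
           {"text": "ginseng tonic", "year": 2009}]
print(rank(records, expand({"headache", "ginseng"})))
```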
Simulation of a spiking neuron circuit using carbon nanotube transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najari, Montassar, E-mail: malnjar@jazanu.edu.sa; IKCE unit, Jazan University, Jazan; El-Grour, Tarek, E-mail: grour-tarek@hotmail.fr
2016-06-10
Neuromorphic engineering builds on the analogies between the physics of semiconductor VLSI (Very Large Scale Integration) circuits and biophysics. Neuromorphic systems aim to reproduce the structure and function of biological neural systems so as to transfer their computational capabilities onto silicon. Since the pioneering research of Carver Mead, neuromorphic engineering has continued to produce remarkable implementations of biological systems. This work presents a simulation of an elementary neuron cell in a carbon nanotube field-effect transistor (CNTFET) based technology. The simulated neuron cell model is the integrate-and-fire (I&F) model, first introduced by G. Indiveri in 2009. The circuit was simulated with CNTFET technology in the ADS environment to verify its neuromorphic activity in terms of membrane potential. This work demonstrates the potential of this emerging device, i.e., the CNTFET, for the design of such architectures in terms of power consumption and technology integration density.
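For readers unfamiliar with the cell model, the following is a minimal leaky integrate-and-fire simulation at the behavioral level (plain Python, not a CNTFET circuit netlist); all parameter values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (I&F) membrane model. Parameters are
# illustrative, not fitted to any CNTFET implementation.
dt, T = 1e-4, 0.1                  # time step and duration (s)
C, g_leak = 1e-9, 50e-9            # membrane capacitance (F) and leak (S)
V_rest, V_thresh, V_reset = -0.065, -0.050, -0.065  # volts
I_in = 1.2e-9                      # constant input current (A)

V, spikes = V_rest, []
for step in range(int(T / dt)):
    # dV/dt = (-g_leak*(V - V_rest) + I_in) / C
    V += dt * (-g_leak * (V - V_rest) + I_in) / C
    if V >= V_thresh:              # threshold crossing: emit spike, reset
        spikes.append(step * dt)
        V = V_reset
print(f"{len(spikes)} spikes in {T} s")
```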
Thermal-Acoustic Analysis of a Metallic Integrated Thermal Protection System Structure
NASA Technical Reports Server (NTRS)
Behnke, Marlana N.; Sharma, Anurag; Przekop, Adam; Rizzi, Stephen A.
2010-01-01
A study is undertaken to investigate the response of a representative integrated thermal protection system structure under combined thermal, aerodynamic pressure, and acoustic loadings. A two-step procedure is offered and consists of a heat transfer analysis followed by a nonlinear dynamic analysis under a combined loading environment. Both analyses are carried out in physical degrees-of-freedom using implicit and explicit solution techniques available in the Abaqus commercial finite-element code. The initial study is conducted on a reduced-size structure to keep the computational effort contained while validating the procedure and exploring the effects of individual loadings. An analysis of a full size integrated thermal protection system structure, which is of ultimate interest, is subsequently presented. The procedure is demonstrated to be a viable approach for analysis of spacecraft and hypersonic vehicle structures under a typical mission cycle with combined loadings characterized by largely different time-scales.
Mountain hydrology of the western United States
Bales, Roger C.; Molotch, Noah P.; Painter, Thomas H; Dettinger, Michael D.; Rice, Robert; Dozier, Jeff
2006-01-01
Climate change and climate variability, population growth, and land use change drive the need for new hydrologic knowledge and understanding. In the mountainous West and other similar areas worldwide, three pressing hydrologic needs stand out: first, to better understand the processes controlling the partitioning of energy and water fluxes within and out from these systems; second, to better understand feedbacks between hydrological fluxes and biogeochemical and ecological processes; and, third, to enhance our physical and empirical understanding with integrated measurement strategies and information systems. We envision an integrative approach to monitoring, modeling, and sensing the mountain environment that will improve understanding and prediction of hydrologic fluxes and processes. Here extensive monitoring of energy fluxes and hydrologic states are needed to supplement existing measurements, which are largely limited to streamflow and snow water equivalent. Ground‐based observing systems must be explicitly designed for integration with remotely sensed data and for scaling up to basins and whole ranges.
Challenges and opportunities of power systems from smart homes to super-grids.
Kuhn, Philipp; Huber, Matthias; Dorfner, Johannes; Hamacher, Thomas
2016-01-01
The world's power systems are facing a structural change including liberalization of markets and integration of renewable energy sources. This paper describes the challenges that lie ahead in this process and points out avenues for overcoming different problems at different scopes, ranging from individual homes to international super-grids. We apply energy system models at those different scopes and find a trade-off between technical and social complexity. Small-scale systems would require technological breakthroughs, especially for storage, but individual agents can and do already start to build and operate such systems. In contrast, large-scale systems could potentially be more efficient from a techno-economic point of view. However, new political frameworks are required that enable long-term cooperation among sovereign entities through mutual trust. Which scope first achieves its breakthrough is not clear yet.
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-01-01
With the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and enable efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600
Research on precision grinding technology of large scale and ultra thin optics
NASA Astrophysics Data System (ADS)
Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua
2018-03-01
The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. In order to realize high-precision grinding of these ductile elements, a low-deformation vacuum chuck was first designed to clamp the optics with high supporting rigidity over the full aperture. The optics was then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optics was measured on-machine with a displacement sensor after elastic restitution. The flatness was converged to high accuracy by compensation machining, whose trajectories were generated from the measurement result. To obtain high parallelism, the optics was turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on large-scale, ultra-thin fused silica optics with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optics was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.
An integrated assessment of location-dependent scaling for microalgae biofuel production facilities
Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...
2014-06-19
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific "optimum" facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.
Universality of local dissipation scales in buoyancy-driven turbulence.
Zhou, Quan; Xia, Ke-Qing
2010-03-26
We report an experimental investigation of the local dissipation scale field η in turbulent thermal convection. Our results reveal two types of universality of η. The first is that, for the same flow, the probability density functions (PDFs) of η are insensitive to turbulent intensity and to large-scale inhomogeneity and anisotropy of the system. The second is that the small-scale dissipation dynamics in buoyancy-driven turbulence can be described by the same models developed for homogeneous and isotropic turbulence. However, the exact functional form of the PDF of the local dissipation scale is not universal with respect to different types of flows, but depends on the integral-scale velocity boundary condition, which is found to have an exponential, rather than Gaussian, distribution in turbulent Rayleigh-Bénard convection.
Large-scale, high-density (up to 512 channels) recording of local circuits in behaving animals
Berényi, Antal; Somogyvári, Zoltán; Nagy, Anett J.; Roux, Lisa; Long, John D.; Fujisawa, Shigeyoshi; Stark, Eran; Leonardo, Anthony; Harris, Timothy D.
2013-01-01
Monitoring representative fractions of neurons from multiple brain circuits in behaving animals is necessary for understanding neuronal computation. Here, we describe a system that allows high-channel-count recordings from a small volume of neuronal tissue using a lightweight signal multiplexing headstage that permits free behavior of small rodents. The system integrates multishank, high-density recording silicon probes, ultraflexible interconnects, and a miniaturized microdrive. These improvements allowed for simultaneous recordings of local field potentials and unit activity from hundreds of sites without confining free movements of the animal. The advantages of large-scale recordings are illustrated by determining the electroanatomic boundaries of layers and regions in the hippocampus and neocortex and constructing a circuit diagram of functional connections among neurons in real anatomic space. These methods will allow the investigation of circuit operations and behavior-dependent interregional interactions for testing hypotheses of neural networks and brain function. PMID:24353300
Huang, Yongjun; Flores, Jaime Gonzalo Flor; Cai, Ziqiang; Yu, Mingbin; Kwong, Dim-Lee; Wen, Guangjun; Churchill, Layne; Wong, Chee Wei
2017-06-29
For sensitive high-resolution force- and field-sensing applications, large-mass microelectromechanical system (MEMS) and optomechanical cavities have been proposed to realize sub-aN/√Hz resolution levels. For optomechanical cavity-based force and field sensors, the optomechanical coupling is the key parameter for achieving high sensitivity and resolution. Here we demonstrate a chip-scale optomechanical cavity with large mass which operates at a ≈77.7 kHz fundamental mode and intrinsically exhibits large optomechanical coupling of 44 GHz/nm or more for both optical resonance modes. A mechanical stiffening range of ≈58 kHz and more than 100th-order harmonics are obtained, with which the free-running frequency instability is lower than 10^-6 at 100 ms integration time. Such results can be applied to further improve the sensing performance of optomechanically inspired chip-scale sensors.
Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals
Schmittfull, Marcel; Vlah, Zvonimir
2016-11-28
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
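The position-space trick can be stated compactly. For a generic one-loop-style mode-coupling integral (a simpler analogue of the two-loop terms treated in the paper, shown here as a textbook identity rather than the paper's specific kernels), the convolution theorem gives

$$\int \frac{d^3q}{(2\pi)^3}\, P(q)\, P(|\mathbf{k}-\mathbf{q}|) \;=\; \int d^3r\; e^{-i\mathbf{k}\cdot\mathbf{r}}\, \xi(r)^2, \qquad \xi(r) \;=\; \int_0^\infty \frac{q^2\, dq}{2\pi^2}\, P(q)\, \frac{\sin qr}{qr},$$

so the convolution becomes a pointwise product in position space, and each remaining transform is a one-dimensional radial integral that a 1D FFT (FFTLog-style) can evaluate.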
Sensing systems using chip-based spectrometers
NASA Astrophysics Data System (ADS)
Nitkowski, Arthur; Preston, Kyle J.; Sherwood-Droz, Nicolás.; Behr, Bradford B.; Bismilla, Yusuf; Cenko, Andrew T.; DesRoches, Brandon; Meade, Jeffrey T.; Munro, Elizabeth A.; Slaa, Jared; Schmidt, Bradley S.; Hajian, Arsen R.
2014-06-01
Tornado Spectral Systems has developed a new chip-based spectrometer called OCTANE, the Optical Coherence Tomography Advanced Nanophotonic Engine, built using a planar lightwave circuit with integrated waveguides fabricated on a silicon wafer. While designed for spectral domain optical coherence tomography (SD-OCT) systems, the same miniaturized technology can be applied to many other spectroscopic applications. The field of integrated optics enables the design of complex optical systems which are monolithically integrated on silicon chips. The form factors of these systems can be significantly smaller, more robust and less expensive than their equivalent free-space counterparts. Fabrication techniques and material systems developed for microelectronics have previously been adapted for integrated optics in the telecom industry, where millions of chip-based components are used to power the optical backbone of the internet. We have further adapted the photonic technology platform for spectroscopy applications, allowing unheard-of economies of scale for these types of optical devices. Instead of changing lenses and aligning systems, these devices are accurately designed programmatically and are easily customized for specific applications. Spectrometers using integrated optics have large advantages in systems where size, robustness and cost matter: field-deployable devices, UAVs, UUVs, satellites, handheld scanning and more. We will discuss the performance characteristics of our chip-based spectrometers and the type of spectral sensing applications enabled by this technology.
Integrating optical finger motion tracking with surface touch events.
MacRitchie, Jennifer; McPherson, Andrew P
2015-01-01
This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction.
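As a sketch of the temporal-alignment step, the snippet below pairs touch events with the nearest camera frame after a clock-offset correction; the frame rate, offset, tolerance, and timestamps are hypothetical, and the paper's full method also covers spatial alignment, note segmentation, and fingering annotation.

```python
# Sketch: temporal alignment of two sensor streams by nearest-timestamp
# matching after a fixed clock-offset correction. Data are hypothetical.
import numpy as np

def align(camera_t, touch_t, offset=0.0, tol=0.005):
    """Pair each touch event with the nearest camera frame within tol (s)."""
    pairs = []
    for t in touch_t:
        i = int(np.argmin(np.abs(camera_t - (t + offset))))
        if abs(camera_t[i] - (t + offset)) <= tol:
            pairs.append((t, camera_t[i]))
    return pairs

camera_t = np.arange(0.0, 1.0, 1 / 200)       # 200 fps camera timestamps
touch_t = np.array([0.1004, 0.3507, 0.7503])  # touch-sensor event times
print(align(camera_t, touch_t))
```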
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.
With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is a virtually untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives, and a device layer that controls individual resources so that the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and the Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are obtained using a large-scale integrated transmission system model coupled with multiple distribution systems.
An innovative integrated oxidation ditch with vertical circle (IODVC) for wastewater treatment.
Xia, Shi-bin; Liu, Jun-xin
2004-01-01
The oxidation ditch process is economical and efficient for wastewater treatment, but its application is limited where land is costly because of the large land area it requires. An innovative integrated oxidation ditch with vertical circle (IODVC) system was developed to treat domestic and industrial wastewater with the aim of saving land area. The new system consists of a single channel divided into a top ditch and a bottom ditch by a plate, a brush, and an innovative integral clarifier. Different from the horizontal circulation of a conventional oxidation ditch, the flow in the IODVC system recirculates from the top zone to the bottom zone in a vertical circle as the brush runs; the IODVC thus reduces the land area required by about 50% compared with a conventional oxidation ditch with an intrachannel clarifier. The innovative integral clarifier is effective for liquid-solid separation and is preferably positioned at the end of the ditch opposite the brush. It does not affect the hydrodynamic characteristics of the mixed liquor in the ditch, and the sludge can return automatically to the bottom ditch without any pump. In this study, experiments on domestic and dye wastewater treatment were carried out at bench scale and full scale, respectively. The results clearly showed that the IODVC efficiently removed pollutants: average COD removals for domestic and dye wastewater treatment were 95% and 90%, respectively. The IODVC process may provide a cost-effective way for full-scale dye wastewater treatment.
Design of energy storage system to improve inertial response for large scale PV generation
Wang, Xiaoyu; Yue, Meng
2016-07-01
With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated, and then BESS was incorporated to improve the frequency dynamics.
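The mechanism being compensated can be illustrated with a one-bus frequency-response toy model: after a loss of generation, a BESS injecting power in proportion to the frequency deviation raises the frequency nadir. This is a hedged sketch with illustrative parameters, not the paper's PSS/E model.

```python
# Sketch: grid frequency response to a generation loss, with and without a
# battery (BESS) injecting power proportional to the frequency deviation.
f0, H, D = 60.0, 4.0, 1.0   # nominal Hz, inertia constant (s), load damping (pu)
dP_loss = 0.05              # lost generation (pu of system base)
K_bess = 20.0               # BESS droop gain (pu per pu frequency deviation)

def simulate(use_bess, dt=0.01, T=10.0):
    df, nadir = 0.0, 0.0    # frequency deviation (pu) and its minimum
    for _ in range(int(T / dt)):
        p_bess = -K_bess * df if use_bess else 0.0
        # swing-type balance: 2H * d(df)/dt = -dP_loss + p_bess - D*df
        df += dt * (-dP_loss + p_bess - D * df) / (2 * H)
        nadir = min(nadir, df)
    return f0 * (1 + nadir)  # frequency nadir in Hz

print("nadir without BESS:", simulate(False))
print("nadir with BESS:   ", simulate(True))
```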
Abraham, Alyson; Housel, Lisa M; Lininger, Christianna N; Bock, David C; Jou, Jeffrey; Wang, Feng; West, Alan C; Marschilok, Amy C; Takeuchi, Kenneth J; Takeuchi, Esther S
2016-06-22
Electric energy storage systems such as batteries can significantly impact society in a variety of ways, including facilitating the widespread deployment of portable electronic devices, enabling the use of renewable energy generation for local off grid situations and providing the basis of highly efficient power grids integrated with energy production, large stationary batteries, and the excess capacity from electric vehicles. A critical challenge for electric energy storage is understanding the basic science associated with the gap between the usable output of energy storage systems and their theoretical energy contents. The goal of overcoming this inefficiency is to achieve more useful work (w) and minimize the generation of waste heat (q). Minimization of inefficiency can be approached at the macro level, where bulk parameters are identified and manipulated, with optimization as an ultimate goal. However, such a strategy may not provide insight toward the complexities of electric energy storage, especially the inherent heterogeneity of ion and electron flux contributing to the local resistances at numerous interfaces found at several scale lengths within a battery. Thus, the ability to predict and ultimately tune these complex systems to specific applications, both current and future, demands not just parametrization at the bulk scale but rather specific experimentation and understanding over multiple length scales within the same battery system, from the molecular scale to the mesoscale. Herein, we provide a case study examining the insights and implications from multiscale investigations of a prospective battery material, Fe3O4.
NASA Technical Reports Server (NTRS)
McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.
2012-01-01
The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improve system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.
MEMOSys: Bioinformatics platform for genome-scale metabolic models
2011-01-01
Background Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. Results MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. Conclusions We have developed a web-based system designed to provide researchers a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys. PMID:21276275
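Since MEMOSys exchanges models in SBML, a consumer-side sketch may help; this assumes the python-libsbml package and a hypothetical exported file name.

```python
# Sketch: reading an SBML export (such as those produced by MEMOSys)
# with python-libsbml. The file name "ecoli_core.xml" is hypothetical.
import libsbml

doc = libsbml.readSBML("ecoli_core.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
if model is None:
    raise SystemExit("no model parsed")
print(model.getId(), model.getNumSpecies(), model.getNumReactions())
for rxn in model.getListOfReactions():
    reactants = [s.getSpecies() for s in rxn.getListOfReactants()]
    products = [s.getSpecies() for s in rxn.getListOfProducts()]
    print(rxn.getId(), reactants, "->", products)
```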
An innovative large scale integration of silicon nanowire-based field effect transistors
NASA Astrophysics Data System (ADS)
Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.
2018-05-01
Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors offering label-free, portable and rapid detection. Nevertheless, their large-scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here the first integration of silicon nanowire networks, called nanonets, into long-channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back end of CMOS readout and promising electrical performance open new opportunities for sensing applications.
PGen: large-scale genomic variations analysis workflow and browser in SoyKB.
Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti
2016-10-06
With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and the Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB) ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage; 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 soybean germplasm lines, have also been integrated. These SNPs are being utilized for trait improvement using genotype-to-phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. The PGen workflow has been optimized for the most efficient analysis of soybean data through thorough testing and validation. This research serves as an example of best practices for developing genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. The PGen workflow can also be easily customized for analysis of data from other species.
System Dynamics Modeling of Transboundary Systems: The Bear River Basin Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerald Sehlke; Jake Jacobson
2005-09-01
System dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, system dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and the environmental analysis, and in various other areas in the natural and social sciences. The Idaho National Engineering and Environmental Laboratory, a multi-purpose national laboratory managed by the Department of Energy, has developed a systems dynamics model in order to evaluate its utility for modeling large complex hydrological systems. We modeled the Bear River Basin, a transboundary basin that includes portions of Idaho, Utah and Wyoming. We found that system dynamics modeling is very useful for integrating surface water and groundwater data and for simulating the interactions between these sources within a given basin. In addition, we also found system dynamics modeling is useful for integrating complex hydrologic data with other information (e.g., policy, regulatory and management criteria) to produce a decision support system. Such decision support systems can allow managers and stakeholders to better visualize the key hydrologic elements and management constraints in the basin, which enables them to better understand the system via the simulation of multiple "what-if" scenarios. Although system dynamics models can be developed to conduct traditional hydraulic/hydrologic surface water or groundwater modeling, we believe that their strength lies in their ability to quickly evaluate trends and cause–effect relationships in large-scale hydrological systems; for integrating disparate data; for incorporating output from traditional hydraulic/hydrologic models; and for integration of interdisciplinary data, information and criteria to support better management decisions.
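The stock-and-flow core of such a system dynamics model is easy to sketch. Below, a single reservoir stock is integrated over monthly steps under a simple release policy; the inflows, demands, and capacity are hypothetical illustrations, not Bear River data.

```python
# Sketch: a one-stock system dynamics water model. All values hypothetical.
def simulate(months, inflow, demand, storage=500.0, capacity=1000.0):
    history = []
    for m in range(months):
        release = min(demand[m], storage + inflow[m])  # meet demand if water allows
        storage = storage + inflow[m] - release        # integrate the stock
        if storage > capacity:                         # excess spills downstream
            release += storage - capacity
            storage = capacity
        history.append((m, storage, release))
    return history

inflow = [80, 120, 200, 150, 60, 40] * 2   # e.g. thousand acre-feet per month
demand = [70, 70, 90, 130, 150, 120] * 2
for m, s, r in simulate(12, inflow, demand):
    print(f"month {m:2d}: storage {s:7.1f}, release {r:6.1f}")
```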
NASA Technical Reports Server (NTRS)
McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.
2015-01-01
This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.
Miniaturized integration of a fluorescence microscope
Ghosh, Kunal K.; Burns, Laurie D.; Cocker, Eric D.; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J.
2013-01-01
The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals towards relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including semiconductor light source and sensor. This device enables high-speed cellular-level imaging across ∼0.5 mm2 areas in active mice. This capability allowed concurrent tracking of Ca2+ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca2+ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens. PMID:21909102
An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.
Forest-fire model as a supercritical dynamic model in financial systems
NASA Astrophysics Data System (ADS)
Lee, Deokjae; Kim, Jae-Young; Lee, Jeho; Kahng, B.
2015-02-01
Recently, large-scale cascading failures in complex systems have garnered substantial attention. Such extreme events have been treated as an integral part of self-organized criticality (SOC). Recent empirical work has suggested that some extreme events systematically deviate from the SOC paradigm, requiring a different theoretical framework. We shed additional theoretical light on this possibility by studying financial crises. We build our model of financial crises on the well-known forest fire model in scale-free networks. Our analysis shows a nontrivial scaling feature indicating supercritical behavior, which is independent of system size. Extreme events in the supercritical state result from the bursting of a fat bubble, the seeds of which are sown by a protracted period of a benign financial environment with few shocks. Our findings suggest that policymakers can control the magnitude of financial meltdowns by keeping the economy operating within a reasonable duration of a benign environment.
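For orientation, a minimal cascade step of a forest-fire-type model on a network is sketched below: a random spark burns the entire occupied cluster containing it. The random-graph substrate and densities are illustrative assumptions; the paper uses scale-free networks and its own growth dynamics.

```python
# Sketch: one cascade step of a forest-fire-type model on a network.
import random
from collections import deque

def burn_cluster(adj, occupied, spark):
    """Return the connected occupied cluster containing the sparked node."""
    if spark not in occupied:
        return set()
    seen, queue = {spark}, deque([spark])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in occupied and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

random.seed(1)
n = 1000
adj = {i: set() for i in range(n)}
for _ in range(2 * n):                    # a sparse random graph as a stand-in
    a, b = random.randrange(n), random.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)
occupied = {i for i in range(n) if random.random() < 0.6}
print("burned:", len(burn_cluster(adj, occupied, random.randrange(n))))
```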
Identification of Curie temperature distributions in magnetic particulate systems
NASA Astrophysics Data System (ADS)
Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.
2017-09-01
This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method are demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
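The underlying intuition can be demonstrated with a toy calculation: for weakly interacting grains, the derivative -dM/dT of a magnetisation curve acts as a crude proxy for the Curie temperature distribution. The sketch below uses synthetic data and a naive derivative, whereas the paper's estimator is integral in form and robust to noise.

```python
# Sketch: -dM/dT of a synthetic M(T) curve as a crude Tc-distribution proxy.
import numpy as np

T = np.linspace(300, 700, 401)                 # measurement temperatures (K)
rng = np.random.default_rng(0)
tcs = rng.normal(500, 30, 2000)                # grain Curie temperatures (K)
# Toy M(T): each grain's moment drops smoothly to zero near its own Tc.
M = np.array([(1 / (1 + np.exp((t - tcs) / 5.0))).sum() for t in T])

dist = -np.gradient(M / M[0], T)               # normalized -dM/dT
print("estimated mean Tc:", (T * dist).sum() / dist.sum())   # ~500 K
```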
An integrated network of Arabidopsis growth regulators and its use for gene prioritization.
Sabaghian, Ehsan; Drebert, Zuzanna; Inzé, Dirk; Saeys, Yvan
2015-12-01
Elucidating the molecular mechanisms that govern plant growth has been an important topic in plant research, and current advances in large-scale data generation call for computational tools that efficiently combine these different data sources to generate novel hypotheses. In this work, we present a novel, integrated network that combines multiple large-scale data sources to characterize growth regulatory genes in Arabidopsis, one of the main plant model organisms. The contributions of this work are twofold: first, we characterized a set of carefully selected growth regulators with respect to their connectivity patterns in the integrated network, and, subsequently, we explored to what extent these connectivity patterns can be used to suggest new growth regulators. Using a large-scale comparative study, we designed new supervised machine learning methods to prioritize growth regulators. Our results show that these methods significantly improve current state-of-the-art prioritization techniques, and are able to suggest meaningful new growth regulators. In addition, the integrated network is made available to the scientific community, providing a rich data source that will be useful for many biological processes, not necessarily restricted to plant growth.
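A baseline version of network-based prioritization, ranking candidates by weighted connectivity to known growth regulators, is sketched below; the edge weights and gene names are hypothetical, and the paper's supervised methods are substantially more sophisticated.

```python
# Sketch of a network prioritization baseline: score each candidate gene by
# its summed edge weight to known regulators. All names/weights hypothetical.
edges = {("G1", "GR_A"): 0.9, ("G1", "GR_B"): 0.4, ("G2", "GR_A"): 0.2,
         ("G2", "X"): 0.8, ("G3", "GR_B"): 0.7}
known_regulators = {"GR_A", "GR_B"}

def score(gene):
    return sum(w for (a, b), w in edges.items()
               if gene in (a, b) and ({a, b} - {gene}) & known_regulators)

candidates = ["G1", "G2", "G3"]
print(sorted(candidates, key=score, reverse=True))   # G1 ranks first
```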
Ten-channel InP-based large-scale photonic integrated transmitter fabricated by SAG technology
NASA Astrophysics Data System (ADS)
Zhang, Can; Zhu, Hongliang; Liang, Song; Cui, Xiao; Wang, Huitao; Zhao, Lingjuan; Wang, Wei
2014-12-01
A 10-channel InP-based large-scale photonic integrated transmitter was fabricated by selective area growth (SAG) technology combined with butt-joint regrowth (BJR) technology. The SAG technology was utilized to fabricate the electroabsorption-modulated distributed feedback (DFB) laser (EML) arrays at the same time. A coplanar electrode design for the electroabsorption modulator (EAM) was used for flip-chip bonding packaging. The lasing wavelength of each DFB laser can be tuned by an integrated micro-heater to match the ITU grid, which needs only one electrode pad. The average output power of each channel is 250 μW at an injection current of 200 mA. The static extinction ratios of the EAMs for the 10 channels tested range from 15 to 27 dB at a reverse bias of 6 V. The 3 dB bandwidth of each channel is around 14 GHz. The novel design and simple fabrication process show enormous potential for reducing the cost of large-scale photonic integrated circuit (LS-PIC) transmitters with high chip yields.
Taylor, Kimberly A.; Short, A.
2009-01-01
Integrating science into resource management activities is a goal of the CALFED Bay-Delta Program, a multi-agency effort to address water supply reliability, ecological condition, drinking water quality, and levees in the Sacramento-San Joaquin Delta of northern California. Under CALFED, many different strategies were used to integrate science, including interaction between the research and management communities, public dialogues about scientific work, and peer review. This paper explores ways science was (and was not) integrated into CALFED's management actions and decision systems through three narratives describing different patterns of scientific integration and application in CALFED. Though a collaborative process and certain organizational conditions may be necessary for developing new understandings of the system of interest, we find that those factors are not sufficient for translating that knowledge into management actions and decision systems. We suggest that the application of knowledge may be facilitated or hindered by (1) differences in the objectives, approaches, and cultures of scientists operating in the research community and those operating in the management community and (2) other factors external to the collaborative process and organization.
A Discussion of Issues in Integrity Constraint Monitoring
NASA Technical Reports Server (NTRS)
Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.
1998-01-01
In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through some form of manually embedded assertion check. The disadvantage of this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby software system properties (called integrity constraints) held in a repository are automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed; this review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
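The contrast between manually embedded assertions and repository-driven monitoring can be sketched in a few lines; the decorator-based mechanism below is a hypothetical illustration, not the paper's monitor.

```python
# Sketch: a repository of integrity constraints checked automatically after
# each monitored operation, versus a manually embedded assert. Hypothetical.
CONSTRAINTS = {  # repository of integrity constraints, kept outside the code
    "balance_nonnegative": lambda acct: acct["balance"] >= 0,
}

def monitored(*names):
    """Wrap a function so registered constraints are checked after it runs."""
    def deco(fn):
        def wrapper(acct, *args, **kwargs):
            result = fn(acct, *args, **kwargs)
            for name in names:
                if not CONSTRAINTS[name](acct):
                    raise AssertionError(f"constraint violated: {name}")
            return result
        return wrapper
    return deco

@monitored("balance_nonnegative")
def withdraw(acct, amount):
    # manual alternative: assert acct["balance"] - amount >= 0
    acct["balance"] -= amount

acct = {"balance": 100}
withdraw(acct, 30)               # passes the constraint check
try:
    withdraw(acct, 200)          # violates it
except AssertionError as e:
    print(e)                     # constraint violated: balance_nonnegative
```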
Ten Years of Analyzing the Duck Chart: How an NREL Discovery in 2008 Is ...
... examined how to plan for future large-scale integration of solar photovoltaic (PV) generation on the ... emerging energy and environmental policy initiatives pushing for higher levels of solar PV deployment. As a result, PV was deployed more widely, and system operators became increasingly concerned about how solar ...
The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook
NASA Technical Reports Server (NTRS)
Edge, T. M.
1978-01-01
Place, Route in 2-Dimensions (PR2D) is a standard-cell automatic layout computer program for generating large-scale integrated/metal-oxide-semiconductor arrays. The program was used successfully for a number of years in both the government and private sectors but had remained undocumented until now. The compilation, loading, and execution of the program on a Sigma V CP-V operating system are described.
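PR2D's own algorithms are not described in this record, so the following is only a generic illustration of the "place" half of a standard-cell layout tool: greedily packing cells of known widths into fixed-width rows:

```python
# Illustrative sketch only: a generic greedy standard-cell row placement,
# not PR2D's actual algorithm (which is undocumented here).
def place_in_rows(cells, row_width):
    """Greedily pack (name, width) cells into rows of fixed width."""
    rows, current, used = [], [], 0
    for name, width in cells:
        if used + width > row_width and current:
            rows.append(current)
            current, used = [], 0
        current.append(name)
        used += width
    if current:
        rows.append(current)
    return rows

cells = [("nand2", 4), ("inv", 2), ("dff", 8), ("nor2", 4), ("buf", 3)]
print(place_in_rows(cells, row_width=10))
# [['nand2', 'inv'], ['dff'], ['nor2', 'buf']]
```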
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-09
The 21st Century Power Partnership (21CPP) aims to accelerate the global transformation of power systems. The Power Partnership is a multilateral effort of the Clean Energy Ministerial (CEM) and serves as a platform for public-private collaboration to advance integrated policy, regulatory, financial, and technical solutions for the large-scale deployment of renewable energy in combination with deep energy efficiency and smart grid solutions. This fact sheet details the 21CPP's work in India.
Price, C; Briggs, K; Brown, P J
1999-01-01
Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community supports a strategic approach to evolutionary development and provides essential product differentiation, enabling terminology providers and systems suppliers to focus on end-user requirements.
NASA Technical Reports Server (NTRS)
Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)
1986-01-01
The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is described. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive, menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are provided.
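As a rough illustration of the class of computation such a system automates, the sketch below estimates a failure probability by plain Monte Carlo. The distributions are invented, and Monte Carlo here merely stands in for (rather than reproduces) NESSUS's fast probability integrator:

```python
# Sketch: probability that stress exceeds strength when both are uncertain.
# Distributions and parameters are assumptions for illustration only.
import random

def failure_probability(n=100_000, seed=1):
    random.seed(seed)
    failures = 0
    for _ in range(n):
        stress = random.gauss(300.0, 30.0)     # MPa, applied load effect
        strength = random.gauss(400.0, 40.0)   # MPa, material capacity
        if stress > strength:
            failures += 1
    return failures / n

# Analytically, strength - stress ~ N(100, 50), so P(failure) ~ 0.023.
print(f"P(failure) ~ {failure_probability():.4f}")
```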
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J.; Inzé, Dirk; Van de Peer, Yves
2013-01-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein–protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, thereby stimulating the application of text mining data in future plant biology studies. PMID:23532071
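A toy sketch of the integrative step described above, with fabricated gene identifiers and NetworkX's modularity communities standing in for the authors' clustering method:

```python
# Merge text-mined and curated interactions into one network, then pull out
# tightly connected clusters. Gene names and edges are fabricated.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

text_mined = [("AT1G01010", "AT1G01020"), ("AT1G01020", "AT1G01030")]
experimental = [("AT1G01010", "AT1G01030"), ("AT2G01008", "AT2G01021")]

G = nx.Graph()
G.add_edges_from(text_mined, source="text")
G.add_edges_from(experimental, source="curated")

# Clusters of tightly connected genes; unannotated members of a cluster
# inherit hypotheses from their neighbours ("guilt by association").
for community in greedy_modularity_communities(G):
    print(sorted(community))
```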
NASA Astrophysics Data System (ADS)
Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles
2016-01-01
The geotechnical challenges for safe slope design in large-scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large-scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system which was developed over a number of years and illustrates its advantages through a specific application. The presented case study illustrates how the high production slopes of a mine that exceed depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series data correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
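A minimal sketch of the kind of time-series correlation such a database supports, using synthetic numbers and pandas as an assumed toolchain (the paper does not name its analysis software):

```python
# Lagged correlation between daily rainfall and slope displacement; all
# values are synthetic and purely illustrative.
import pandas as pd

df = pd.DataFrame({
    "rain_mm": [0, 12, 30, 5, 0, 0, 22, 40, 8, 0],
    "disp_mm": [10, 11, 15, 18, 14, 12, 13, 19, 21, 16],
}, index=pd.date_range("2015-01-01", periods=10, freq="D"))

# Displacement often lags rainfall; test correlation at 0-2 day lags.
for lag in range(3):
    r = df["rain_mm"].corr(df["disp_mm"].shift(-lag))
    print(f"lag {lag} day(s): r = {r:.2f}")
```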
Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.
Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony
2009-01-01
Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
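For concreteness, a leaky-integrator unit of the kind the integrated models use can be stepped with forward Euler in a few lines. Parameters are illustrative, not taken from the hippocampus or basal ganglia models themselves:

```python
# Minimal leaky-integrator neuron stepped with forward Euler.
def simulate_leaky_integrator(inputs, tau=10.0, dt=1.0):
    """Return the activation trace for a list of input currents."""
    a, trace = 0.0, []
    for u in inputs:
        a += (dt / tau) * (-a + u)   # da/dt = (-a + u) / tau
        trace.append(a)
    return trace

trace = simulate_leaky_integrator([1.0] * 50)
print(f"activation after 50 steps: {trace[-1]:.3f}")  # approaches 1.0
```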
UK Environmental Prediction - integration and evaluation at the convective scale
NASA Astrophysics Data System (ADS)
Fallmann, Joachim; Lewis, Huw; Castillo, Juan Manuel; Pearson, David; Harris, Chris; Saulter, Andy; Bricheno, Lucy; Blyth, Eleanor
2016-04-01
It has long been understood that accurate prediction and warning of the impacts of severe weather requires an integrated approach to forecasting. For example, high impact weather is typically manifested through various interactions and feedbacks between different components of the Earth System. High winds can lead to significant damage from the large waves and storm surge along coastlines. The impact of intense rainfall can be translated, through saturated soils and land surface processes, into high river flows and flooding inland. The substantial impacts of such events on individuals, businesses and infrastructure indicate a pressing need to better understand the value that might be delivered through more integrated environmental prediction. To address this need, the Met Office, NERC Centre for Ecology & Hydrology and NERC National Oceanography Centre have begun to develop the foundations of a coupled high-resolution probabilistic forecast system for the UK at km-scale. This links together existing model components of the atmosphere, coastal ocean, land surface and hydrology. Our initial focus has been on a 2-year Prototype project to demonstrate the UK coupled prediction concept in research mode. This presentation will provide an update on UK environmental prediction activities. We will present results from the initial implementation of an atmosphere-land-ocean coupled system and discuss progress and initial results from further development to integrate wave interactions. We will discuss future directions and opportunities for collaboration in environmental prediction, and the challenges to realise the potential of integrated regional coupled forecasting for improving predictions and applications.
Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems.
Graham, Nicholas A J; McClanahan, Tim R; MacNeil, M Aaron; Wilson, Shaun K; Polunin, Nicholas V C; Jennings, Simon; Chabanet, Pascale; Clark, Susan; Spalding, Mark D; Letourneur, Yves; Bigot, Lionel; Galzin, René; Ohman, Marcus C; Garpe, Kajsa C; Edwards, Alasdair J; Sheppard, Charles R C
2008-08-27
Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species-dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef-associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean-scale integrity of these coral reef ecosystems has been lost, it is encouraging that the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scale. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km². The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
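A toy sketch of the auto-calibration idea, under stated assumptions: a two-parameter stand-in model, a Nash-Sutcliffe efficiency objective, and SciPy's differential evolution in place of the Shuffled Complex Evolution optimiser the tool actually uses:

```python
# Search model parameters that maximise a user-defined objective (NSE).
# The "model" is a toy two-parameter stand-in, not the river system model.
import numpy as np
from scipy.optimize import differential_evolution

observed = np.array([2.0, 3.5, 5.0, 4.0, 3.0, 2.5])
rain = np.array([1.0, 2.0, 3.0, 2.0, 1.5, 1.0])

def model(params):
    a, b = params                 # toy runoff model: Q = a * rain + b
    return a * rain + b

def neg_nse(params):              # minimise negative NSE = maximise NSE
    sim = model(params)
    return -(1 - np.sum((observed - sim) ** 2)
               / np.sum((observed - observed.mean()) ** 2))

result = differential_evolution(neg_nse, bounds=[(0, 5), (0, 5)], seed=0)
print(f"best params: {result.x}, NSE = {-result.fun:.3f}")
```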
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
Planner-Based Control of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott
2005-01-01
The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This allows for effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity, we can use fast, simple responses at the lowest level and predictive but complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe a reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed-loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.
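The layering can be caricatured in a few lines. This is a sketch of the architectural idea only, not of the IDEA planner, whose internals are not given here; the O2 variables and thresholds are invented:

```python
# Toy two-layer controller: a fast reactive rule wrapped by a slower
# predictive layer that extrapolates a trend. All values are hypothetical.
def reactive_layer(o2_level):
    """Fast, simple response: toggle the O2 generator around a setpoint."""
    return "O2_GEN_ON" if o2_level < 20.9 else "O2_GEN_OFF"

def predictive_layer(o2_history, horizon=4):
    """Slower layer: extrapolate the trend and pre-empt a shortfall."""
    trend = o2_history[-1] - o2_history[-2]
    projected = o2_history[-1] + horizon * trend
    return "SCHEDULE_EXTRA_O2" if projected < 19.5 else "NOMINAL"

history = [21.0, 20.7, 20.4]
print(reactive_layer(history[-1]))   # O2_GEN_ON
print(predictive_layer(history))     # SCHEDULE_EXTRA_O2 (falling trend)
```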
Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J
2017-01-01
Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales.
Balkányi, László
2002-01-01
To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model, avoiding the techno-jargon of informatics, can be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods led to a simple but comprehensive mapping in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organization functionality, (b) organizational structures and (c) information technology. Each of the cube's sides is described according to its nature. This approach makes it possible to define any IS component as a certain point/layer/domain of the cube and enables management to label all IS components independently of any supplier(s) and/or any specific platform. The model handles changes in organization structure, business functionality and the supporting information system independently of each other. Practical applications extend to (a) planning complex new ISs, (b) guiding development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phases by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.
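Under my own hypothetical naming, the cube can be rendered as a data structure in which every component is addressed by its coordinates on the three views, keeping the inventory supplier- and platform-neutral:

```python
# Sketch of the three-dimensional cube model; field names and example
# components are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ISComponent:
    functionality: str   # organisation functionality view
    structure: str       # organisational structure view
    technology: str      # information technology view
    name: str

inventory = [
    ISComponent("patient admission", "ward", "application layer", "ADT module"),
    ISComponent("lab reporting", "laboratory", "data layer", "LIS database"),
]

# Platform-neutral queries: e.g., everything serving one organisational unit.
print([c.name for c in inventory if c.structure == "ward"])
```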
Integration and segregation of large-scale brain networks during short-term task automatization
Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes
2016-01-01
The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095
A QoS adaptive multimedia transport system: design, implementation and experiences
NASA Astrophysics Data System (ADS)
Campbell, Andrew; Coulson, Geoff
1997-03-01
The long-awaited 'new environment' of high-speed broadband networks and multimedia applications is fast becoming a reality. However, few systems in existence today, whether large-scale pilots or small-scale test-beds in research laboratories, offer a fully integrated and flexible environment where multimedia applications can maximally exploit the quality of service (QoS) capabilities of supporting networks and end-systems. In this paper we describe the implementation of an adaptive transport system that incorporates a QoS-oriented API and a range of mechanisms to assist applications in exploiting QoS and adapting to fluctuations in QoS. The system, which is an instantiation of the Lancaster QoS Architecture, is implemented in a multi-ATM-switch network environment with Linux-based PC end-systems and continuous media file servers. A performance evaluation of the system configured to support a video-on-demand application scenario is presented and discussed. Emphasis is placed on novel features of the system and on their integration into a complete prototype. The most prominent novelty of our design is a 'distributed QoS adaptation' scheme which allows applications to delegate to the system responsibility for augmenting and reducing the perceptual quality of video and audio flows when resource availability increases or decreases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel
2017-07-24
Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs), including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real time using a power network simulator and a physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.
Segregated Systems of Human Brain Networks.
Wig, Gagan S
2017-12-01
The organization of the brain network enables its function. Evaluation of this organization has revealed that large-scale brain networks consist of multiple segregated subnetworks of interacting brain areas. Descriptions of resting-state network architecture have provided clues for understanding the functional significance of these segregated subnetworks, many of which correspond to distinct brain systems. The present report synthesizes accumulating evidence to reveal how maintaining segregated brain systems renders the human brain network functionally specialized, adaptable to task demands, and largely resilient following focal brain damage. The organizational properties that support system segregation are harmonious with the properties that promote integration across the network, but confer unique and important features to the brain network that are central to its function and behavior.
NASA Astrophysics Data System (ADS)
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economies of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability and the calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Operating Reserves and Wind Power Integration: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, M.; Donohoo, P.; Lew, D.
2010-10-01
This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on the informal collaboration of International Energy Agency Task 25: Large-Scale Wind Integration.
Design principles for achieving integrated healthcare information systems.
Jensen, Tina Blegind
2013-03-01
Achieving integrated healthcare information systems has become a common goal for many countries in their pursuit of coordinated and comprehensive healthcare services. This article focuses on how a small local project termed 'Standardized pull of patient data' expanded and is now used on a large scale, providing a majority of hospitals, general practitioners and citizens across Denmark with the possibility of accessing healthcare data from different electronic patient record systems and other systems. I build on design theory for information infrastructures, as presented by Hanseth and Lyytinen, to examine the design principles that enabled this small-scale project to expand and become widespread. As a result of my findings, I outline three lessons learned that emphasize: (i) principles of flexibility, (ii) expansion from the installed base through modular strategies and (iii) identification of key healthcare actors to provide them with immediate benefits.
VME rollback hardware for time warp multiprocessor systems
NASA Technical Reports Server (NTRS)
Robb, Michael J.; Buzzell, Calvin A.
1992-01-01
The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
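A pure-software sketch of the rollback operation the hardware accelerates, assuming simple copy state saving (the hardware's point is precisely to eliminate this software overhead):

```python
# Time Warp rollback, in software: restore a logical process to the last
# state saved before a straggler event's timestamp.
class LogicalProcess:
    def __init__(self):
        self.lvt = 0                 # local virtual time
        self.state = {"count": 0}
        self.saved = [(0, dict(self.state))]

    def process_event(self, timestamp):
        self.lvt = timestamp
        self.state["count"] += 1
        self.saved.append((timestamp, dict(self.state)))

    def rollback(self, straggler_ts):
        """Discard states at/after the straggler and resume from before it."""
        while self.saved and self.saved[-1][0] >= straggler_ts:
            self.saved.pop()
        self.lvt, self.state = self.saved[-1][0], dict(self.saved[-1][1])

lp = LogicalProcess()
for t in (5, 10, 15):
    lp.process_event(t)
lp.rollback(10)                      # straggler with timestamp 10 arrives
print(lp.lvt, lp.state)              # 5 {'count': 1}
```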
Graphene-Si heterogeneous nanotechnology
NASA Astrophysics Data System (ADS)
Akinwande, Deji; Tao, Li
2013-05-01
It is widely envisioned that graphene, an atomic sheet of carbon that has generated very broad interest, has the largest prospects for flexible smart systems and for integrated graphene-silicon (G-Si) heterogeneous very large-scale integrated (VLSI) nanoelectronics. In this work, we focus on the latter and elucidate the research progress that has been achieved in integrating graphene with Si-CMOS, including: wafer-scale graphene growth by chemical vapor deposition on Cu/SiO2/Si substrates, wafer-scale graphene transfer that afforded the fabrication of over 10,000 devices, wafer-scalable mitigation strategies to restore graphene's device characteristics via fluoropolymer interaction, and demonstrations of graphene integrated with commercial Si-CMOS chips for hybrid nanoelectronics and sensors. Metrology at the wafer scale has led to the development of custom Raman processing software (GRISP), now available on the nanohub portal. The metrology reveals that graphene grown on 4-in substrates has monolayer quality comparable to exfoliated flakes. At room temperature, the high-performance passivated graphene devices on SiO2/Si can afford average mobilities of 3000 cm²/V·s and gate modulation that exceeds an order of magnitude. The latest growth research has yielded graphene with high mobilities greater than 10,000 cm²/V·s on oxidized silicon. Further progress requires track-compatible graphene-Si integration via wafer bonding in order to translate graphene research from basic to applied research in commercial R&D laboratories and ultimately yield a viable nanotechnology.
Cloud-enabled large-scale land surface model simulations with the NASA Land Information System
NASA Astrophysics Data System (ADS)
Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.
2017-12-01
Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.
The Dynamical Balance of the Brain at Rest
Deco, Gustavo; Corbetta, Maurizio
2014-01-01
We review evidence that spontaneous, i.e. not stimulus- or task-driven, activity in the brain is not noise, but orderly organized at the level of large-scale systems in a series of functional networks that maintain at all times a high level of coherence. These networks of spontaneous activity correlation or resting state networks (RSN) are closely related to the underlying anatomical connectivity, but their topography is also gated by the history of prior task activation. Network coherence does not depend on covert cognitive activity, but its strength and integrity relate to behavioral performance. Some RSN are functionally organized as dynamically competing systems both at rest and during tasks. Computational studies show that one such dynamic, the anti-correlation between networks, depends on noise-driven transitions between different multi-stable cluster synchronization states. These multi-stable states emerge because of transmission delays between regions that are modeled as coupled-oscillator systems. Large-scale systems dynamics are useful for keeping different functional sub-networks in a state of heightened competition, which can be stabilized and fired by even small modulations of either sensory or internal signals. PMID:21196530
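A minimal coupled-oscillator sketch in the spirit of the models reviewed: Kuramoto phase oscillators, here without the transmission delays the reviewed studies identify as essential for multi-stability (the delayed case needs delay-differential integration, omitted for brevity):

```python
# Kuramoto oscillators: strong coupling drives the order parameter r -> 1.
import math, random

def kuramoto_step(phases, omegas, K, dt=0.01):
    n = len(phases)
    new = []
    for i in range(n):
        coupling = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
        new.append(phases[i] + dt * (omegas[i] + K * coupling))
    return new

random.seed(0)
phases = [random.uniform(0, 2 * math.pi) for _ in range(8)]
omegas = [1.0 + 0.1 * random.gauss(0, 1) for _ in range(8)]
for _ in range(5000):
    phases = kuramoto_step(phases, omegas, K=2.0)

# Order parameter r close to 1 means the oscillators have synchronised.
r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / len(phases)
print(f"synchrony r = {r:.3f}")
```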
a Cumulus Parameterization Study with Special Attention to the Arakawa-Schubert Scheme
NASA Astrophysics Data System (ADS)
Kao, Chih-Yue Jim
Arakawa and Schubert (1974) developed a cumulus parameterization scheme in a framework that conceptually divides the mutual interaction of cumulus convection and the large-scale disturbance into the categories of large-scale budget requirements and the quasi-equilibrium assumption of the cloud work function. We have applied the A-S scheme through a semi-prognostic approach to two different data sets: one is for an intense tropical cloud band event; the other is for tropical composite easterly wave disturbances. Both were observed in GATE. The cloud heating and drying effects predicted by the Arakawa-Schubert scheme are found to agree rather well with the observations. However, it is also found that the Arakawa-Schubert scheme underestimates both condensation and evaporation rates substantially when compared with the cumulus ensemble model results (Soong and Tao, 1980; Tao, 1983). Inclusion of downdraft effects, as formulated by Johnson (1976), appears to alleviate this deficiency. In order to examine how the Arakawa-Schubert scheme works in a fully prognostic problem, a simulation of the evolution and structure of the tropical cloud band, mentioned above, under the influence of an imposed large-scale low-level forcing has been made, using a two-dimensional hydrostatic model with the inclusion of the Arakawa-Schubert scheme. Basically, the model result indicates that the meso-scale convective system is driven by the excess of the convective heating derived from the Arakawa-Schubert scheme over the adiabatic cooling due to the imposed large-scale lifting and induced meso-scale upward motion. However, as the convective system develops, the adiabatic warming due to the subsidence outside the cloud cluster gradually accumulates into a secondary temperature anomaly which subsequently reduces the original temperature contrast and inhibits the further development of the convective system. A 24-hour integration shows that the model is capable of simulating many important features such as the life cycle, intensity of circulation, and rainfall rates.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Nanowire nanocomputer as a finite-state machine.
Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F; Ellenbogen, James C; Lieber, Charles M
2014-02-18
Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom-up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.
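The logic function realised by the reprogrammed circuit is easy to state in software. The sketch below mirrors the 2-bit adder's function, not its crossbar implementation, and exhaustively checks all 16 input combinations:

```python
# 2-bit adder built from chained 1-bit full adders (function only; the
# nanowire circuit realises this with programmable crossbar transistors).
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add_2bit(a1, a0, b1, b0):
    s0, c = full_adder(a0, b0, 0)
    s1, cout = full_adder(a1, b1, c)
    return cout, s1, s0

# Exhaustive check against integer addition.
for a in range(4):
    for b in range(4):
        cout, s1, s0 = add_2bit(a >> 1, a & 1, b >> 1, b & 1)
        assert (cout << 2) | (s1 << 1) | s0 == a + b
print("all 16 cases match")
```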
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
Research on early warning systems for large-scale network security incidents is of great significance: such systems can improve a network's emergency response capabilities, mitigate the damage caused by cyber attacks, and strengthen the system's ability to counterattack. A comprehensive early warning system that combines active measurement and anomaly detection is presented in this paper. The key visualization algorithms and technology of the system are the main focus. Plane visualization of the large-scale network system is realized using a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into one topology using an automatic force-analysis distribution algorithm. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale visualization and distribution problems, it has higher parallelism and can handle the display of ultra-large-scale network topologies.
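A sketch of the divide-and-conquer pipeline, with stand-ins: NetworkX's modularity communities replace the MLkP/CR partitioner and spring_layout replaces the force-analysis distribution step, since neither algorithm is specified in this record:

```python
# Divide a large topology, lay out each part cheaply, then combine.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.random_partition_graph([30, 30, 30], 0.2, 0.01, seed=1)

# 1) Divide the large topology into small-scale subnetworks.
parts = list(greedy_modularity_communities(G))

# 2) Lay out each subnetwork independently (cheap at small scale).
sub_layouts = [nx.spring_layout(G.subgraph(p), seed=1) for p in parts]

# 3) Combine: offset each small layout onto its own region of the plane.
positions = {}
for k, layout in enumerate(sub_layouts):
    for node, (x, y) in layout.items():
        positions[node] = (x + 3 * k, y)
print(f"{len(parts)} subnetworks, {len(positions)} nodes placed")
```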
Multiplexed, High Density Electrophysiology with Nanofabricated Neural Probes
Du, Jiangang; Blanche, Timothy J.; Harrison, Reid R.; Lester, Henry A.; Masmanidis, Sotiris C.
2011-01-01
Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable. PMID:22022568
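The ASIC's band-pass stage can be mimicked in software. The 300-6000 Hz corner frequencies below are typical extracellular spike-band values assumed for illustration, not quoted from the paper:

```python
# Spike-band filtering: remove slow LFP-like drift, keep the spike band.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 25_000                       # sampling rate, Hz (assumed)
b, a = butter(4, [300, 6000], btype="bandpass", fs=fs)

t = np.arange(0, 0.1, 1 / fs)
raw = (0.5 * np.sin(2 * np.pi * 10 * t)        # slow drift component
       + 0.1 * np.sin(2 * np.pi * 1000 * t))   # spike-band component
filtered = filtfilt(b, a, raw)
print(f"drift removed: std {raw.std():.3f} -> {filtered.std():.3f}")
```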
Small-scale monitoring - can it be integrated with large-scale programs?
C. M. Downes; J. Bart; B. T. Collins; B. Craig; B. Dale; E. H. Dunn; C. M. Francis; S. Woodley; P. Zorn
2005-01-01
There are dozens of programs and methodologies for monitoring and inventory of bird populations, differing in geographic scope, species focus, field methods and purpose. However, most of the emphasis has been placed on large-scale monitoring programs. People interested in assessing bird numbers and long-term trends in small geographic areas such as a local birding area...
Nanowire active-matrix circuitry for low-voltage macroscale artificial skin.
Takei, Kuniharu; Takahashi, Toshitake; Ho, Johnny C; Ko, Hyunhyub; Gillies, Andrew G; Leu, Paul W; Fearing, Ronald S; Javey, Ali
2010-10-01
Large-scale integration of high-performance electronic components on mechanically flexible substrates may enable new applications in electronics, sensing and energy. Over the past several years, tremendous progress in the printing and transfer of single-crystalline, inorganic micro- and nanostructures on plastic substrates has been achieved through various process schemes. For instance, contact printing of parallel arrays of semiconductor nanowires (NWs) has been explored as a versatile route to enable fabrication of high-performance, bendable transistors and sensors. However, truly macroscale integration of ordered NW circuitry has not yet been demonstrated, with the largest-scale active systems being of the order of 1 cm² (refs 11,15). This limitation is in part due to assembly- and processing-related obstacles, although larger-scale integration has been demonstrated for randomly oriented NWs (ref. 16). Driven by this challenge, here we demonstrate macroscale (7×7 cm²) integration of parallel NW arrays as the active-matrix backplane of a flexible pressure-sensor array (18×19 pixels). The integrated sensor array effectively functions as an artificial electronic skin, capable of monitoring applied pressure profiles with high spatial resolution. The active-matrix circuitry operates at a low operating voltage of less than 5 V and exhibits superb mechanical robustness and reliability, without performance degradation on bending to small radii of curvature (2.5 mm) for over 2,000 bending cycles. This work presents the largest integration of ordered NW-array active components, and demonstrates a model platform for future integration of nanomaterials for practical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sola, M.; Haakon Nordby, L.; Dailey, D.V.
High resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect specific scale areas. Integrated workstation generated structure, isopach and extracted horizon consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphical prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system with attribute tracking capability that allows for real time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances the data interpretation through clear representation of relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late in extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrating high resolution 3-D visualization and seismic workstation interpretation.
Nkhata, Bimo Abraham; Breen, Charles
2010-02-01
This article discusses how the concept of integrated learning systems provides a useful means of exploring the functional linkages between the governance and management of public protected areas. It presents a conceptual framework of an integrated learning system that explicitly incorporates learning processes in governance and management subsystems. The framework is premised on the assumption that an understanding of an integrated learning system is essential if we are to successfully promote learning across multiple scales as a fundamental component of adaptability in the governance and management of protected areas. The framework is used to illustrate real-world situations that reflect the nature and substance of the linkages between governance and management. Drawing on lessons from North America and Africa, the article demonstrates that the establishment and maintenance of an integrated learning system take place in a complex context which links elements of governance learning and management learning subsystems. The degree to which the two subsystems are coupled influences the performance of an integrated learning system and ultimately adaptability. Such performance is largely determined by how integrated learning processes allow for the systematic testing of societal assumptions (beliefs, values, and public interest) to enable society and protected area agencies to adapt and learn in the face of social and ecological change. It is argued that an integrated perspective provides a potentially useful framework for explaining and improving shared understanding around which the concept of adaptability is structured and implemented.
Big Data in Medicine is Driving Big Changes
Verspoor, K.
2014-01-01
Objectives: To summarise current research that takes advantage of "Big Data" in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716
Scalable Manufacturing of Solderable and Stretchable Physiologic Sensing Systems.
Kim, Yun-Soung; Lu, Jesse; Shih, Benjamin; Gharibans, Armen; Zou, Zhanan; Matsuno, Kristen; Aguilera, Roman; Han, Yoonjae; Meek, Ann; Xiao, Jianliang; Tolley, Michael T; Coleman, Todd P
2017-10-01
Methods for microfabrication of solderable and stretchable sensing systems (S4s) and a scaled production of adhesive-integrated active S4s for health monitoring are presented. S4s' excellent solderability is achieved by the sputter-deposited nickel-vanadium and gold pad metal layers and copper interconnection. The donor substrate, which is modified with "PI islands" to become selectively adhesive for the S4s, allows the heterogeneous devices to be integrated with large-area adhesives for packaging. The feasibility for S4-based health monitoring is demonstrated by developing an S4 integrated with a strain gauge and an onboard optical indication circuit. Owing to S4s' compatibility with the standard printed circuit board assembly processes, a variety of commercially available surface mount chip components, such as the wafer level chip scale packages, chip resistors, and light-emitting diodes, can be reflow-soldered onto S4s without modifications, demonstrating the versatile and modular nature of S4s. Tegaderm-integrated S4 respiration sensors are tested for robustness for cyclic deformation, maximum stretchability, durability, and biocompatibility for multiday wear time. The results of the tests and demonstration of the respiration sensing indicate that the adhesive-integrated S4s can provide end users a way for unobtrusive health monitoring. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Chonis, Taylor Steven
In the upcoming era of extremely large ground-based astronomical telescopes, the design of wide-field spectroscopic survey instrumentation has become increasingly complex due to the linear growth of instrument pupil size with telescope diameter for a constant spectral resolving power. The upcoming Visible Integral field Replicable Unit Spectrograph (VIRUS), a baseline array of 150 copies of a simple integral field spectrograph that will be fed by 3.36 × 10⁴ optical fibers on the upgraded Hobby-Eberly Telescope (HET) at McDonald Observatory, represents one of the first uses of large-scale replication to break the relationship between instrument pupil size and telescope diameter. By dividing the telescope's field of view between a large number of smaller and more manageable instruments, the total information grasp of a traditional monolithic survey spectrograph can be achieved at a fraction of the cost and engineering complexity. To highlight the power of this method, VIRUS will execute the HET Dark Energy Experiment (HETDEX) and survey ~420 deg² of sky to an emission line flux limit of ~10⁻¹⁷ erg s⁻¹ cm⁻² to detect ~10⁶ Lyman-alpha emitting galaxies (LAEs) as probes of large-scale structure at redshifts of 1.9 < z < 3.5. HETDEX will precisely measure the evolution of dark energy at that epoch, and will simultaneously amass an LAE sample that will be unprecedented for extragalactic astrophysics at the redshifts of interest. Large-scale replication has clear advantages for increasing the total information grasp of a spectrograph, but there are also challenges. In this dissertation, two of these challenges with respect to VIRUS are detailed. First, the VIRUS cryogenic system is discussed, specifically the design and tests of a novel thermal connector and internal camera cryogenic components that link the 150 charge-coupled device detectors to the instrument's liquid nitrogen distribution system. Second, the design, testing, and mass production of the suite of volume phase holographic (VPH) diffraction gratings for VIRUS is presented, which highlights the challenge and success associated with producing a very large number of highly customized optical elements whose performance is crucial to meeting the efficiency requirements of the spectrograph system. To accommodate VIRUS, the HET is undergoing a substantial wide-field upgrade to increase its field of view to 22' in diameter. The previous HET facility Low Resolution Spectrograph (LRS), which was directly fed by the telescope's previous spherical aberration corrector, must be removed from the prime focus instrument package as a result of the telescope upgrades and instead be fiber-coupled to the telescope focal plane. For a similar cost as modifying LRS to accommodate these changes, a new second-generation instrument (LRS2) will be based on the VIRUS unit spectrograph. The design, operational concept, construction, and laboratory testing and characterization of LRS2 is the primary focus of this dissertation, which highlights the benefits of leveraging the large engineering investment, economies of scale, and laboratory and observatory infrastructure associated with the massively replicated VIRUS instrument. LRS2 will provide integral field spectroscopy for a seeing-limited field of 12" × 6". The multiplexed VIRUS framework facilitates broad wavelength coverage from 370 nm to 1.0 μm spread between two dual-channel spectrographs at a moderate spectral resolving power of R ≈ 2000.
The design departures from VIRUS are presented, including the novel integral field unit, VPH grism dispersers, and various optical changes for accommodating the broadband wavelength coverage. Laboratory testing has verified that LRS2 largely meets its image quality specification and is nearly ready for delivery to the HET, where its final verification and validation tasks will be executed. LRS2 will enable the continuation of most legacy LRS science programs and provide improved capability for future investigations. (Abstract shortened by ProQuest.)
Channeling of multikilojoule high-intensity laser beams in an inhomogeneous plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivancic, S.; Haberberger, D.; Habara, H.
Channeling experiments were performed that demonstrate the transport of high-intensity (>10¹⁸ W/cm²), multikilojoule laser light through a millimeter-sized, inhomogeneous (~300-μm density scale length) laser-produced plasma up to overcritical density, which is an important step forward for the fast-ignition concept. The background plasma density and the density depression inside the channel were characterized with a novel optical probe system. The measured channel progression velocity agrees well with theoretical predictions based on large-scale particle-in-cell simulations, confirming scaling laws for the required channeling laser energy and laser pulse duration, which are important parameters for future integrated fast-ignition channeling experiments.
Cai, Long-Fei; Zhu, Ying; Du, Guan-Sheng; Fang, Qun
2012-01-03
We describe a microfluidic chip-based system capable of generating a droplet array with a large-scale concentration gradient by coupling the flow injection gradient technique with droplet-based microfluidics. Multiple modules, including sample injection, sample dispersion, gradient generation, droplet formation, mixing of sample and reagents, and online reaction within the droplets, were integrated into the microchip. In the system, a nanoliter-scale sample solution was automatically injected into the chip under valveless flow injection analysis mode. The sample zone was first dispersed in the microchannel to form a concentration gradient along the axial direction of the microchannel and then segmented into a linear array of droplets by an immiscible oil phase. With the segmentation and protection of the oil phase, the concentration gradient profile of the sample was preserved in the droplet array with high fidelity. With a single injection of 16 nL of sample solution, an array of droplets with a concentration gradient spanning 3-4 orders of magnitude could be generated. The present system was applied in the enzyme inhibition assay of β-galactosidase to preliminarily demonstrate its potential in high-throughput drug screening. With a single injection of 16 nL of inhibitor solution, more than 240 in-droplet enzyme inhibition reactions with different inhibitor concentrations could be performed within an analysis time of 2.5 min. Compared with multiwell plate-based screening systems, the inhibitor consumption was reduced 1000-fold. © 2011 American Chemical Society
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
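The wavelet-compression approach described above is easy to prototype. Below is a minimal sketch, assuming the PyWavelets (pywt) and NumPy packages and a synthetic load trace; it illustrates the general technique rather than the Libra implementation: decompose the signal, keep only the largest coefficients, and reconstruct a low-volume approximation.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
trace = np.cumsum(rng.standard_normal(1024))  # surrogate time-varying load-balance trace

# Multi-level discrete wavelet decomposition of the trace
coeffs = pywt.wavedec(trace, "db4", level=5)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the top 5% of coefficients by magnitude; zero out the rest
k = max(1, int(0.05 * arr.size))
cutoff = np.partition(np.abs(arr), -k)[-k]
arr[np.abs(arr) < cutoff] = 0.0

# Reconstruct an approximation from the sparse coefficient set
sparse_coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec")
approx = pywt.waverec(sparse_coeffs, "db4")
print("relative error:", np.linalg.norm(approx[: trace.size] - trace) / np.linalg.norm(trace))
```

In a distributed setting, only the surviving coefficients (and their positions) would be transported or stored, which is the source of the data-volume reduction.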
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
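For readers who want to reproduce the flavor of this solver comparison, here is a minimal sketch assuming the PyAMG and SciPy packages, with a model Poisson system standing in for a ground water flow matrix; the problem size and tolerance are illustrative, not those of the study.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Model 2-D Poisson problem as a stand-in for a MODFLOW-style flow matrix
A = pyamg.gallery.poisson((300, 300), format="csr")
b = np.random.rand(A.shape[0])

# Algebraic multigrid: build a Ruge-Stuben hierarchy once, then solve
ml = pyamg.ruge_stuben_solver(A)
x_amg = ml.solve(b, tol=1e-8)

# Baseline Krylov solve (unpreconditioned here; MODFLOW's PCG2 adds a
# modified incomplete Cholesky preconditioner)
x_cg, info = cg(A, b)  # info == 0 indicates convergence

print("AMG residual:", np.linalg.norm(b - A @ x_amg))
```

Timing both solves as the grid is refined illustrates the behavior the authors exploit: AMG iteration counts stay roughly constant with problem size while Krylov methods degrade, at the cost of extra memory for the multigrid hierarchy.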
Sequencing Data Discovery and Integration for Earth System Science with MetaSeek
NASA Astrophysics Data System (ADS)
Hoarfrost, A.; Brown, N.; Arnosti, C.
2017-12-01
Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.
Badenes, Sara M; Fernandes, Tiago G; Rodrigues, Carlos A V; Diogo, Maria Margarida; Cabral, Joaquim M S
2016-09-20
Human pluripotent stem cells (hPSC) have attracted great attention as an unlimited source of cells for cell therapies and other in vitro biomedical applications such as drug screening, toxicology assays and disease modeling. The implementation of scalable culture platforms for the large-scale production of hPSC and their derivatives is mandatory to fulfill the requirement of obtaining large numbers of cells for these applications. Microcarrier technology has emerged as an effective approach for large-scale ex vivo hPSC expansion and differentiation. This review presents recent achievements in hPSC microcarrier-based culture systems and discusses the crucial aspects that influence the performance of these culture platforms. Recent progress includes addressing chemically-defined culture conditions for the manufacturing of hPSC and their derivatives, with the development of xeno-free media and microcarrier coatings to meet good manufacturing practice (GMP) quality requirements. Finally, examples of integrated platforms including hPSC expansion and directed differentiation to specific lineages are also presented in this review. Copyright © 2016 Elsevier B.V. All rights reserved.
Integral stormwater management master plan and design in an ecological community.
Che, Wu; Zhao, Yang; Yang, Zheng; Li, Junqi; Shi, Man
2014-09-01
In China, urban stormwater runoff discharges almost entirely into receiving waters through gray infrastructure, such as sewers, impermeable ditches, and pump stations. As urban flooding, water shortage, and other environmental problems become more serious, integrated water environment management is becoming increasingly complex and challenging. At more than 200 ha, the Oriental Sun City community is a large retirement community located on the eastern side of Beijing. During the beginning of its construction, the project faced a series of serious water environment crises such as eutrophication, flood risk, water shortage, and high maintenance costs. To address these issues, an integral stormwater management master plan was developed based on the concept of low impact development (LID). A large number of LID and green stormwater infrastructure (GSI) approaches were designed and applied in the community to replace traditional stormwater drainage systems completely. These approaches mainly included bioretention (which captured nearly the 85th-percentile volume of the annual runoff on the site, nearly 5.4×10⁵ m³ annually), swales (which functioned as a substitute for traditional stormwater pipes), waterscapes, and stormwater wetlands. Finally, a stormwater system plan was proposed by integrating with the gray water system, landscape planning, an architectural master plan, and related consultations that supported the entire construction period. After more than 10 years of planning, designing, construction, and operation, Oriental Sun City has become one of the earliest modern large-scale LID communities in China. Moreover, the project not only addressed the crisis efficiently and effectively, but also yielded economic and ecological benefits. Copyright © 2014. Published by Elsevier B.V.
Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems.
Zumsteg, Zachary S; Kemere, Caleb; O'Driscoll, Stephen; Santhanam, Gopal; Ahmed, Rizwan E; Shenoy, Krishna V; Meng, Teresa H
2005-09-01
A new class of neural prosthetic systems aims to assist disabled patients by translating cortical neural activity into control signals for prosthetic devices. Based on the success of proof-of-concept systems in the laboratory, there is now considerable interest in increasing system performance and creating implantable electronics for use in clinical systems. A critical question that impacts system performance and the overall architecture of these systems is whether it is possible to identify the neural source of each action potential (spike sorting) in real time and with low power. Low power is essential both for power supply considerations and heat dissipation in the brain. In this paper we report that state-of-the-art spike sorting algorithms are not only feasible using modern complementary metal-oxide-semiconductor (CMOS) very-large-scale integration processes, but may represent the best option for extracting large amounts of data in implantable neural prosthetic interfaces.
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
Abbreviations: A/D, analog-to-digital; AGC, automatic gain control; A/J, anti-jam; ASP, advanced signal processor; AU, arithmetic units; CAD, computer-aided … (ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves … a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing …
Local, Regional and Large Scale Integrated Networks
1975-08-01
e.g., [Abramson, 1970, 1973; Kleinrock, 1973; Kleinrock, 1975; Roberts, 1973; Gitman, 19…], have shown that this "fixed assignment" of the … [Abramson, 1973; Kleinrock, 1973]), or intentionally avoid the issue of packet routing by proper assumptions [Gitman, 1975]. The issue of … "Communications Systems," Memorandum RM-4781-PR, The Rand Corporation, February 1966. Frank, H., I. Gitman, R. Van Slyke, "Packet Radio System …
Alavash, Mohsen; Lim, Sung-Joo; Thiel, Christiane; Sehm, Bernhard; Deserno, Lorenz; Obleser, Jonas
2018-05-15
Dopamine underlies important aspects of cognition, and has been suggested to boost cognitive performance. However, how dopamine modulates large-scale cortical dynamics during cognitive performance has remained elusive. Using functional MRI during a working memory task in healthy young human listeners, we investigated the effect of levodopa (l-dopa) on two aspects of cortical dynamics: blood oxygen-level-dependent (BOLD) signal variability and the functional connectome of large-scale cortical networks. We here show that enhanced dopaminergic signaling modulates these two potentially interrelated aspects of large-scale cortical dynamics during cognitive performance, and the degree of these modulations is able to explain inter-individual differences in l-dopa-induced behavioral benefits. Relative to placebo, l-dopa increased BOLD signal variability in task-relevant temporal, inferior frontal, parietal and cingulate regions. On the connectome level, however, l-dopa diminished functional integration across temporal and cingulo-opercular regions. This hypo-integration was expressed as a reduction in network efficiency and modularity in more than two thirds of the participants and to different degrees. Hypo-integration co-occurred with relative hyper-connectivity in the paracentral lobule and precuneus, as well as the posterior putamen. Both l-dopa-induced BOLD signal variability modulations and functional connectome modulations proved predictive of an individual's l-dopa-induced benefits in behavioral performance, namely response speed and perceptual sensitivity. Lastly, l-dopa-induced modulations of BOLD signal variability were correlated with l-dopa-induced modulations of nodal connectivity and network efficiency. Our findings underline the role of dopamine in maintaining the dynamic range of, and communication between, cortical systems, and their explanatory power for inter-individual differences in benefits from dopamine during cognitive performance. Copyright © 2018 Elsevier Inc. All rights reserved.
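The two connectome summary measures named above, network efficiency and modularity, can be computed with standard graph tools. A hedged sketch follows, assuming the NetworkX and NumPy packages; the surrogate connectivity matrix, region count, and binarization threshold are illustrative placeholders, not values from the study.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
n_regions = 90  # illustrative cortical parcellation size

# Surrogate functional connectivity: correlations of random "time series"
corr = np.corrcoef(rng.standard_normal((n_regions, 50)))

# Binarize at an arbitrary threshold to obtain an undirected graph
adj = (np.abs(corr) > 0.3) & ~np.eye(n_regions, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

efficiency = nx.global_efficiency(G)           # network efficiency
parts = community.greedy_modularity_communities(G)
Q = community.modularity(G, parts)             # modularity of detected communities
print(f"global efficiency = {efficiency:.3f}, modularity Q = {Q:.3f}")
```

A "hypo-integration" of the kind reported would appear as a drop in these two quantities when the same pipeline is applied to the l-dopa-session connectome relative to placebo.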
Mapping the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Manzotti, A.; Dodelson, S.
2014-12-01
On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gain when photons traverse decaying gravitational potentials of large-scale structure, what is called the integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal by combining measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data, showing that we can successfully reconstruct the ISW map using all the data sets together. Then we present the ISW map obtained from a combination of real data: the NRAO VLA Sky Survey (NVSS) galaxy survey, and temperature anisotropies and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence from the reconstructed ISW signal in the Cold Spot region for an entirely ISW origin of this large-scale anomaly in the CMB. However, a large-scale structure origin from low-redshift voids outside the NVSS redshift range is still possible. Finally, we show that future surveys, thanks to better large-scale lensing reconstruction, will be able to improve the reconstruction signal-to-noise, which now comes mainly from galaxy surveys.
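For context, the late-time ISW signal that such reconstructions target is the line-of-sight integral of the time-varying gravitational potential. A standard textbook expression (assuming negligible anisotropic stress, so the two metric potentials coincide; this is not a formula quoted from the paper) is:

```latex
\frac{\Delta T_{\mathrm{ISW}}}{T}(\hat{n})
  = \frac{2}{c^{2}} \int_{\eta_{*}}^{\eta_{0}} \mathrm{d}\eta \,
    \frac{\partial \Phi}{\partial \eta}\bigl(\chi(\eta)\,\hat{n},\, \eta\bigr)
```

where Φ is the gravitational potential, η is conformal time running from recombination (η∗) to today (η₀), and χ(η) is the comoving distance along the line of sight n̂. The integrand vanishes when the potentials are constant, as in a purely matter-dominated universe, which is why the ISW signal probes dark energy on large scales.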
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K
2006-03-01
We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
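GenomeDiagram was subsequently folded into Biopython as Bio.Graphics.GenomeDiagram; a minimal usage sketch against that interface follows (the GenBank file name is a placeholder and the styling choices are arbitrary). It draws the CDS features of a record as a linear diagram and writes a vector-graphics PDF.

```python
from Bio import SeqIO
from Bio.Graphics import GenomeDiagram
from reportlab.lib import colors

record = SeqIO.read("example.gbk", "genbank")  # placeholder GenBank file

diagram = GenomeDiagram.Diagram("Example genome")
track = diagram.new_track(1, name="CDS features", greytrack=True)
features = track.new_set()

for feature in record.features:
    if feature.type == "CDS":
        features.add_feature(feature, color=colors.lightblue, label=True)

# Render four stacked fragments spanning the whole sequence
diagram.draw(format="linear", pagesize="A4", fragments=4,
             start=0, end=len(record))
diagram.write("example_diagram.pdf", "PDF")
```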
Electronic shift register memory based on molecular electron-transfer reactions
NASA Technical Reports Server (NTRS)
Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.
1989-01-01
The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.
Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth
2012-01-01
The increasing availability of large-scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
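The driver pattern described above, sweeping a control-parameter space and dispatching independent simulation instances concurrently, can be sketched generically. The following is not DAKOTA or IPS code; the parameter names and the analytic placeholder objective are assumptions made for illustration.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    """Placeholder for launching one coupled-simulation instance and
    extracting a scalar objective (e.g., a cell performance metric)."""
    thickness, porosity = params  # hypothetical design parameters
    return -(thickness - 0.5) ** 2 - (porosity - 0.3) ** 2  # toy objective

if __name__ == "__main__":
    # Cartesian sweep over the two hypothetical design parameters
    grid = list(product([0.2, 0.4, 0.6, 0.8], [0.1, 0.3, 0.5]))
    with ProcessPoolExecutor(max_workers=4) as pool:
        scores = list(pool.map(run_simulation, grid))
    best = grid[max(range(len(grid)), key=scores.__getitem__)]
    print("best parameters:", best)
```

A framework-level coupling like DAKOTA-IPS replaces the naive process pool here with the simulation framework's own task and resource management, which is what enables the trade-off between resource utilization and time to completion described in the abstract.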
Assuring Quality in Large-Scale Online Course Development
ERIC Educational Resources Information Center
Parscal, Tina; Riemer, Deborah
2010-01-01
Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…
Taking Stock: Existing Resources for Assessing a New Vision of Science Learning
ERIC Educational Resources Information Center
Alonzo, Alicia C.; Ke, Li
2016-01-01
A new vision of science learning described in the "Next Generation Science Standards"--particularly the science and engineering practices and their integration with content--poses significant challenges for large-scale assessment. This article explores what might be learned from advances in large-scale science assessment and…
Tuan, Nguyen Thanh; Alayon, Silvia; Do, Tran Thanh; Ngan, Tran Thi; Hajeebhoy, Nemat
2015-01-01
Little information is available about how to build a monitoring system to measure the output of preventive nutrition interventions, such as counselling on infant and young child feeding. This paper describes the Alive & Thrive Vietnam (A&T) project experience in nesting a large-scale project monitoring system into the existing public health information system (e.g. using the system and resources), and in using monitoring data to strengthen service delivery in 15 provinces with A&T franchises. From January 2012 to April 2014, the 780 A&T franchises provided 1,700,000 counselling contacts (~3/4 by commune franchises). In commune franchises in April 2014, 80% of mothers who were pregnant or with children under two years old had been to the counselling service at least one time, and 87% of clients had been to the service earlier. Monitoring data are used to track the progress of the project, make decisions, provide background for a costing study and advocate for the integration of nutrition counselling indicators into the health information system nationwide. With careful attention to the needs of stakeholders at multiple levels, clear data quality assurance measures and strategic feedback mechanisms, it is feasible to monitor the scale-up of nutrition programmes through the existing routine health information system.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.