Sample records for large system approach

  1. A Lean Approach to Improving SE Visibility in Large Operational Systems Evolution

    DTIC Science & Technology

    2013-06-01

    This paper describes an example implementation of the concept in a large health care system of systems. To enhance both visibility and flow, the approach utilizes visualization techniques, pull-scheduling processes...development processes...and then provides the results to the requestor as soon as available. Hospital System Information Support Development: The health care SoS is a set...

  2. Control of solar energy systems

    NASA Astrophysics Data System (ADS)

    Sizov, Iu. M.; Zakhidov, R. A.; Baranov, V. G.

    Two approaches to the control of large solar energy systems, i.e., programmed control and control systems relying on the use of orientation transducers and feedback, are briefly reviewed, with particular attention given to problems associated with these control systems. A new control system for large solar power plants is then proposed which is based on a combination of these approaches. The general design of the control system is shown and its principle of operation described. The efficiency and cost effectiveness of the approach proposed here are demonstrated.
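
    A minimal sketch of the combined scheme described above, in Python; the crude ephemeris, gain value, and function names are illustrative assumptions, not details from the paper:

      import math

      def programmed_elevation(hour: float) -> float:
          """Open-loop (programmed) sun elevation in degrees from a crude
          ephemeris: zero at 6:00 and 18:00, peaking at solar noon."""
          return max(0.0, 90.0 * math.sin(math.pi * (hour - 6.0) / 12.0))

      def control_step(hour, sensor_elevation, gain=0.5):
          """Combine the programmed set point with feedback from an
          orientation transducer: command the programmed angle plus a
          proportional correction of the measured pointing error."""
          set_point = programmed_elevation(hour)
          error = set_point - sensor_elevation
          return set_point + gain * error  # commanded collector elevation

      # At 10:00 the program expects ~77.9 deg; a sensor reading of 55 deg
      # yields a corrected command.
      print(control_step(10.0, 55.0))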

  3. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    NASA Astrophysics Data System (ADS)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
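
    A minimal sketch of the stratify-then-cluster variant, assuming k-means as the clustering algorithm (the paper's exact algorithm is not specified here) and toy array shapes standing in for the object-level NDVI statistics:

      import numpy as np
      from sklearn.cluster import KMeans

      # Toy stand-in for object-level NDVI time series: one row per image
      # object, one column per 16-day composite of the growing season.
      rng = np.random.default_rng(0)
      ndvi = rng.random((500, 23))
      landscape = rng.integers(0, 3, size=500)  # assumed landscape-unit labels

      def landscape_clustering(series, strata, k=5):
          """Cluster NDVI time series separately within each landscape unit,
          mirroring the 'stratify first, then cluster' variant."""
          labels = np.empty(len(series), dtype=int)
          offset = 0
          for unit in np.unique(strata):
              mask = strata == unit
              km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(series[mask])
              labels[mask] = km.labels_ + offset  # keep ids unique across units
              offset += k
          return labels

      print(np.bincount(landscape_clustering(ndvi, landscape)))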

  4. Direct evaluation of free energy for large system through structure integration approach.

    PubMed

    Takeuchi, Kazuhito; Tanaka, Ryohei; Yuge, Koretaka

    2015-09-30

    We propose a new approach, 'structure integration', enabling direct evaluation of configurational free energy for large systems. The present approach is based on the statistical information of the lattice. Through first-principles-based simulation, we find that the present method evaluates configurational free energy accurately in disordered states above the critical temperature.

  5. Large space systems technology, 1981. [conferences]

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems. Specifically, program status, structures, materials, and analyses, and control of large space systems are addressed.

  6. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  7. The 1980 Large space systems technology. Volume 2: Base technology

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.

  8. Large Space Systems Technology, Part 2, 1981

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    Four major areas of interest are covered: technology pertinent to large antenna systems; technology related to the control of large space systems; basic technology concerning structures, materials, and analyses; and flight technology experiments. Large antenna systems and flight technology experiments are described. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. These research studies represent state-of-the-art technology that is necessary for the development of large space systems. A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems.

  9. Combination of large and small basis sets in electronic structure calculations on large systems

    NASA Astrophysics Data System (ADS)

    Røeggen, Inge; Gao, Bin

    2018-04-01

    Two basis sets—a large and a small one—are associated with each nucleus of the system. Each atom has its own separate one-electron basis comprising the large basis set of the atom in question and the small basis sets for the partner atoms in the complex. The perturbed atoms in molecules and solids model is at core of the approach since it allows for the definition of perturbed atoms in a system. It is argued that this basis set approach should be particularly useful for periodic systems. Test calculations are performed on one-dimensional arrays of H and Li atoms. The ground-state energy per atom in the linear H array is determined versus bond length.

  10. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
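
    The comparison-system idea can be stated compactly; the notation below is a standard textbook form consistent with the abstract, not taken verbatim from the paper. Each subsystem i carries a local Lyapunov function V_i, and the sum-of-squares step certifies, in LaTeX notation,

      \dot V_i(x_i) \;\le\; a_{ii} V_i(x_i) + \sum_{j \neq i} a_{ij} V_j(x_j),
      \qquad a_{ij} \ge 0 \ (i \neq j),

    so that v = (V_1, \dots, V_N)^{\mathsf T} obeys the linear comparison system \dot v \preceq A v componentwise; if A is Hurwitz, the comparison lemma yields exponential stability of the full network.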

  11. Formal Verification of Large Software Systems

    NASA Technical Reports Server (NTRS)

    Yin, Xiang; Knight, John

    2010-01-01

    We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.

  12. The impact of user- and system-initiated personalization on the user experience at large sports events.

    PubMed

    Sun, Xu; May, Andrew; Wang, Qingfeng

    2016-05-01

    This article describes an experimental study investigating the impact on user experience of two approaches to personalization of content provided on a mobile device for spectators at large sports events. A lab-based experiment showed that a system-driven approach to personalization was generally preferable, but that there were advantages to retaining some user control over the process. Usability implications for a hybrid approach and design implications are discussed, with general support for countermeasures designed to overcome recognised limitations of adaptive systems.

  13. An LQR controller design approach for a Large Gap Magnetic Suspension System (LGMSS)

    NASA Technical Reports Server (NTRS)

    Groom, Nelson J.; Schaffner, Philip R.

    1990-01-01

    Two control approaches for a Large Gap Magnetic Suspension System (LGMSS) are investigated and numerical results are presented. The approaches are based on Linear Quadratic Regulator (LQR) control theory and include a nonzero set point regulator with constant disturbance input and an integral feedback regulator. The LGMSS provides five degree of freedom control of a cylindrical suspended element which is composed of permanent magnet material. The magnetic actuators are air core electromagnets mounted in a planar array.
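
    A generic LQR gain computation in Python, with a double integrator standing in for the five-degree-of-freedom LGMSS model; the plant and weighting matrices are assumed values, not the paper's:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])   # toy double-integrator plant
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])     # state weighting (assumed)
      R = np.array([[0.1]])        # control weighting (assumed)

      # Solve the continuous-time algebraic Riccati equation and form the
      # optimal state-feedback gain for u = -K x.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)
      print("LQR gain:", K)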

  14. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  15. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The underlying difficulty is that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite; thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
    We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
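
    A minimal sketch of the tapered ordinary-kriging solve in Python. SciPy does not ship SYMMLQ, so MINRES, a closely related Krylov method for symmetric indefinite systems, stands in for it here; the covariance model and taper radius are illustrative assumptions:

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.linalg import minres

      rng = np.random.default_rng(1)
      pts = rng.random((200, 2))        # scattered sample locations
      query = np.array([0.5, 0.5])

      def cov(h, r=0.3):
          return np.exp(-3.0 * h / r)   # exponential covariance (assumed)

      def taper(h, t=0.15):
          return np.where(h < t, (1.0 - h / t) ** 2, 0.0)  # compact support

      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
      C = cov(d) * taper(d)             # tapered covariance: mostly zeros
      n = len(pts)

      # Ordinary kriging system: symmetric but indefinite because of the
      # Lagrange-multiplier row enforcing that the weights sum to one.
      A = np.zeros((n + 1, n + 1))
      A[:n, :n] = C
      A[:n, n] = A[n, :n] = 1.0
      h0 = np.linalg.norm(pts - query, axis=1)
      b = np.append(cov(h0) * taper(h0), 1.0)

      sol, info = minres(csr_matrix(A), b)
      weights = sol[:n]
      print("converged:", info == 0, "sum of weights:", weights.sum())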

  16. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  17. Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions

    DTIC Science & Technology

    2014-12-05

    test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with...the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates...assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions

  18. The MICRO-BOSS scheduling system: Current status and future efforts

    NASA Technical Reports Server (NTRS)

    Sadeh, Norman M.

    1993-01-01

    In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule, and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory.

  19. Optimal decentralized feedback control for a truss structure

    NASA Technical Reports Server (NTRS)

    Cagle, A.; Ozguner, U.

    1989-01-01

    One approach to the decentralized control of large flexible space structures involves the design of controllers for the substructures of large systems and their subsequent application to the entire coupled system. This approach is presently developed for the case of active vibration damping on an experimental large truss structure. The isolated boundary loading method is used to define component models by FEM; component controllers are designed using an interlocking control concept which minimizes the motion of the boundary nodes, thereby reducing the exchange of mechanical disturbances among components.

  20. Determination of aerodynamic sensitivity coefficients based on the three-dimensional full potential equation

    NASA Technical Reports Server (NTRS)

    Elbanna, Hesham M.; Carlson, Leland A.

    1992-01-01

    The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.

  1. Systems Biology and Mode of Action Based Risk Assessment.

    EPA Science Inventory

    The application of systems biology approaches has greatly increased in the past decade largely as a consequence of the human genome project and technological advances in genomics and proteomics. Systems approaches have been used in the medical & pharmaceutical realm for diagnost...

  2. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.

  3. Making the most of MBSE: pragmatic model-based engineering for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Le Roux, Gerhard; Bridger, Alan; MacIntosh, Mike; Nicol, Mark; Schnetler, Hermine; Williams, Stewart

    2016-08-01

    Many large projects, including major astronomy projects, are adopting a Model Based Systems Engineering approach. How far is it possible to get value for the effort involved in developing a model that accurately represents a significant project such as SKA? Is it possible for such a large project to ensure that high-level requirements are traceable through the various system-engineering artifacts? Is it possible to utilize the tools available to produce meaningful measures of the impact of change? This paper shares one aspect of the experience gained on the SKA project. It explores some of the recommended and pragmatic approaches developed to get the maximum value from the modeling activity while designing the Telescope Manager for the SKA. While it is too early to provide specific measures of success, certain areas are proving to be the most helpful and offer significant potential over the lifetime of the project. The experience described here has been with the 'Cameo Systems Modeler' tool-set, supporting a SysML-based Systems Engineering approach; however, the concepts and ideas covered would potentially be of value to any large project considering a Model-based approach to their Systems Engineering.

  4. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized, though they are considered.

  5. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  6. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
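
    The abstract's LMIs are network-level conditions; the single-node building block is the standard Lyapunov feasibility problem, sketched here with cvxpy in place of the MATLAB toolbox named above (the node matrix is an assumed example):

      import numpy as np
      import cvxpy as cp

      # Find P = P^T > 0 with A^T P + P A < 0 for one node dynamic.
      A = np.array([[-1.0,  2.0],
                    [ 0.0, -3.0]])
      n = A.shape[0]
      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),
                     A.T @ P + P @ A << -eps * np.eye(n)]
      prob = cp.Problem(cp.Minimize(0), constraints)
      prob.solve()
      print("Lyapunov LMI feasible:", prob.status == cp.OPTIMAL)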

  7. Automated Induction Of Rule-Based Neural Networks

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.; Goodman, Rodney M.

    1994-01-01

    Prototype expert systems, implemented in software and functionally equivalent to neural networks, are set up automatically and placed into operation within minutes by following an information-theoretic approach to the automated acquisition of knowledge from large example data bases. The approach is based largely on use of the ITRULE computer program.

  8. Metrication study for large space telescope

    NASA Technical Reports Server (NTRS)

    Creswick, F. A.; Weller, A. E.

    1973-01-01

    Various approaches which could be taken in developing a metric-system design for the Large Space Telescope, considering potential penalties on development cost and time, commonality with other satellite programs, and contribution to national goals for conversion to the metric system of units were investigated. Information on the problems, potential approaches, and impacts of metrication was collected from published reports on previous aerospace-industry metrication-impact studies and through numerous telephone interviews. The recommended approach to LST metrication formulated in this study calls for new components and subsystems to be designed in metric-module dimensions, but U.S. customary practice is allowed where U.S. metric standards and metric components are not available or would be unsuitable. Electrical/electronic-system design, which is presently largely metric, is considered exempt from further metrication. An important guideline is that metric design and fabrication should in no way compromise the effectiveness of the LST equipment.

  9. The Concert system - Compiler and runtime technology for efficient concurrent object-oriented programming

    NASA Technical Reports Server (NTRS)

    Chien, Andrew A.; Karamcheti, Vijay; Plevyak, John; Sahrawat, Deepak

    1993-01-01

    Concurrent object-oriented languages, particularly fine-grained approaches, reduce the difficulty of large scale concurrent programming by providing modularity through encapsulation while exposing large degrees of concurrency. Despite these programmability advantages, such languages have historically suffered from poor efficiency. This paper describes the Concert project whose goal is to develop portable, efficient implementations of fine-grained concurrent object-oriented languages. Our approach incorporates aggressive program analysis and program transformation with careful information management at every stage from the compiler to the runtime system. The paper discusses the basic elements of the Concert approach along with a description of the potential payoffs. Initial performance results and specific plans for system development are also detailed.

  10. The Systems Revolution

    ERIC Educational Resources Information Center

    Ackoff, Russell L.

    1974-01-01

    The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…

  11. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
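
    A toy version of the underlying Markov solve (the behavioral decomposition and fault-tree generation in HARP are not reproduced here; the duplex structure and rates are assumed values):

      import numpy as np
      from scipy.linalg import expm

      # Duplex processor pair: state 0 = both up, 1 = one up, 2 = failed.
      lam, mu = 1e-3, 1e-1          # failure/repair rates per hour (assumed)
      Q = np.array([[-2 * lam,      2 * lam,  0.0],
                    [      mu, -(mu + lam),   lam],
                    [     0.0,          0.0,  0.0]])  # failed state absorbing

      p0 = np.array([1.0, 0.0, 0.0])
      t = 1000.0                    # mission time in hours
      p = p0 @ expm(Q * t)          # transient state probabilities
      print("unreliability at t:", p[2])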

  12. An Event-driven, Value-based, Pull Systems Engineering Scheduling Approach

    DTIC Science & Technology

    2012-03-01

    engineering in rapid response environments has been difficult, particularly where large, complex brownfield systems or systems of systems exist and are constantly being updated with both short and long term software enhancements...

  13. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation on decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  14. Implementing large projects in software engineering courses

    NASA Astrophysics Data System (ADS)

    Coppit, David

    2006-03-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that threaten the realism of large projects. Third, quantitative evaluation of individuals who work in groups is notoriously difficult. As a result, many software engineering courses compromise the project experience by reducing the team sizes, project scope, and risk. In this paper, we present an approach to teaching a one-semester software engineering course in which 20 to 30 students work together to construct a moderately sized (15KLOC) software system. The approach combines carefully coordinated lectures and homeworks, a hierarchical project management structure, modern communication technologies, and a web-based project tracking and individual assessment system. Our approach provides a more realistic project experience for the students, without incurring significant additional overhead for the instructor. We present our experiences using the approach the last 2 years for the software engineering course at The College of William and Mary. Although the approach has some weaknesses, we believe that they are strongly outweighed by the pedagogical benefits.

  15. Avian surveys of large geographical areas: A systematic approach

    USGS Publications Warehouse

    Scott, J.M.; Jacobi, J.D.; Ramsey, F.L.

    1981-01-01

    A multidisciplinary team approach was used to simultaneously map the distribution of birds, selected food items, and major vegetation types in 34,000- to 140,000-ha tracts in native Hawaiian forests. By using a team approach, large savings in time can be realized over attempts to conduct similar surveys of smaller scope, and a systems approach to management problems is made easier. The methods used in survey design, training observers, and documenting bird numbers and habitat descriptions are discussed in detail.

  16. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  17. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  18. Using Residential Solar PV Quote Data to Analyze the Relationship Between Installer Pricing and Firm Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Margolis, Robert

    2017-04-01

    The vast majority of U.S. residential solar PV installers are small local-scale companies; however, the industry is relatively concentrated in a few large national-scale installers. We develop a novel approach using solar PV quote data to study the price behavior of large solar PV installers in the United States. Through a paired differences approach, we find that large installer quotes are, on average, higher than non-large installer quotes made to the same customer. The difference is statistically significant and robust after controlling for factors such as system size, equipment quality, and time effects. The results suggest that low prices are not the primary value proposition of large installer systems. We explore several hypotheses for this finding, including that large installers are able to exercise some market power and/or earn returns from their reputations.
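
    The core of the paired-differences comparison, sketched with made-up prices; the real analysis also controls for system size, equipment quality, and time effects:

      import numpy as np
      from scipy import stats

      # One row per customer who received both kinds of quote ($/W, assumed).
      large    = np.array([3.9, 4.1, 3.8, 4.3, 4.0, 4.2])
      nonlarge = np.array([3.6, 3.9, 3.7, 3.8, 3.7, 4.0])

      diff = large - nonlarge
      t_stat, p_val = stats.ttest_rel(large, nonlarge)
      print(f"mean paired difference: {diff.mean():.2f} $/W, p-value: {p_val:.3f}")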

  19. Applying Organization Development to Coast Guard Affirmative Action for Black Officers.

    DTIC Science & Technology

    1979-12-01

    systems approach in the HRM area which includes significant attention to equal opportunity and affirmative action issues. Its present affirmative action...systems approach to the most fundamental objective of managers - the acquisition and development of people - is needed, not only to meet social and legal...strategy will not produce change, it will largely maintain the status quo. A total systems approach offers the hope of changing the present pattern

  20. Radio Ranging System for Guidance of Approaching Spacecraft

    NASA Technical Reports Server (NTRS)

    Manikonda, Vikram; vanDoom, Eric

    2008-01-01

    A radio communication and ranging system has been proposed for determining the relative position and orientations of two approaching spacecraft to provide guidance for docking maneuvers. On Earth, the system could be used similarly for guiding approaching aircraft and for automated positioning of large, heavy objects. In principle, the basic idea is to (1) measure distances between radio transceivers on the two spacecraft and (2) compute the relative position and orientations from the measured distances.
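
    A minimal trilateration sketch in Python showing step (2), computing relative position from measured distances; the transceiver geometry is an assumption:

      import numpy as np

      anchors = np.array([[ 0.0,  0.0,  0.0],
                          [10.0,  0.0,  0.0],
                          [ 0.0, 10.0,  0.0],
                          [ 0.0,  0.0, 10.0]])  # transceiver positions (assumed)
      target = np.array([3.0, 4.0, 5.0])        # ground truth, only to fake ranges
      ranges = np.linalg.norm(anchors - target, axis=1)

      # Subtracting the first range equation from the others linearizes
      # |x - a_i|^2 = r_i^2 into 2 (a_i - a_0) . x = |a_i|^2 - |a_0|^2 - r_i^2 + r_0^2.
      A = 2.0 * (anchors[1:] - anchors[0])
      b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
           - ranges[1:] ** 2 + ranges[0] ** 2)
      x, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("estimated relative position:", x)   # ~ [3, 4, 5]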

  1. The MICRO-BOSS scheduling system: Current status and future efforts

    NASA Technical Reports Server (NTRS)

    Sadeh, Norman M.

    1992-01-01

    In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.

  2. Numerical stabilization of entanglement computation in auxiliary-field quantum Monte Carlo simulations of interacting many-fermion systems.

    PubMed

    Broecker, Peter; Trebst, Simon

    2016-12-01

    In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.

  3. Fostering outcomes through education: a systems approach to collaboration and creativity.

    PubMed

    Smith, Elaine L

    2014-04-01

    Across the country, integrated health care systems continue to emerge and expand. Large multifacility organizations can present both challenges and opportunities for nursing professional development and continuing education activities. This article will explore how one large multifacility system is addressing the varied learning needs of nursing staff across the enterprise.

  4. Use of a Diagonal Approach to Health System Strengthening and Measles Elimination after a Large Nationwide Outbreak in Mongolia.

    PubMed

    Hagan, José E; Greiner, Ashley; Luvsansharav, Ulzii-Orshikh; Lake, Jason; Lee, Christopher; Pastore, Roberta; Takashima, Yoshihiro; Sarankhuu, Amarzaya; Demberelsuren, Sodbayar; Smith, Rachel; Park, Benjamin; Goodson, James L

    2017-12-01

    Measles is a highly transmissible infectious disease that causes serious illness and death worldwide. Efforts to eliminate measles through achieving high immunization coverage, well-performing surveillance systems, and rapid and effective outbreak response mechanisms while strategically engaging and strengthening health systems have been termed a diagonal approach. In March 2015, a large nationwide measles epidemic occurred in Mongolia, 1 year after verification of measles elimination in this country. A multidisciplinary team conducted an outbreak investigation that included a broad health system assessment, organized around the Global Health Security Agenda framework of Prevent-Detect-Respond, to provide recommendations for evidence-based interventions to interrupt the epidemic and strengthen the overall health system to prevent future outbreaks of measles and other epidemic-prone infectious threats. This investigation demonstrated the value of evaluating elements of the broader health system in investigating measles outbreaks and the need for using a diagonal approach to achieving sustainable measles elimination.

  5. Spatial operator algebra for flexible multibody dynamics

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1993-01-01

    This paper presents an approach to modeling the dynamics of flexible multibody systems such as flexible spacecraft and limber space robotic systems. A large number of degrees of freedom and complex dynamic interactions are typical in these systems. This paper uses spatial operators to develop efficient recursive algorithms for the dynamics of these systems. This approach very efficiently manages complexity by means of a hierarchy of mathematical operations.

  6. Adaptive neural network decentralized backstepping output-feedback control for nonlinear large-scale systems with time delays.

    PubMed

    Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang

    2011-07-01

    In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.

  7. A Novel Interdisciplinary Approach to Socio-Technical Complexity

    NASA Astrophysics Data System (ADS)

    Bassetti, Chiara

    The chapter presents a novel interdisciplinary approach that integrates micro-sociological analysis into computer-vision and pattern-recognition modeling and algorithms, the purpose being to tackle socio-technical complexity at a systemic yet micro-grounded level. The approach is empirically-grounded and both theoretically- and analytically-driven, yet systemic and multidimensional, semi-supervised and computable, and oriented towards large scale applications. The chapter describes the proposed approach especially as for its sociological foundations, and as applied to the analysis of a particular setting (i.e., sport-spectator crowds). Crowds, better defined as large gatherings, are almost ever-present in our societies, and capturing their dynamics is crucial. From social sciences to public safety management and emergency response, modeling and predicting large gatherings' presence and dynamics, thus possibly preventing critical situations and being able to properly react to them, is fundamental. This is where semi-automated technologies can make the difference. The work presented in this chapter is intended as a scientific step towards such an objective.

  8. The Design of Large Geothermally Powered Air-Conditioning Systems Using an Optimal Control Approach

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; O'Bryan, L.

    2010-12-01

    The direct use of geothermal energy from Hot Sedimentary Aquifer (HSA) systems for large scale air-conditioning projects involves many tradeoffs. Aspects contributing towards making design decisions for such systems include: the inadequately known permeability and thermal distributions underground; the combinatorial complexity of selecting pumping and chiller systems to match the underground conditions to the air-conditioning requirements; the future price variations of the electricity market; any uncertainties in future Carbon pricing; and the applicable discount rate for evaluating the financial worth of the project. Expanding upon the previous work of Horowitz and Hornby (2007), we take an optimal control approach to the design of such systems. By building a model of the HSA system, the drilling process, the pumping process, and the chilling operations, along with a specified objective function, we can write a Hamiltonian for the system. Using the standard techniques of optimal control, we use gradients of the Hamiltonian to find the optimal design for any given set of permeabilities, thermal distributions, and the other engineering and financial parameters. By using this approach, optimal system designs could potentially evolve in response to the actual conditions encountered during drilling. Because the granularity of some current models is so coarse, we will be able to compare our optimal control approach to an exhaustive search of parameter space. We will present examples from the conditions appropriate for the Perth Basin of Western Australia, where the WA Geothermal Centre of Excellence is involved with two large air-conditioning projects using geothermal water from deep aquifers at 75 to 95 degrees C.
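
    In the standard optimal-control form consistent with the abstract (symbols are generic, not the authors'): with state x (aquifer and plant conditions), control u (pumping and chiller dispatch), discounted running cost L, and dynamics \dot x = f(x, u, t), the Hamiltonian and its first-order conditions read, in LaTeX notation,

      H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\mathsf T} f(x, u, t),
      \qquad \dot\lambda = -\partial H / \partial x,
      \qquad \partial H / \partial u = 0 \ \text{at the optimum},

    and the gradients of H with respect to the design variables drive the search for the optimal system design.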

  9. Structured approaches to large-scale systems: Variational integrators for interconnected Lagrange-Dirac systems and structured model reduction on Lie groups

    NASA Astrophysics Data System (ADS)

    Parks, Helen Frances

    This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long-time simulations in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the fields of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the treatment of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.
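
    A snapshot-POD sketch in Python of the structure-agnostic baseline the dissertation builds on; the Lie-group and canonical-coordinate machinery is not reproduced, and the data are random stand-ins:

      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.standard_normal((1000, 50))   # columns are full-model snapshots

      U, s, _ = np.linalg.svd(X, full_matrices=False)
      energy = np.cumsum(s ** 2) / np.sum(s ** 2)
      r = int(np.searchsorted(energy, 0.99)) + 1  # modes for 99% of energy
      Phi = U[:, :r]                              # reduced basis

      x = X[:, 0]                 # project a state down and lift it back
      x_r = Phi.T @ x
      print("reconstruction error:", np.linalg.norm(Phi @ x_r - x))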

  10. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next-generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
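
    A toy illustration of the similarity measurement in Python: hash fixed-size pages of a memory image and count duplicates (the page size and synthetic image are assumptions; the paper's snapshots came from real HPC applications):

      import hashlib

      PAGE = 4096
      image = bytes(2 * PAGE) + bytes(range(256)) * 16 + bytes(2 * PAGE)

      pages = [image[i:i + PAGE] for i in range(0, len(image) // PAGE * PAGE, PAGE)]
      counts = {}
      for pg in pages:
          digest = hashlib.sha256(pg).hexdigest()
          counts[digest] = counts.get(digest, 0) + 1

      dupes = sum(c - 1 for c in counts.values())
      print(f"{dupes} of {len(pages)} pages duplicate another page")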

  11. A Systematic Approach to Simulating Metabolism in Computational Toxicology. I. The Times Heuristic Modeling Framework

    EPA Science Inventory

    This paper presents a new system for automated 2D-3D migration of chemicals in large databases with conformer multiplication. The main advantages of this system are its straightforward performance, reasonable execution time, simplicity, and applicability to building large 3D che...

  12. MESA: Message-Based System Analysis Using Runtime Verification

    NASA Technical Reports Server (NTRS)

    Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter

    2017-01-01

    In this paper, we present a novel approach and framework for run-time verification of large, safety-critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site, and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety-critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to the large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and interface it at the message queue level to a powerful verification engine. We implemented our approach in the tool called MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems and enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
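
    A sketch of the two properties mentioned above (duplicate and out-of-order messages) as a plain Python monitor; the message schema is an assumption, and MESA itself expresses such properties in TraceContract's Scala-embedded DSL:

      # Check a message stream for duplicate ids and out-of-order sequence
      # numbers, reporting every violation found.
      def check_stream(messages):
          seen, last_seq, violations = set(), -1, []
          for msg in messages:
              if msg["id"] in seen:
                  violations.append(("duplicate", msg["id"]))
              seen.add(msg["id"])
              if msg["seq"] < last_seq:
                  violations.append(("out-of-order", msg["seq"]))
              last_seq = max(last_seq, msg["seq"])
          return violations

      stream = [{"id": "a", "seq": 1}, {"id": "b", "seq": 3},
                {"id": "a", "seq": 2}, {"id": "c", "seq": 4}]
      print(check_stream(stream))  # [('duplicate', 'a'), ('out-of-order', 2)]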

  13. Quantum mechanical fragment methods based on partitioning atoms or partitioning coordinates.

    PubMed

    Wang, Bo; Yang, Ke R; Xu, Xuefei; Isegawa, Miho; Leverentz, Hannah R; Truhlar, Donald G

    2014-09-16

    Conspectus: The development of more efficient and more accurate ways to represent reactive potential energy surfaces is a requirement for extending the simulation of large systems to more complex systems, longer-time dynamical processes, and more complete statistical mechanical sampling. One way to treat large systems is by direct dynamics fragment methods. Another way is by fitting system-specific analytic potential energy functions with methods adapted to large systems. Here we consider both approaches. First we consider three fragment methods that allow a given monomer to appear in more than one fragment. The first two approaches are the electrostatically embedded many-body (EE-MB) expansion and the electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), which we have shown to yield quite accurate results even when one restricts the calculations to include only electrostatically embedded dimers. The third fragment method is the electrostatically embedded molecular tailoring approach (EE-MTA), which is more flexible than EE-MB and EE-MB-CE. We show that electrostatic embedding greatly improves the accuracy of these approaches compared with the original unembedded approaches. Quantum mechanical fragment methods share with combined quantum mechanical/molecular mechanical (QM/MM) methods the need to treat a quantum mechanical fragment in the presence of the rest of the system, which is especially challenging for those parts of the rest of the system that are close to the boundary of the quantum mechanical fragment. This is a delicate matter even for fragments that are not covalently bonded to the rest of the system, but it becomes even more difficult when the boundary of the quantum mechanical fragment cuts a bond. We have developed a suite of methods for more realistically treating interactions across such boundaries. These methods include redistributing and balancing the external partial atomic charges and the use of tuned fluorine atoms for capping dangling bonds, and we have shown that they can greatly improve the accuracy. Finally we present a new approach that goes beyond QM/MM by combining the convenience of molecular mechanics with the accuracy of fitting a potential function to electronic structure calculations on a specific system. To make the latter practical for systems with a large number of degrees of freedom, we developed a method to interpolate between local internal-coordinate fits to the potential energy. A key issue for the application to large systems is that rather than assigning the atoms or monomers to fragments, we assign the internal coordinates to reaction, secondary, and tertiary sets. Thus, we make a partition in coordinate space rather than atom space. Fits to the local dependence of the potential energy on tertiary coordinates are arrayed along a preselected reaction coordinate at a sequence of geometries called anchor points; the potential energy function is called an anchor points reactive potential. Electrostatically embedded fragment methods and the anchor points reactive potential, because they are based on treating an entire system by quantum mechanical electronic structure methods but are affordable for large and complex systems, have the potential to open new areas for accurate simulations where combined QM/MM methods are inadequate.
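
    The many-body expansion underlying EE-MB, truncated at dimers as the Conspectus describes, can be written in LaTeX notation as

      E \;\approx\; \sum_i E_i \;+\; \sum_{i<j} \left( E_{ij} - E_i - E_j \right),

    where each monomer energy E_i and dimer energy E_{ij} is computed with the fragment embedded in point charges representing the rest of the system.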

  14. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
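
    The paper builds its mappings from diffusion wavelets on graph Laplacians; the simplest member of that family, a plain Haar transform on a linear chain, already illustrates how a multilevel coarse-graining hierarchy is generated. A toy sketch (the random 16-bead chain and the number of levels are illustrative choices):

      import numpy as np

      def haar_coarse_grain(positions, levels=2):
          # Build a hierarchy of coarse-grained bead positions; each level
          # merges pairs of beads via the Haar scaling (averaging) function.
          hierarchy = [np.asarray(positions, dtype=float)]
          for _ in range(levels):
              fine = hierarchy[-1]
              hierarchy.append(0.5 * (fine[0::2] + fine[1::2]))
          return hierarchy

      chain = np.cumsum(np.random.randn(16, 3), axis=0)  # random 16-bead walk
      for level, beads in enumerate(haar_coarse_grain(chain)):
          print(f"level {level}: {len(beads)} beads")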

  15. Integrative Systems Biology for Data Driven Knowledge Discovery

    PubMed Central

    Greene, Casey S.; Troyanskaya, Olga G.

    2015-01-01

    Integrative systems biology is an approach that brings together diverse high-throughput experiments and databases to gain new insights into biological processes or systems at molecular through physiological levels. These approaches rely on diverse high-throughput experimental techniques that generate heterogeneous data by assaying varying aspects of complex biological processes. Computational approaches are necessary to provide an integrative view of these experimental results and enable data-driven knowledge discovery. Hypotheses generated from these approaches can direct definitive molecular experiments in a cost-effective manner. Using integrative systems biology approaches, we can leverage existing biological knowledge and large-scale data to improve our understanding of yet unknown components of a system of interest and how its malfunction leads to disease. PMID:21044756

  16. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  17. The Utilization of Psychologists for Staff Development in a Large Public School System: A Staff Development Director's Perspective.

    ERIC Educational Resources Information Center

    Stone, James L., Jr.

    This model proposes the TAP Team approach as an on-site delivery system for local school staff development in large, urban school systems. TAP emphasizes in-service training for both upgrading skills of staff and for helping staff acquire new skills in the areas of coping strategies, classroom management, communication skills, instructional…

  18. An interfaces approach to TES ground data system processing design with the Science Investigator-led Processing System (SIPS)

    NASA Technical Reports Server (NTRS)

    Kurian, R.; Grifin, A.

    2002-01-01

    Developing production-quality software to process the large volumes of scientific data is the responsibility of the TES Ground Data System, which is being developed at the Jet Propulsion Laboratory together with support contractor Raytheon/ITSS. The large data volume and processing requirements of the TES pose significant challenges to the design.

  19. Time-Dependent Thomas-Fermi Approach for Electron Dynamics in Metal Clusters

    NASA Astrophysics Data System (ADS)

    Domps, A.; Reinhard, P.-G.; Suraud, E.

    1998-06-01

    We propose a time-dependent Thomas-Fermi approach to the (nonlinear) dynamics of many-fermion systems. The approach relies on a hydrodynamical picture describing the system in terms of collective flow. We investigate in particular an application to electron dynamics in metal clusters. We make extensive comparisons with fully fledged quantal dynamical calculations and find overall good agreement. The approach thus provides a reliable and inexpensive scheme to study the electronic response of large metal clusters.

  20. Will Systems Biology Deliver Its Promise and Contribute to the Development of New or Improved Vaccines? What Really Constitutes the Study of "Systems Biology" and How Might Such an Approach Facilitate Vaccine Design.

    PubMed

    Germain, Ronald N

    2017-10-16

    A dichotomy exists in the field of vaccinology about the promise versus the hype associated with application of "systems biology" approaches to rational vaccine design. Some feel it is the only way to efficiently uncover currently unknown parameters controlling desired immune responses or discover what elements actually mediate these responses. Others feel that traditional experimental, often reductionist, methods for incrementally unraveling complex biology provide a more solid way forward, and that "systems" approaches are costly ways to collect data without gaining true insight. Here I argue that both views are inaccurate. This is largely because of confusion about what can be gained from classical experimentation versus statistical analysis of large data sets (bioinformatics) versus methods that quantitatively explain emergent properties of complex assemblies of biological components, with the latter reflecting what was previously called "physiology." Reductionist studies will remain essential for generating detailed insight into the functional attributes of specific elements of biological systems, but such analyses lack the power to provide a quantitative and predictive understanding of global system behavior. But by employing (1) large-scale screening methods for discovery of unknown components and connections in the immune system (omics), (2) statistical analysis of large data sets (bioinformatics), and (3) the capacity of quantitative computational methods to translate these individual components and connections into models of emergent behavior (systems biology), we will be able to better understand how the overall immune system functions and to determine with greater precision how to manipulate it to produce desired protective responses.

  1. Full Quantum Dynamics Simulation of a Realistic Molecular System Using the Adaptive Time-Dependent Density Matrix Renormalization Group Method.

    PubMed

    Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo

    2018-01-18

    The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations for the investigated system, which are nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can then be treated with the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.

  2. On the accuracy of modelling the dynamics of large space structures

    NASA Technical Reports Server (NTRS)

    Diarra, C. M.; Bainum, P. M.

    1985-01-01

    Proposed space missions will require large-scale, lightweight, space-based structural systems. Large space structure technology (LSST) systems will have to accommodate (among others): ocean data systems; electronic mail systems; large multibeam antenna systems; and space-based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modeling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modeling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on an a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column.
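
    The lumped-parameter route can be made concrete with a toy example: a fixed-free chain of point masses joined by springs, whose natural frequencies follow from the generalized eigenproblem K v = omega^2 M v. This is a minimal illustration of the finite element idea only, not the Hoop/Column model itself; the masses and stiffnesses below are invented.

      import numpy as np

      def chain_frequencies(masses, stiffnesses):
          # Natural frequencies (rad/s) of a fixed-free lumped mass-spring
          # chain; spring i connects mass i to mass i-1 (spring 0 to ground).
          n = len(masses)
          M = np.diag(masses)
          K = np.zeros((n, n))
          for i in range(n):
              K[i, i] += stiffnesses[i]
              if i + 1 < n:
                  K[i, i] += stiffnesses[i + 1]
                  K[i, i + 1] = K[i + 1, i] = -stiffnesses[i + 1]
          w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)   # K v = w^2 M v
          return np.sort(np.sqrt(np.real(w2)))

      print(chain_frequencies([1.0, 1.0, 1.0], [100.0, 100.0, 100.0]))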

  3. An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce

    2012-01-01

    Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10-50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, incorporates: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements, are summarized.

  4. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
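
    As a minimal illustration of the underlying fault-tree calculus (the events, tree structure, and probabilities below are invented for the example and are not the paper's model), a top-event probability is assembled from independent basic events through AND and OR gates:

      def and_gate(probs):
          # Probability that all independent basic events occur.
          p = 1.0
          for q in probs:
              p *= q
          return p

      def or_gate(probs):
          # Probability that at least one independent basic event occurs.
          p = 1.0
          for q in probs:
              p *= 1.0 - q
          return 1.0 - p

      # Invented mini-tree: usage failure if data migration fails (OR of two
      # causes) or if training AND vendor support both fail.
      p_migration = or_gate([0.05, 0.02])    # bad legacy data, mapping errors
      p_people    = and_gate([0.10, 0.20])   # poor training, weak support
      print(f"P(usage failure) = {or_gate([p_migration, p_people]):.4f}")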

  5. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  6. An Approach for Removing Redundant Data from RFID Data Streams

    PubMed Central

    Mahdin, Hairulnizam; Abawajy, Jemal

    2011-01-01

    Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID naturally generates a large amount of duplicate readings. Removing these duplicates from the RFID data stream is paramount, as they contribute no new information to the system and waste system resources. Existing approaches to deal with this problem cannot fulfill the real-time demands of processing the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement as compared to the existing approaches. PMID:22163730
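
    A baseline sliding-window filter conveys what duplicate removal means here, though the paper's approach is engineered for massive real-time streams and is more refined than this sketch; the two-second window is an arbitrary illustrative choice.

      def filter_duplicates(readings, window=2.0):
          # Keep a reading only if the same tag was not kept within the last
          # `window` seconds; timestamps are assumed non-decreasing.
          last_kept = {}
          for tag, t in readings:
              if tag not in last_kept or t - last_kept[tag] > window:
                  last_kept[tag] = t
                  yield tag, t

      stream = [("A", 0.0), ("A", 0.5), ("B", 0.6), ("A", 3.0)]
      print(list(filter_duplicates(stream)))  # ('A', 0.5) is dropped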

  7. Hybrid estimation of complex systems.

    PubMed

    Hofbaur, Michael W; Williams, Brian C

    2004-10-01

    Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
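
    The focusing idea can be caricatured in a few lines: propagate a discrete mode distribution through the mode-transition model, weight it by how well each mode's filter explains the current observation, and keep only the k most likely modes. Everything below is schematic; in the paper the likelihoods come from the continuous filters of a hybrid estimator and the focusing uses model-based search rather than exhaustive enumeration.

      def update_modes(prior, transition, likelihood, k=3):
          # One step of focused mode estimation: predict through the mode
          # transitions, weight by the observation likelihood each mode's
          # filter reports, keep only the k most likely modes, renormalize.
          predicted = {}
          for m, p in prior.items():
              for (src, dst), q in transition.items():
                  if src == m:
                      predicted[dst] = predicted.get(dst, 0.0) + p * q
          posterior = {m: p * likelihood.get(m, 0.0)
                       for m, p in predicted.items()}
          top = dict(sorted(posterior.items(), key=lambda kv: -kv[1])[:k])
          z = sum(top.values()) or 1.0
          return {m: p / z for m, p in top.items()}

      prior = {"nominal": 0.9, "fault1": 0.1}
      trans = {("nominal", "nominal"): 0.98, ("nominal", "fault1"): 0.01,
               ("nominal", "fault2"): 0.01, ("fault1", "fault1"): 1.0,
               ("fault2", "fault2"): 1.0}
      lik = {"nominal": 0.2, "fault1": 0.9, "fault2": 0.9}
      print(update_modes(prior, trans, lik, k=2))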

  8. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft. (See CASI ID 20100021910 for supplemental data disk.)

  9. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., a two-area, three-area, and four-area power system). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate-sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate the utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  10. A Holistic Management Architecture for Large-Scale Adaptive Networks

    DTIC Science & Technology

    2007-09-01

    transmission and processing overhead required for management. The challenges of building models to describe dynamic systems are well-known to the field of...increases the challenge of finding a simple approach to assessing the state of the network. Moreover, the performance state of one network link may be... challenging. These obstacles indicate the need for a less comprehensive-analytical, more systemic-holistic approach to managing networks. This approach might

  11. Toolkit Approach to Integrating Library Resources into the Learning Management System

    ERIC Educational Resources Information Center

    Black, Elizabeth L.

    2008-01-01

    As use of learning management systems (LMS) increases, it is essential that librarians are there. Ohio State University Libraries took a toolkit approach to integrate library content in the LMS to facilitate creative and flexible interactions between librarians, students and faculty in Ohio State University's large and decentralized academic…

  12. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    PubMed

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
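
    A simple leader-style heuristic illustrates the kind of online spatial clustering involved, using the paper's 10 km figure as the radius threshold; SOSC itself is self-organizing and adapts the cluster number to the data, so this sketch only mimics its input/output behavior, and the coordinates are invented.

      import math

      def haversine_km(p, q):
          # Great-circle distance in km between (lat, lon) points in degrees.
          lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
          a = (math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1)
               * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
          return 6371.0 * 2 * math.asin(math.sqrt(a))

      def cluster_users(users, radius_km=10.0):
          # Assign each user to the first cluster center within radius_km,
          # otherwise open a new cluster at the user's position.
          centers, members = [], []
          for u in users:
              for i, c in enumerate(centers):
                  if haversine_km(u, c) <= radius_km:
                      members[i].append(u)
                      break
              else:
                  centers.append(u)
                  members.append([u])
          return centers, members

      users = [(39.90, 116.40), (39.95, 116.45), (31.23, 121.47)]
      print(len(cluster_users(users)[0]), "clusters")  # -> 2 clusters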

  13. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around in processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences in deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  14. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    PubMed

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system is often composed of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual-based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottlenecks. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual-based approach is effective in identifying trends and anomalies of the systems.

  15. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, built around two approaches. The first is to limit thermal testing to sub-elements of the total system only in a compact configuration (i.e., not fully deployed). The second approach is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  16. Scalable cluster administration - Chiba City I approach and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navarro, J. P.; Evard, R.; Nurmi, D.

    2002-07-01

    Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.

  17. The "Trojan Horse" approach to tumor immunotherapy: targeting the tumor microenvironment.

    PubMed

    Nelson, Delia; Fisher, Scott; Robinson, Bruce

    2014-01-01

    Most anticancer therapies, including immunotherapies, are given systemically; yet therapies given directly into tumors may be more effective, particularly those that overcome natural suppressive factors in the tumor microenvironment. The "Trojan Horse" approach of intratumoral delivery aims to promote immune-mediated destruction by inducing microenvironmental changes within the tumor while avoiding the systemic toxicity that is often associated with more "full frontal" treatments such as transfer of large numbers of laboratory-expanded tumor-specific cytotoxic T lymphocytes or large intravenous doses of cytokine. Numerous studies have demonstrated that intratumoral therapy has the capacity to minimize local suppression, to induce sufficient "dangerous" tumor cell death to cross-prime strong immune responses, and to render tumor blood vessels amenable to immune cell traffic, thereby inducing effector cell changes in secondary lymphoid organs. However, the key to its success is the design of a sound rational approach based on evidence. There are compelling preclinical data for local immunotherapy approaches in tumor immunology. This review summarizes how immune events within a tumor can be modified by local approaches, how this can affect systemic antitumor immunity such that distal sites are attacked, and what approaches have been proven most successful so far in animals and patients.

  18. Evaluating Recommendation Systems

    NASA Astrophysics Data System (ADS)

    Shani, Guy; Gunawardana, Asela

    Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer who wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large-scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the properties that they evaluate.
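
    In the offline setting, a typical comparative metric is precision@k computed against held-out interactions. A minimal sketch, with algorithms and data invented purely for illustration:

      def precision_at_k(recommended, relevant, k):
          # Fraction of the top-k recommendations the user actually liked.
          return sum(1 for item in recommended[:k] if item in relevant) / k

      # Offline comparison of two candidate recommenders on held-out data.
      held_out = {"u1": {"a", "c"}, "u2": {"d"}}
      runs = {"algoA": {"u1": ["a", "b", "c"], "u2": ["e", "d", "f"]},
              "algoB": {"u1": ["b", "e", "f"], "u2": ["d", "a", "c"]}}
      for name, recs in runs.items():
          scores = [precision_at_k(recs[u], held_out[u], k=3)
                    for u in held_out]
          print(name, sum(scores) / len(scores))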

  19. From a Proven Correct Microkernel to Trustworthy Large Systems

    NASA Astrophysics Data System (ADS)

    Andronick, June

    The seL4 microkernel was the world's first general-purpose operating system kernel with a formal, machine-checked proof of correctness. The next big step in the challenge of building truly trustworthy systems is to provide a framework for developing secure systems on top of seL4. This paper first gives an overview of seL4's correctness proof, together with its main implications and assumptions, and then describes our approach to provide formal security guarantees for large, complex systems.

  20. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  1. Renaissance: A revolutionary approach for providing low-cost ground data systems

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1996-01-01

    NASA is shifting its attention from large missions to a greater number of smaller missions with reduced development schedules and budgets. In relation to this, the Renaissance Mission Operations and Data Systems Directorate systems engineering process is presented. The aim of the Renaissance approach is to improve system performance, reduce cost and schedule, and meet specific customer needs. The approach includes: the early involvement of the users to define the mission requirements and system architectures; the streamlining of management processes; the development of a flexible cost estimation capability; and the ability to insert technology. Renaissance-based systems demonstrate significant reuse of commercial off-the-shelf building blocks in an integrated system architecture.

  2. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
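
    In flow-based style, tasks declare named input and output ports and the runtime infers the wiring, which is what makes pipelines easy to rearrange. The sketch below is a generic illustration of that principle in plain Python, not SciLuigi's actual API; all task and port names are invented.

      def task(inputs=(), outputs=()):
          # Declare a function's named input/output ports, flow-based style.
          def wrap(fn):
              fn.inputs, fn.outputs = tuple(inputs), tuple(outputs)
              return fn
          return wrap

      @task(outputs=("raw",))
      def fetch(data):
          data["raw"] = [0.1, 0.9, 0.4]

      @task(inputs=("raw",), outputs=("model",))
      def train(data):
          data["model"] = sum(data["raw"]) / len(data["raw"])  # toy "model"

      def run(tasks):
          # Run whichever tasks have all inputs available, until done.
          data, pending = {}, list(tasks)
          while pending:
              ready = [t for t in pending
                       if all(k in data for k in t.inputs)]
              if not ready:
                  raise RuntimeError("unsatisfiable dependencies")
              for t in ready:
                  t(data)
                  pending.remove(t)
          return data

      print(run([train, fetch]))  # execution order is inferred from the ports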

  3. An efficient approach for surveillance of childhood diabetes by type derived from electronic health record data: the SEARCH for Diabetes in Youth Study

    PubMed Central

    Zhong, Victor W; Obeid, Jihad S; Craig, Jean B; Pfaff, Emily R; Thomas, Joan; Jaacks, Lindsay M; Beavers, Daniel P; Carey, Timothy S; Lawrence, Jean M; Dabelea, Dana; Hamman, Richard F; Bowlby, Deborah A; Pihoker, Catherine; Saydah, Sharon H

    2016-01-01

    Objective: To develop an efficient surveillance approach for childhood diabetes by type across 2 large US health care systems, using phenotyping algorithms derived from electronic health record (EHR) data. Materials and Methods: Presumptive diabetes cases <20 years of age from 2 large independent health care systems were identified as those having ≥1 of the 5 indicators in the past 3.5 years, including elevated HbA1c, elevated blood glucose, diabetes-related billing codes, patient problem list, and outpatient anti-diabetic medications. EHRs of all the presumptive cases were manually reviewed, and true diabetes status and diabetes type were determined. Algorithms for identifying diabetes cases overall and classifying diabetes type were either prespecified or derived from classification and regression tree analysis. A surveillance approach was developed based on the best algorithms identified. Results: We developed a stepwise surveillance approach using billing code–based prespecified algorithms and targeted manual EHR review, which efficiently and accurately ascertained and classified diabetes cases by type, in both health care systems. The sensitivity and positive predictive values in both systems were approximately ≥90% for ascertaining diabetes cases overall and classifying cases with type 1 or type 2 diabetes. About 80% of the cases with "other" type were also correctly classified. This stepwise surveillance approach resulted in a >70% reduction in the number of cases requiring manual validation compared to traditional surveillance methods. Conclusion: EHR data may be used to establish an efficient approach for large-scale surveillance for childhood diabetes by type, although some manual effort is still needed. PMID:27107449
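
    A rule-based sketch of the case-ascertainment step and the validation metrics it is judged by: the five indicators follow the abstract, but the field names and the HbA1c/glucose cutoffs (standard diagnostic thresholds) are assumptions, not the study's exact criteria.

      def presumptive_case(r):
          # Any one of the five EHR indicators flags a presumptive case.
          # Field names and numeric cutoffs are illustrative assumptions.
          return (r["max_hba1c"] >= 6.5 or r["max_glucose"] >= 200
                  or r["diabetes_billing_codes"] > 0 or r["on_problem_list"]
                  or r["antidiabetic_rx"] > 0)

      def sensitivity_ppv(flags, truth):
          # flags: algorithm output; truth: chart-review gold standard.
          tp = sum(f and t for f, t in zip(flags, truth))
          fn = sum(t and not f for f, t in zip(flags, truth))
          fp = sum(f and not t for f, t in zip(flags, truth))
          return tp / (tp + fn), tp / (tp + fp)

      rec = {"max_hba1c": 7.1, "max_glucose": 150, "diabetes_billing_codes": 2,
             "on_problem_list": False, "antidiabetic_rx": 0}
      print(presumptive_case(rec))      # -> True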

  4. Self-* properties through gossiping.

    PubMed

    Babaoglu, Ozalp; Jelasity, Márk

    2008-10-28

    As computer systems have become more complex, numerous competing approaches have been proposed for these systems to self-configure, self-manage, self-repair, and so on, such that human intervention in their operation can be minimized. In ubiquitous systems, this has always been a central issue as well. In this paper, we review techniques to implement self-* properties in large-scale, decentralized networks through bio-inspired techniques in general, and gossip-based algorithms in particular. We believe that gossip-based algorithms could be an important inspiration for solving problems in ubiquitous computing as well. As an example, we outline a novel approach to arrange large numbers of mobile agents (e.g. vehicles, rescue teams carrying mobile devices) into different formations in a totally decentralized manner. The approach is inspired by the biological mechanism of cell sorting via differential adhesion, as well as by our earlier work in self-organizing peer-to-peer overlay networks.
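
    The canonical gossip algorithm is decentralized averaging: through repeated random pairwise exchanges, every node converges to the global mean with no coordinator and no system-wide infrastructure. A minimal sketch (the values and round count are illustrative):

      import random

      def gossip_average(values, rounds=200, seed=1):
          # Each round a random pair of nodes exchanges state and both move
          # to the midpoint; all nodes converge to the global mean.
          rng = random.Random(seed)
          x = list(values)
          for _ in range(rounds):
              i, j = rng.sample(range(len(x)), 2)
              x[i] = x[j] = 0.5 * (x[i] + x[j])
          return x

      print(gossip_average([0.0, 0.0, 0.0, 100.0]))  # all approach 25.0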

  5. Shield system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, D.R.; Chandler, J.R.; Church, J.P.

    1979-01-01

    The SHIELD system is a powerful new computational tool for calculation of isotopic inventory, radiation sources, decay heat, and shielding assessment in part of the nuclear fuel cycle. The integrated approach used in this system permits the communication and management of large fields of numbers efficiently, thus allowing the user to address the technical rather than the computational aspects of a problem. Emphasis on graphical outputs permits large fields of resulting numbers to be efficiently displayed.

  6. Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator

    PubMed Central

    Omar, Mohamed A.

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients could result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general purpose approach for solving the static equilibrium in large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732
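
    As a generic illustration of reaching static equilibrium by draining energy (the paper's mechanism is built on Baumgarte constraint stabilization in joint coordinates; this unconstrained caricature is the simpler technique known as dynamic relaxation), one integrates heavily damped dynamics until motion dies out. All parameters below are invented:

      import numpy as np

      def relax_to_equilibrium(q0, force, mass=1.0, damping=2.0,
                               dt=0.01, tol=1e-6, max_steps=200_000):
          # Integrate heavily damped dynamics (semi-implicit Euler) until
          # velocities and residual forces vanish; the damping term drains
          # the kinetic energy carried by the initial transients.
          q = np.asarray(q0, dtype=float)
          v = np.zeros_like(q)
          for _ in range(max_steps):
              v += dt * (force(q) - damping * v) / mass
              q += dt * v
              if np.linalg.norm(v) < tol and np.linalg.norm(force(q)) < tol:
                  return q
          raise RuntimeError("did not settle to equilibrium")

      # Two unit masses on springs (k = 50) under gravity: q_eq = -g/k.
      print(relax_to_equilibrium([0.0, 0.0], lambda q: -50.0 * q - 9.81))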

  7. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    PubMed

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients could result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general purpose approach for solving the static equilibrium in large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.

  8. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  9. Uncertainty management in intelligent design aiding systems

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Gabbert, Paula S.

    1988-01-01

    A novel approach to uncertainty management which is particularly effective in intelligent design aiding systems for large-scale systems is presented. The use of this approach in the materials handling system design domain is discussed. It is noted that, at any point in the design process, a point value can be obtained for the evaluation of feasible designs; however, the techniques described provide unique solutions for these point values using only the current information about the design environment.

  10. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
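
    A minimal discrete-time population-dynamics (cloning) estimator shows where the finite-time and finite-population errors enter: the scaled cumulant generating function is accumulated from the population-averaged cloning factor, and the paper's contribution is to extrapolate such estimators in 1/T and 1/(population size). The chain, observable, and parameters below are invented for illustration.

      import math, random

      def cloning_scgf(s, P, a, n_copies=1000, T=2000, seed=0):
          # Population-dynamics estimate of the scaled cumulant generating
          # function psi(s) = lim_T (1/T) ln E[exp(-s * A_T)] for a Markov
          # chain with transition matrix P and per-step observable a[x].
          rng = random.Random(seed)
          states = range(len(P))
          pop = [0] * n_copies                  # all copies start in state 0
          log_mean_w = 0.0
          for _ in range(T):
              pop = [rng.choices(states, weights=P[x])[0] for x in pop]
              w = [math.exp(-s * a[x]) for x in pop]
              log_mean_w += math.log(sum(w) / n_copies)
              pop = rng.choices(pop, weights=w, k=n_copies)  # selection step
          return log_mean_w / T                 # biased at finite T, n_copies

      # Two-state chain; A_T counts the time spent in state 1.
      print(cloning_scgf(0.5, [[0.9, 0.1], [0.5, 0.5]], a=[0.0, 1.0]))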

  11. Intracoronary stent implantation: new approach using a monorail system and new large-lumen 7F catheters from the brachial route.

    PubMed

    Jenny, D B; Robert, G P; Fajadet, J C; Cassagneau, B G; Marco, J

    1992-04-01

    In this brief report we describe a case of successful multivessel PTCA with intracoronary stent implantation using a new large-lumen 7F catheter from the left brachial approach. The application of this technique should be considered for intravascular stent implantation when anticoagulation ideally should not be interrupted or in anatomical situations limiting femoral vascular access.

  12. Combining electronic structure and many-body theory with large databases: A method for predicting the nature of 4f states in Ce compounds

    NASA Astrophysics Data System (ADS)

    Herper, H. C.; Ahmed, T.; Wills, J. M.; Di Marco, I.; Björkman, T.; Iuşan, D.; Balatsky, A. V.; Eriksson, O.

    2017-08-01

    Recent progress in materials informatics has opened up the possibility of a new approach to accessing properties of materials in which one assays the aggregate properties of a large set of materials within the same class in addition to a detailed investigation of each compound in that class. Here we present a large-scale investigation of electronic properties and correlated magnetism in Ce-based compounds, accompanied by a systematic study of the electronic structure and 4f-hybridization function of a large body of Ce compounds, with the goal of elucidating the nature of the 4f states and their interrelation with the measured Kondo energy in these compounds. The hybridization function has been analyzed for more than 350 data sets (being part of the IMS database) of cubic Ce compounds using electronic structure theory that relies on a full-potential approach. We demonstrate that the strength of the hybridization function, evaluated in this way, allows us to draw precise conclusions about the degree of localization of the 4f states in these compounds. The theoretical results are entirely consistent with all experimental information relevant to the degree of 4f localization for all investigated materials. Furthermore, a more detailed analysis of the electronic structure and the hybridization function allows us to make precise statements about Kondo correlations in these systems. The calculated hybridization functions, together with the corresponding density of states, reproduce the expected exponential behavior of the observed Kondo temperatures and prove a consistent trend in real materials. This trend allows us to predict which systems may be correctly identified as Kondo systems. A strong anticorrelation between the size of the hybridization function and the volume of the systems has been observed. The information entropy for this set of systems is about 0.42. Our approach demonstrates the predictive power of materials informatics when a large number of materials is used to establish significant trends. This predictive power can be used to design new materials with desired properties. The applicability of this approach for other correlated electron systems is discussed.

  13. Hierarchical optimal control of large-scale nonlinear chemical processes.

    PubMed

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, with a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and capability of the proposed hierarchical approach in finding the optimal solution, compared with the centralized method.

  14. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and from application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
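
    For reference, each ordinary kriging prediction solves one dense linear system per query point, which is exactly the cost that covariance tapering attacks by sparsifying the covariance block. A dense sketch (covariance model and data invented for illustration):

      import numpy as np

      def ordinary_kriging(X, y, x0, cov):
          # Solve the ordinary-kriging system [C 1; 1' 0][w; mu] = [c0; 1];
          # the Lagrange multiplier mu enforces sum(w) = 1 (unbiasedness).
          n = len(X)
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = cov(d)
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = cov(np.linalg.norm(X - x0, axis=-1))
          w = np.linalg.solve(A, b)   # dense solve; tapering sparsifies A
          return w[:n] @ y

      X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      y = np.array([1.0, 2.0, 3.0])
      gauss = lambda h: np.exp(-(h / 1.5) ** 2)
      print(ordinary_kriging(X, y, np.array([0.5, 0.5]), gauss))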

  15. A Segmented Ion-Propulsion Engine

    NASA Technical Reports Server (NTRS)

    Brophy, John R.

    1992-01-01

    A new design approach for high-power (100-kW class or greater) ion engines conceptually divides a single engine into a combination of smaller discharge chambers integrated to operate as a single large engine. Analogous to a multicylinder automobile engine, benefits include reduction in required accelerator system span-to-gap ratio for large-area engines, reduction in required hollow-cathode emission current, mitigation of plasma-uniformity problem, increased tolerance to accelerator system faults, and reduction in vacuum-system pumping speed.

  16. Reasoning Mind Genie 2: An Intelligent Tutoring System as a Vehicle for International Transfer of Instructional Methods in Mathematics

    ERIC Educational Resources Information Center

    Khachatryan, George A.; Romashov, Andrey V.; Khachatryan, Alexander R.; Gaudino, Steven J.; Khachatryan, Julia M.; Guarian, Konstantin R.; Yufa, Nataliya V.

    2014-01-01

    Effective mathematics teachers have a large body of professional knowledge, which is largely undocumented and shared by teachers working in a given country's education system. The volume and cultural nature of this knowledge make it particularly challenging to share curricula and instructional methods between countries. Thus, approaches based on…

  17. Mean-field approaches to the totally asymmetric exclusion process with quenched disorder and large particles

    NASA Astrophysics Data System (ADS)

    Shaw, Leah B.; Sethna, James P.; Lee, Kelvin H.

    2004-08-01

    The process of protein synthesis in biological systems resembles a one-dimensional driven lattice gas in which the particles (ribosomes) have spatial extent, covering more than one lattice site. Realistic, nonuniform gene sequences lead to quenched disorder in the particle hopping rates. We study the totally asymmetric exclusion process with large particles and quenched disorder via several mean-field approaches and compare the mean-field results with Monte Carlo simulations. Mean-field equations obtained from the literature are found to be reasonably effective in describing this system. A numerical technique is developed for computing the particle current rapidly. The mean-field approach is extended to include two-point correlations between adjacent sites. The two-point results are found to match Monte Carlo simulations more closely.
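
    A direct Monte Carlo sketch of the model the mean-field approaches are compared against: extended particles of size ell hopping on a ring with quenched, site-dependent rates, with the current measured from accepted hops. Lattice size, density, and rate distribution are illustrative choices:

      import random

      def run_tasep(L=60, n=10, ell=3, sweeps=5000, seed=0):
          # TASEP on a ring: n particles of size ell, quenched site rates.
          rng = random.Random(seed)
          rates = [rng.uniform(0.5, 1.0) for _ in range(L)]  # disorder
          heads = [i * (L // n) for i in range(n)]           # no overlaps
          occ = [False] * L
          for h in heads:                         # mark all covered sites
              for k in range(ell):
                  occ[(h - k) % L] = True
          hops = 0
          for _ in range(sweeps):
              for _ in range(n):                  # random sequential update
                  i = rng.randrange(n)
                  nxt = (heads[i] + 1) % L
                  if not occ[nxt] and rng.random() < rates[heads[i]]:
                      occ[nxt] = True             # head advances one site
                      occ[(heads[i] - ell + 1) % L] = False  # tail vacated
                      heads[i] = nxt
                      hops += 1
          return hops / (sweeps * L)              # current per site per sweep

      print(run_tasep())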

  18. Fluid-structure interaction involving large deformations: 3D simulations and applications to biological systems

    NASA Astrophysics Data System (ADS)

    Tian, Fang-Bao; Dai, Hu; Luo, Haoxiang; Doyle, James F.; Rousseau, Bernard

    2014-02-01

    Three-dimensional fluid-structure interaction (FSI) involving large deformations of flexible bodies is common in biological systems, but accurate and efficient numerical approaches for modeling such systems are still scarce. In this work, we report a successful case of combining an existing immersed-boundary flow solver with a nonlinear finite-element solid-mechanics solver specifically for three-dimensional FSI simulations. This method represents a significant enhancement over similar methods previously available. Based on the Cartesian grid, the viscous incompressible flow solver can handle boundaries of large displacements with simple mesh generation. The solid-mechanics solver has separate subroutines for analyzing general three-dimensional bodies and thin-walled structures composed of frames, membranes, and plates. Both geometric nonlinearity associated with large displacements and material nonlinearity associated with large strains are incorporated in the solver. The FSI is achieved through a strong coupling and partitioned approach. We perform several validation cases, and the results may be used to expand the currently limited database of FSI benchmark studies. Finally, we demonstrate the versatility of the present method by applying it to the aerodynamics of elastic wings of insects and the flow-induced vocal fold vibration.

  20. Global Consensus Theorem and Self-Organized Criticality: Unifying Principles for Understanding Self-Organization, Swarm Intelligence and Mechanisms of Carcinogenesis

    PubMed Central

    Rosenfeld, Simon

    2013-01-01

    Complex biological systems manifest a large variety of emergent phenomena among which prominent roles belong to self-organization and swarm intelligence. Generally, each level in a biological hierarchy possesses its own systemic properties and requires its own way of observation, conceptualization, and modeling. In this work, an attempt is made to outline general guiding principles in exploration of a wide range of seemingly dissimilar phenomena observed in large communities of individuals devoid of any personal intelligence and interacting with each other through simple stimulus-response rules. Mathematically, these guiding principles are well captured by the Global Consensus Theorem (GCT) equally applicable to neural networks and to Lotka-Volterra population dynamics. Universality of the mechanistic principles outlined by GCT allows for a unified approach to such diverse systems as biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, social networks and many other systems. Another cluster of universal laws governing the self-organization in large communities of locally interacting individuals is built around the principle of self-organized criticality (SOC). The GCT and SOC, separately or in combination, provide a conceptual basis for understanding the phenomena of self-organization occurring in large communities without involvement of a supervisory authority, without system-wide informational infrastructure, and without mapping of a general plan of action onto cognitive/behavioral faculties of its individual members. Cancer onset and proliferation serve as an important example of application of these conceptual approaches. In this paper, the point of view is put forward that apparently irreconcilable contradictions between two opposing theories of carcinogenesis, that is, the Somatic Mutation Theory and the Tissue Organization Field Theory, may be resolved using the systemic approaches provided by GCT and SOC. PMID:23471309

  1. Taking the Mystery Out of Research in Computing Information Systems: A New Approach to Teaching Research Paradigm Architecture.

    ERIC Educational Resources Information Center

    Heslin, J. Alexander, Jr.

    In senior-level undergraduate research courses in Computer Information Systems (CIS), students are required to read and assimilate a large volume of current research literature. One course objective is to demonstrate to the student that there are patterns or models or paradigms of research. A new approach in identifying research paradigms is…

  2. A New Approach to Create Image Control Networks in ISIS

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Berry, K. L.; Mapel, J. A.; Walldren, J. C.

    2017-06-01

    A new approach was used to create a feature-based control point network that required the development of new tools in the Integrated Software for Imagers and Spectrometers (ISIS3) system to process very large datasets.

  3. Practice Makes Perfect?: Effective Practice Instruction in Large Ensembles

    ERIC Educational Resources Information Center

    Prichard, Stephanie

    2012-01-01

    Helping young musicians learn how to practice effectively is a challenge faced by all music educators. This article presents a system of individual music practice instruction that can be seamlessly integrated within large-ensemble rehearsals. Using a step-by-step approach, large-ensemble conductors can teach students to identify and isolate…

  4. Large-Scale High School Reform through School Improvement Networks: Exploring Possibilities for "Developmental Evaluation"

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.

    2016-01-01

    Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…

  5. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach on the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free, centralized optimization algorithm.
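
    The flavor of agent-objective alignment can be illustrated with a difference-reward bandit sketch on a toy version of the subset-selection task. The problem setup, reward shaping, and learning rule below are simplified reconstructions for illustration, not the paper's exact formulation.

        # Aligned ("difference") rewards on a toy subset-selection task: n devices
        # with signed distortions; the system objective is a subset whose mean
        # distortion is near zero. Each agent learns in/out from a reward comparing
        # the system objective with and without its own choice.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        eps = rng.normal(0.0, 1.0, n)          # device distortions (illustrative)
        q = np.zeros((n, 2))                   # action values: 0 = out, 1 = in
        alpha, explore = 0.1, 0.1

        def G(mask):
            # system objective (lower is better); empty subset is penalized
            return abs(eps[mask].mean()) if mask.any() else 1.0

        for episode in range(1000):
            greedy = q.argmax(axis=1)
            flip = rng.random(n) < explore
            act = np.where(flip, rng.integers(0, 2, n), greedy).astype(bool)
            g = G(act)
            for i in range(n):
                alt = act.copy()
                alt[i] = ~alt[i]
                d_reward = G(alt) - g          # positive if agent i's choice helps
                q[i, int(act[i])] += alpha * (d_reward - q[i, int(act[i])])

        best = q.argmax(axis=1).astype(bool)
        print(f"final |mean distortion| = {G(best):.4f} with {best.sum()} devices")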

  6. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three-dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite-difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods, which are applied via the block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.
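
    The semi-discretization step is easy to demonstrate in a 2D analogue: differencing Laplace's equation in y converts the PDE into a boundary value problem for a system of ODEs in x. The sketch below hands that system to a generic BVP solver rather than the authors' block unification scheme; the grid size and boundary data are illustrative.

        # Semi-discretization sketch: Laplace's equation on the unit square with
        # u(0,y)=0, u(1,y)=sin(pi*y), u(x,0)=u(x,1)=0. Central differences in y
        # turn the PDE into a large ODE system in x.
        import numpy as np
        from scipy.integrate import solve_bvp

        ny = 30
        y = np.linspace(0, 1, ny + 2)[1:-1]        # interior y-grid
        h = y[1] - y[0]

        def rhs(x, Y):
            u, up = Y[:ny], Y[ny:]
            lap_y = (np.roll(u, -1, 0) - 2 * u + np.roll(u, 1, 0)) / h**2
            # zero-Dirichlet walls: remove the wrap-around terms from roll
            lap_y[0] = (u[1] - 2 * u[0]) / h**2
            lap_y[-1] = (u[-2] - 2 * u[-1]) / h**2
            return np.vstack([up, -lap_y])          # u_xx = -u_yy

        def bc(Ya, Yb):
            return np.concatenate([Ya[:ny], Yb[:ny] - np.sin(np.pi * y)])

        x = np.linspace(0, 1, 11)
        sol = solve_bvp(rhs, bc, x, np.zeros((2 * ny, x.size)))
        exact = np.sinh(np.pi * 0.5) / np.sinh(np.pi) * np.sin(np.pi * y)
        print("max error at x=0.5:", np.abs(sol.sol(0.5)[:ny] - exact).max())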

  7. Cooperative multi-user detection and ranging based on pseudo-random codes

    NASA Astrophysics Data System (ADS)

    Morhart, C.; Biebl, E. M.

    2009-05-01

    We present an improved approach for a round-trip time-of-flight distance measurement system. The system is intended for use in a cooperative localisation system for automotive applications. Therefore, it is designed to address a large number of communication partners per measurement cycle. By using coded signals in a time division multiple access order, we can detect a large number of pedestrian sensors with just one car sensor. We achieve this by using very short transmit bursts in combination with a real-time correlation algorithm. Furthermore, the correlation approach offers real-time data concerning the time of arrival that can serve as a trigger impulse for other communication systems. The distance accuracy of the correlation result was further increased by adding a Fourier interpolation filter. The system performance was checked with a prototype at 2.4 GHz. We reached a distance measurement accuracy of 12 cm at a range of up to 450 m.
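
    The core of a pseudo-random code ranging receiver, correlating the received echo against the transmitted reference and reading the round-trip delay off the correlation peak, fits in a few lines. The chip rate, code length, and noise level below are assumed values, and the sketch resolves only whole chips; interpolation of the kind mentioned above is what pushes the accuracy to the sub-chip (centimetre) level.

        # Pseudo-random code ranging sketch: the circular cross-correlation peak
        # between the echo and the reference gives the round-trip delay in chips.
        import numpy as np

        rng = np.random.default_rng(2)
        chip_rate = 50e6                       # 50 Mchip/s (assumed)
        c = 3e8
        code = rng.choice([-1.0, 1.0], 4096)   # pseudo-random transmit burst

        true_dist = 123.0                                          # metres, one way
        delay = int(round(2 * true_dist / c * chip_rate))          # round trip, chips
        rx = np.roll(code, delay) + rng.normal(0, 2.0, code.size)  # noisy echo

        corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
        lag = int(np.argmax(corr))
        est = lag * c / (2 * chip_rate)
        print(f"estimated distance: {est:.2f} m (true {true_dist} m)")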

  8. A Functional Subnetwork Approach to Designing Synthetic Nervous Systems That Control Legged Robot Locomotion

    PubMed Central

    Szczecinski, Nicholas S.; Hunt, Alexander J.; Quinn, Roger D.

    2017-01-01

    A dynamical model of an animal’s nervous system, or synthetic nervous system (SNS), is a potentially transformational control method. Due to increasingly detailed data on the connectivity and dynamics of both mammalian and insect nervous systems, controlling a legged robot with an SNS is largely a problem of parameter tuning. Our approach to this problem is to design functional subnetworks that perform specific operations, and then assemble them into larger models of the nervous system. In this paper, we present networks that perform addition, subtraction, multiplication, division, differentiation, and integration of incoming signals. Parameters are set within each subnetwork to produce the desired output by utilizing the operating range of neural activity, R, the gain of the operation, k, and bounds based on biological values. The assembly of large networks from functional subnetworks underpins our recent results with MantisBot. PMID:28848419

  9. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
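
    The variance advantage of correlated over independent sampling, which underlies the procedure discussed above, can be seen in a generic toy estimate of a small difference of expectations. This illustrates the principle only; it is not the quantum Monte Carlo estimator of the paper.

        # Correlated vs independent sampling of E[f(x; a+da)] - E[f(x; a)]:
        # reusing the same random draws makes the two estimates fluctuate together,
        # so their difference has far smaller variance.
        import numpy as np

        rng = np.random.default_rng(3)
        a, da, n, reps = 1.0, 1e-3, 10_000, 200

        def f(x, s):
            return np.exp(-s * x**2)

        ind, cor = [], []
        for _ in range(reps):
            x1 = rng.normal(size=n)
            x2 = rng.normal(size=n)
            ind.append(f(x1, a + da).mean() - f(x2, a).mean())  # independent draws
            cor.append(f(x1, a + da).mean() - f(x1, a).mean())  # shared draws
        print("std, independent :", np.std(ind))
        print("std, correlated  :", np.std(cor))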

  10. Iterative User-Centered Design of a Next Generation Patient Monitoring System for Emergency Medical Response

    PubMed Central

    Gao, Tia; Kim, Matthew I.; White, David; Alm, Alexander M.

    2006-01-01

    We have developed a system for real-time patient monitoring during large-scale disasters. Our system is designed with scalable algorithms to monitor large numbers of patients, an intuitive interface to support the overwhelmed responders, and ad-hoc mesh networking capabilities to maintain connectivity to patients in the chaotic settings. This paper describes an iterative approach to user-centered design adopted to guide development of our system. This system is a part of the Advanced Health and Disaster Aid Network (AID-N) architecture. PMID:17238348

  11. Approaches to lunar base life support

    NASA Technical Reports Server (NTRS)

    Brown, M. F.; Edeen, M. A.

    1990-01-01

    Various approaches to reliable, low maintenance, low resupply regenerative long-term life support for lunar base application are discussed. The first approach utilizes Space Station Freedom physiochemical systems technology which has closed air and water loops with approximately 99 and 90 percent closure respectively, with minor subsystem changes to the SSF baseline improving the level of water resupply for the water loop. A second approach would be a physiochemical system, including a solid waste processing system and improved air and water loop closure, which would require only food and nitrogen for resupply. A hybrid biological/physiochemical life support system constitutes the third alternative, incorporating some level of food production via plant growth into the life support system. The approaches are described in terms of mass, power, and resupply requirements; and the potential evolution of a small, initial outpost to a large, self-sustaining base is discussed.

  12. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interactions are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements in most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organization structure, cost and time overruns can occur at any level and iterate back and forth, thus increasing cost and time. Past research has shown that rigorous approaches such as value-based design can be used to control such creep. But before rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating the ambiguity which arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These trials are continued for different requirements and at different sub-system levels. The results obtained show that the Cynefin framework can be used to improve the value of the system and for predictive analysis. Decision makers can use these findings together with rigorous approaches to improve the design of Large-Scale Complex Engineered Systems.

  13. Towards Behavioral Reflexion Models

    NASA Technical Reports Server (NTRS)

    Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance

    2009-01-01

    Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems. Systems-of-Systems (SoS) are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of an SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.

  14. Designing Domain-Specific HUMS Architectures: An Automated Approach

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Agarwal, Neha; Kumar, Pramod; Sundaram, Parthiban

    2004-01-01

    The HUMS automation system automates the design of HUMS architectures. The automated design process involves selection of solutions from a large space of designs as well as pure synthesis of designs. Hence the whole objective is to efficiently search for or synthesize designs or parts of designs in the database and to integrate them to form the entire system design. The automation system adopts two approaches in order to produce the designs: (a) a bottom-up approach and (b) a top-down approach. Both approaches are endowed with a suite of qualitative and quantitative techniques that enable (a) the selection of matching component instances, (b) the determination of design parameters, (c) the evaluation of candidate designs at component level and at system level, (d) the performance of cost-benefit analyses, and (e) the performance of trade-off analyses. In short, the automation system attempts to capitalize on the knowledge developed from years of experience in engineering, system design, and operation of HUMS systems in order to economically produce optimal and domain-specific designs.

  15. Complexity, Robustness, and Network Thermodynamics in Large-Scale and Multiagent Systems: A Hybrid Control Approach

    DTIC Science & Technology

    2012-01-11

    dynamic behavior, wherein a dissipative dynamical system can deliver only a fraction of its energy to its surroundings and can store only a fraction of the ... collection of interacting subsystems. The behavior and properties of the aggregate large-scale system can then be deduced from the behaviors of the ... uniqueness is established. This state space formalism of thermodynamics shows that the behavior of heat, as described by the conservation equations of

  16. Plant-Soil Feedback: Bridging Natural and Agricultural Sciences.

    PubMed

    Mariotte, Pierre; Mehrabi, Zia; Bezemer, T Martijn; De Deyn, Gerlinde B; Kulmatiski, Andrew; Drigo, Barbara; Veen, G F Ciska; van der Heijden, Marcel G A; Kardol, Paul

    2018-02-01

    In agricultural and natural systems researchers have demonstrated large effects of plant-soil feedback (PSF) on plant growth. However, the concepts and approaches used in these two types of systems have developed, for the most part, independently. Here, we present a conceptual framework that integrates knowledge and approaches from these two contrasting systems. We use this integrated framework to demonstrate (i) how knowledge from complex natural systems can be used to increase agricultural resource-use efficiency and productivity and (ii) how research in agricultural systems can be used to test hypotheses and approaches developed in natural systems. Using this framework, we discuss avenues for new research toward an ecologically sustainable and climate-smart future. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: Overview and Air-side System Description

    NASA Technical Reports Server (NTRS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina

    2016-01-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating the instabilities and complex diffraction expected from future large segmented-aperture telescopes.

  18. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size - the combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  19. The Ideal Oriented Co-design Approach Revisited

    NASA Astrophysics Data System (ADS)

    Johnstone, Christina

    There exist a large number of different methodologies for developing information systems on the market. This implies that there are also a large number of "best" ways of developing those information systems. Avison and Fitzgerald (2003) state that every methodology is built on a philosophy. By philosophy they refer to the underlying attitudes and viewpoints, and the different assumptions and emphases to be found within the specific methodology.

  20. Research directions in large scale systems and decentralized control

    NASA Technical Reports Server (NTRS)

    Tenney, R. R.

    1980-01-01

    Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.

  1. An in-plane nano-mechanics approach to achieve reversible resonance control of photonic crystal nanocavities.

    PubMed

    Chew, Xiongyeu; Zhou, Guangya; Yu, Hongbin; Chau, Fook Siong; Deng, Jie; Loke, Yee Chong; Tang, Xiaosong

    2010-10-11

    Control of photonic crystal resonances in conjunction with large spectral shifting is critical to achieving reconfigurable photonic crystal devices. We propose a simple approach to achieve nano-mechanical control of photonic crystal resonances within a compact, integrated, on-chip approach. Three different tip designs utilizing an in-plane nano-mechanical tuning approach are shown to achieve reversible and low-loss resonance control on a one-dimensional photonic crystal nanocavity. The proposed nano-mechanical approach, driven by a sub-micron micro-electromechanical system integrated on a low-loss suspended feeding nanowire waveguide, achieved relatively large resonance spectral shifts of up to 18 nm at a driving voltage of 25 V. Such designs may potentially be used as tunable optical filters or switches.

  2. Stability of large DC power systems using switching converters, with application to the international space station

    NASA Technical Reports Server (NTRS)

    Manners, B.; Gholdston, E. W.; Karimi, K.; Lee, F. C.; Rajagopalan, J.; Panov, Y.

    1996-01-01

    As space direct current (dc) power systems continue to grow in size, switching power converters are playing an ever larger role in power conditioning and control. When designing a large dc system using power converters of this type, special attention must be placed on the electrical stability of the system and of the individual loads on the system. In the design of the electric power system (EPS) of the International Space Station (ISS), the National Aeronautics and Space Administration (NASA) and its contractor team led by Boeing Defense & Space Group has placed a great deal of emphasis on designing for system and load stability. To achieve this goal, the team has expended considerable effort deriving a clear concept for defining system stability, both in a general sense and specifically with respect to the space station. The ISS power system presents numerous challenges with respect to system stability, such as high power, complex sources, and undefined loads. To complicate these issues, source and load components have been designed in parallel by three major subcontractors (Boeing, Rocketdyne, and McDonnell Douglas) with interfaces to both sources and loads being designed in different countries (Russia, Japan, Canada, Europe, etc.). These issues, coupled with the program goal of limiting costs, have proven a significant challenge to the program. As a result, the program has derived an impedance specification approach for system stability. This approach is based on the significant relationship between source and load impedances and the effect of this relationship on system stability. This approach is limited in its applicability by the theoretical and practical limits on component designs as presented by each system segment. As a result, the overall approach to system stability implemented by the ISS program consists of specific hardware requirements coupled with extensive system analysis and hardware testing. Following this approach, the ISS program plans to begin construction of the world's largest orbiting power system in 1997.
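
    Impedance-ratio criteria of this kind can be prototyped in a few lines: sweep frequency, compute the source output impedance and the load input impedance, and flag any band where their ratio approaches unity (a Middlebrook-style check). The filter values and constant-power-load model below are hypothetical, chosen only to illustrate the idea, not taken from the ISS design.

        # Impedance-ratio stability sketch: LC input filter output impedance vs the
        # negative incremental input impedance of a tightly regulated converter
        # modeled as a constant-power load. All values hypothetical.
        import numpy as np

        f = np.logspace(1, 5, 500)             # 10 Hz .. 100 kHz
        s = 2j * np.pi * f

        L, C, R_damp = 50e-6, 200e-6, 0.05     # filter inductance, capacitance, damping
        V_bus, P_load = 120.0, 2000.0          # bus voltage, converter power

        Zs = (s * L + R_damp) / (1 + (s * L + R_damp) * s * C)  # filter output impedance
        Zl = -(V_bus**2) / P_load                               # CPL small-signal impedance

        ratio = np.abs(Zs / Zl)                                 # minor-loop gain magnitude
        worst = ratio.max()
        print(f"max |Zs/Zl| = {worst:.2f} at {f[ratio.argmax()]:.0f} Hz")
        print("margin OK" if worst < 0.5 else "impedance overlap: stability at risk")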

  3. A review and meta-analysis of the enemy release hypothesis in plant–herbivorous insect systems

    PubMed Central

    Meijer, Kim; Schilthuizen, Menno; Beukeboom, Leo

    2016-01-01

    A suggested mechanism for the success of introduced non-native species is the enemy release hypothesis (ERH). Many studies have tested the predictions of the ERH using the community approach (native and non-native species studied in the same habitat) or the biogeographical approach (species studied in their native and non-native range), but results are highly variable, possibly due to large variety of study systems incorporated. We therefore focused on one specific system: plants and their herbivorous insects. We performed a systematic review and compiled a large number (68) of datasets from studies comparing herbivorous insects on native and non-native plants using the community or biogeographical approach. We performed a meta-analysis to test the predictions from the ERH for insect diversity (number of species), insect load (number of individuals) and level of herbivory for both the community and biogeographical approach. For both the community and biogeographical approach insect diversity was significantly higher on native than on non-native plants. Insect load tended to be higher on native than non-native plants at the community approach only. Herbivory was not different between native and non-native plants at the community approach, while there was too little data available for testing the biogeographical approach. Our meta-analysis generally supports the predictions from the ERH for both the community and biogeographical approach, but also shows that the outcome is importantly determined by the response measured and approach applied. So far, very few studies apply both approaches simultaneously in a reciprocal manner while this is arguably the best way for testing the ERH. PMID:28028463

  4. Orion Flight Test Architecture Benefits of MBSE Approach

    NASA Technical Reports Server (NTRS)

    Reed, Don; Simpson, Kim

    2012-01-01

    Exploration Flight Test 1 (EFT-1) is the unmanned first orbital flight test of the Multi-Purpose Crew Vehicle (MPCV). The mission's purpose is to test Orion's ascent, on-orbit, and entry capabilities, monitor critical activities, and provide ground control in support of contingency scenarios. This requires development of a large-scale, end-to-end information system network architecture. To effectively communicate the scope of the end-to-end system, a model-based systems engineering approach was chosen.

  5. Nonleaky Population Transfer in a Transmon Qutrit via Largely-Detuned Drivings

    NASA Astrophysics Data System (ADS)

    Yan, Run-Ying; Feng, Zhi-Bo

    2018-06-01

    We propose an efficient scheme to implement nonleaky population transfer in a transmon qutrit via largely-detuned drivings. Due to the weak level anharmonicity of the transmon system, quantum leakage must be considered in coherent quantum operations. Under the conditions of two-photon resonance and large detunings, robust population transfer within a qutrit can be implemented via the technique of stimulated Raman adiabatic passage. Based on accessible parameters, the proposed approach can remove the leakage error effectively, and thus provides a potential route to enhancing the transfer fidelity with transmon-regime artificial atoms experimentally.
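
    The essence of stimulated Raman adiabatic passage is easy to reproduce numerically: with the Stokes pulse preceding the pump pulse (the counter-intuitive order), population rides the dark state from the initial to the target level while the intermediate level stays almost empty. The sketch below uses a resonant three-level model with illustrative pulse parameters, not the large-detuning transmon Hamiltonian of the paper.

        # Schematic STIRAP in a three-level ladder {|0>, |1>, |2>}: pump couples
        # 0-1, Stokes couples 1-2; Stokes comes first, so population moves 0 -> 2.
        import numpy as np
        from scipy.linalg import expm

        T, n = 1.0, 4000
        t = np.linspace(0, T, n)
        dt = t[1] - t[0]
        O0 = 2 * np.pi * 20                        # peak Rabi frequencies (assumed)
        pump = O0 * np.exp(-((t - 0.60 * T) / (0.12 * T)) ** 2)
        stokes = O0 * np.exp(-((t - 0.40 * T) / (0.12 * T)) ** 2)

        psi = np.array([1.0, 0.0, 0.0], complex)
        for k in range(n):
            H = np.zeros((3, 3), complex)          # RWA, two-photon resonance
            H[0, 1] = H[1, 0] = pump[k] / 2
            H[1, 2] = H[2, 1] = stokes[k] / 2
            psi = expm(-1j * H * dt) @ psi

        print("final populations:", np.round(np.abs(psi) ** 2, 4))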

  6. A controller design approach for large flexible space structures

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1981-01-01

    A controller design approach for large space structures is presented, which consists of a primary attitude controller and a secondary or damping-enhancement controller. The secondary controller, which uses several Annular Momentum Control Devices (AMCDs), is shown to make the closed-loop system asymptotically stable under relatively simple conditions. The primary controller, using torque actuators (or AMCDs) and colocated attitude and rate sensors, is shown to be stable. It is shown that the same AMCDs can be used for simultaneous actuation of the primary and secondary controllers. Numerical results are obtained for a large, thin, completely free plate model.

  7. Systems Biology and Mode of Action Based Risk Assessment

    EPA Science Inventory

    The application of systems biology has increased in the past decade largely as a consequence of the human genome project and technological advances in genomics and proteomics. Systems approaches have been used in the medical & pharmaceutical realm for diagnostic purposes and targ...

  8. Thermal Environment for Classrooms. Central System Approach to Air Conditioning.

    ERIC Educational Resources Information Center

    Triechler, Walter W.

    This speech compares the air conditioning requirements of high-rise office buildings with those of large centralized school complexes. A description of one particular air conditioning system provides information about the system's arrangement, functions, performance efficiency, and cost effectiveness. (MLF)

  9. Climate change adaptation and Integrated Water Resource Management in the water sector

    NASA Astrophysics Data System (ADS)

    Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim

    2014-10-01

    Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water use between different water-demanding sectors. Since its introduction, however, water systems have become more complicated due to changes in the global water cycle resulting from climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues, compared to the (long-term) future focus of adaptation. One of the main problems in implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. The first is a top-down approach based on large-scale biophysical impact analyses, which focusses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty. It focusses on reducing vulnerabilities, often at the local scale, by developing resilient water systems. Both of these approaches, however, are unsuitable for integration into water management. The bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions. The top-down approach often results in an “explosion” of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks. These strategies should then be evaluated using a range of future scenarios in order to develop robust adaptation measures and strategies.

  10. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that lead to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
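
    The basic projection idea can be illustrated with a Löwdin-style fold-down on a diagonally dominant matrix: keep the block of low-lying indices, fold the high-energy block into it through an energy-dependent correction, and diagonalize the small effective matrix. This is a textbook illustration under assumed parameters, not the iterative scheme developed in the paper.

        # Projection sketch for a diagonally dominant eigenproblem: bare projection
        # onto the "low-energy" P block, then one second-order fold-down of Q.
        import numpy as np

        rng = np.random.default_rng(4)
        N, k = 400, 40
        V = rng.normal(0.0, 0.3, (N, N))
        H = np.diag(np.sort(rng.uniform(0, 50, N))) + 0.5 * (V + V.T)

        P, Q = np.arange(k), np.arange(k, N)
        Hpp, Hpq, Hqq = H[np.ix_(P, P)], H[np.ix_(P, Q)], H[np.ix_(Q, Q)]

        e_exact = np.linalg.eigvalsh(H)[0]
        e_bare = np.linalg.eigvalsh(Hpp)[0]        # plain projection onto P
        # fold the Q block down at a reference energy (one self-consistency pass)
        shift = np.linalg.solve(e_bare * np.eye(N - k) - Hqq, Hpq.T)
        e_eff = np.linalg.eigvalsh(Hpp + Hpq @ shift)[0]
        print(f"exact {e_exact:.4f}  projected {e_bare:.4f}  corrected {e_eff:.4f}")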

  11. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  12. Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1991-01-01

    The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.

  13. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    NASA Astrophysics Data System (ADS)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
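
    The core mechanism, an adaptively computed biasing potential that fills up visited regions of collective-variable space, can be shown in one dimension with the Gaussian-deposition bias of the metadynamics type mentioned above. In the paper this bias is instead a neural network trained on the fly; the sketch keeps only the filling-up idea, with illustrative parameters.

        # Adaptive-bias toy: overdamped Langevin dynamics in a 1D double well with
        # repulsive Gaussians deposited along the trajectory (metadynamics-style).
        import numpy as np

        rng = np.random.default_rng(5)
        beta, dt, steps = 4.0, 1e-3, 60_000
        centers, w, sigma = [], 0.05, 0.15      # deposited Gaussians

        def dU(x):                               # double well U = (x^2 - 1)^2
            return 4 * x * (x**2 - 1)

        def dbias(x):                            # derivative of the deposited bias
            if not centers:
                return 0.0
            c = np.asarray(centers)
            return np.sum(-w * (x - c) / sigma**2
                          * np.exp(-(x - c) ** 2 / (2 * sigma**2)))

        x, crossings, side = -1.0, 0, -1
        for i in range(steps):
            force = -dU(x) - dbias(x)
            x += force * dt + np.sqrt(2 * dt / beta) * rng.normal()
            if i % 500 == 0:
                centers.append(x)                # adaptive bias update
            if x * side < 0 and abs(x) > 0.5:    # count barrier crossings
                crossings, side = crossings + 1, -side
        print(f"barrier crossings with adaptive bias: {crossings}")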

  14. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model has become a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.

  15. Developing Mathematics Teacher Knowledge: The Paradidactic Infrastructure of "Open Lesson" in Japan

    ERIC Educational Resources Information Center

    Miyakawa, Takeshi; Winslow, Carl

    2013-01-01

    In this paper, we first present a theoretical approach to study mathematics teacher knowledge and the conditions for developing it, which is firmly rooted in a systemic approach to didactic phenomena at large, namely the anthropological theory of the didactic. Then, a case of open lesson is presented and analysed, using this theoretical approach,…

  16. MIDAS prototype Multispectral Interactive Digital Analysis System for large area earth resources surveys. Volume 2: Charge coupled device investigation

    NASA Technical Reports Server (NTRS)

    Kriegler, F.; Marshall, R.; Sternberg, S.

    1976-01-01

    MIDAS is a third-generation, fast, low cost, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors. MIDAS, for example, can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The need for advanced onboard spacecraft processing of remotely sensed data is stated and approaches to this problem are described which are feasible through the use of charge coupled devices. Tentative mechanizations for the required processing operations are given in large block form. These initial designs can serve as a guide to circuit/system designers.

  17. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including: antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three-major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  18. Nonlinear dynamic analysis of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Bauchau, Olivier A.; Kang, Nam Kook

    1991-01-01

    Two approaches are developed to analyze the dynamic behavior of flexible multibody systems. In the first approach each body is modeled with a modal methodology in a local non-inertial frame of reference, whereas in the second approach, each body is modeled with a finite element methodology in the inertial frame. In both cases, the interaction among the various elastic bodies is represented by constraint equations. The two approaches were compared for accuracy and efficiency: the first approach is preferable when the nonlinearities are not too strong but it becomes cumbersome and expensive to use when many modes must be used. The second approach is more general and easier to implement but could result in high computation costs for a large system. The constraints should be enforced in a time derivative fashion for better accuracy and stability.

  19. Bioregulatory systems medicine: an innovative approach to integrating the science of molecular networks, inflammation, and systems biology with the patient's autoregulatory capacity?

    PubMed Central

    Goldman, Alyssa W.; Burmeister, Yvonne; Cesnulevicius, Konstantin; Herbert, Martha; Kane, Mary; Lescheid, David; McCaffrey, Timothy; Schultz, Myron; Seilheimer, Bernd; Smit, Alta; St. Laurent, Georges; Berman, Brian

    2015-01-01

    Bioregulatory systems medicine (BrSM) is a paradigm that aims to advance current medical practices. The basic scientific and clinical tenets of this approach embrace an interconnected picture of human health, supported largely by recent advances in systems biology and genomics, and focus on the implications of multi-scale interconnectivity for improving therapeutic approaches to disease. This article introduces the formal incorporation of these scientific and clinical elements into a cohesive theoretical model of the BrSM approach. The authors review this integrated body of knowledge and discuss how the emergent conceptual model offers the medical field a new avenue for extending the armamentarium of current treatment and healthcare, with the ultimate goal of improving population health. PMID:26347656

  20. Distributed design approach in persistent identifiers systems

    NASA Astrophysics Data System (ADS)

    Golodoniuc, Pavel; Car, Nicholas; Klump, Jens

    2017-04-01

    The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementations, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have catered for identifier uniqueness, integrity, persistence, and trustworthiness, regardless of the identifier's application domain, the scope of which has expanded significantly in the past two decades. Since many PID systems have been largely conceived and developed by small communities, or even a single organisation, they have faced challenges in gaining widespread adoption and, most importantly, the ability to survive change of technology. This has left a legacy of identifiers that still exist and are being used but which have lost their resolution service. We believe that one of the causes of once successful PID systems fading is their reliance on a centralised technical infrastructure or a governing authority. Golodoniuc et al. (2016) proposed an approach to the development of PID systems that combines the use of (a) the Handle system, as a distributed system for the registration and first-degree resolution of persistent identifiers, and (b) the PID Service (Golodoniuc et al., 2015), to enable fine-grained resolution to different information object representations. The proposed approach solved the problem of guaranteed first-degree resolution of identifiers, but left fine-grained resolution and information delivery under the control of a single authoritative source, posing risk to the long-term availability of information resources. Herein, we develop these approaches further and explore the potential of large-scale decentralisation at all levels: (i) persistent identifiers and information resources registration; (ii) identifier resolution; and (iii) data delivery. To achieve large-scale decentralisation, we propose using Distributed Hash Tables (DHT), Peer Exchange networks (PEX), Magnet Links, and peer-to-peer (P2P) file sharing networks - the technologies that enable applications such as BitTorrent (Wu et al., 2010). The proposed approach introduces reliable information replication and caching mechanisms, eliminating the need for a central PID data store, and increases overall system fault tolerance due to the lack of a single point of failure. The proposed PID system's design aims to ensure trustworthiness of the system and incorporates important aspects of governance, such as the notion of the authoritative source, data integrity, caching, and data replication control.

  1. Benchmarking a soil moisture data assimilation system for agricultural drought monitoring

    USDA-ARS?s Scientific Manuscript database

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this ...

  2. Implementation of a decoupled controller for a magnetic suspension system using electromagnets mounted in a planar array

    NASA Technical Reports Server (NTRS)

    Cox, D. E.; Groom, N. J.

    1994-01-01

    An implementation of a decoupled, single-input/single-output control approach for a large angle magnetic suspension test fixture is described. Numerical and experimental results are presented. The experimental system is a laboratory model large gap magnetic suspension system which provides five degree-of-freedom control of a cylindrical suspended element. The suspended element contains a core composed of permanent magnet material and is levitated above five electromagnets mounted in a planar array.

  3. Public Health Platforms: An Emerging Informatics Approach to Health Professional Learning and Development

    PubMed Central

    Gray, Kathleen

    2016-01-01

    Health informatics has a major role to play in optimising the management and use of data, information and knowledge in health systems. As health systems undergo digital transformation, it is important to consider informatics approaches not only to curriculum content but also to the design of learning environments and learning activities for health professional learning and development. An example of such an informatics approach is the use of large-scale, integrated public health platforms on the Internet as part of health professional learning and development. This article describes selected examples of such platforms, with a focus on how they may influence the direction of health professional learning and development. Significance for public health: The landscape of healthcare systems, public health systems, health research systems and professional education systems is fragmented, with many gaps and silos. More sophistication in the management of health data, information, and knowledge, based on public health informatics expertise, is needed to tackle key issues of prevention, promotion and policy-making. Platform technologies represent an emerging large-scale, highly integrated informatics approach to public health, combining the technologies of the Internet, the web, the cloud, social technologies, remote sensing and/or mobile apps into an online infrastructure that can allow more synergies in work within and across these systems. Health professional curricula need updating so that the health workforce has a deep and critical understanding of the way that platform technologies are becoming the foundation of the health sector. PMID:27190977

  4. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving the mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience, and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  5. Orbital assembly and maintenance study. Executive summary. [space erectable structures/structural design criteria

    NASA Technical Reports Server (NTRS)

    Gorman, D.; Grant, C.; Kyrias, G.; Lord, C.; Rombach, J. P.; Salis, M.; Skidmore, R.; Thomas, R.

    1975-01-01

    A sound, practical approach for the assembly and maintenance of very large structures in space is presented. The methods and approaches for assembling two large structures are examined. The maintenance objectives include the investigation of methods to maintain five geosynchronous satellites. The two assembly examples are a 200-meter-diameter radio astronomy telescope and a 1,000-meter-diameter microwave power transmission system. The radio astronomy telescope operates at an 8,000-mile altitude and receives RF signals from space. The microwave power transmission system is part of a solar power satellite that will be used to transmit converted solar energy to microwave ground receivers. Illustrations are included.

  6. Dynamics of flexible bodies in tree topology - A computer oriented approach

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Vandervoort, R. J.; Likins, P. W.

    1984-01-01

    An approach suited for automatic generation of the equations of motion for large mechanical systems (i.e., large space structures, mechanisms, robots, etc.) is presented. The system topology is restricted to a tree configuration. The tree is defined as an arbitrary set of rigid and flexible bodies connected by hinges characterizing relative translations and rotations of two adjoining bodies. The equations of motion are derived via Kane's method. The resulting equation set is of minimum dimension. Dynamical equations are imbedded in a computer program called TREETOPS. Extensive control simulation capability is built in the TREETOPS program. The simulation is driven by an interactive set-up program resulting in an easy to use analysis tool.

  7. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  8. A Method for Populating the Knowledge Base of AFIT’s Domain-Oriented Application Composition System

    DTIC Science & Technology

    1993-12-01

    Analysis (FODA). The approach identifies prominent features (similarities) and distinctive features (differences) of software systems within an... analysis approaches we have summarized, the researchers described FODA in sufficient detail to use on large domain analysis projects (ones with...Software Technology Center, July 1991. 18. Kang, Kyo C. and others. Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report, Software

  9. The influence of protection system failures and preventive maintenance on protection systems in distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.

    1997-01-01

    Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance on protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
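
    The following toy sketch illustrates the kind of Markov model such an approach rests on; the three states and all rates are illustrative assumptions, not the paper's actual model. Raising the maintenance rate m reveals latent failures sooner and lowers the steady-state probability of a "failure to operate".

    ```python
    # A minimal sketch (not the authors' model): working, failed-undetected
    # (failure to operate), and maintenance/repair states. lam is the latent
    # failure rate, m the preventive maintenance rate, mu the repair rate.
    import numpy as np

    lam, m, mu = 1e-3, 1e-2, 1e-1   # assumed rates per hour (illustrative only)

    # States: 0 = working, 1 = failed undetected, 2 = in maintenance/repair
    Q = np.array([
        [-(lam + m), lam,  m  ],    # working -> latent failure or maintenance
        [0.0,        -m,   m  ],    # latent failure revealed only by maintenance
        [mu,         0.0,  -mu],    # repair restores the working state
    ])

    # Steady state: solve pi @ Q = 0 with the normalization sum(pi) = 1
    # (replace one balance equation with the normalization row).
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print("P(failure to operate) =", pi[1])  # falls as maintenance rate m grows
    ```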

  10. Modularization of gradient-index optical design using wavefront matching enabled optimization.

    PubMed

    Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H

    2016-05-02

    This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.

  11. Scalable non-negative matrix tri-factorization.

    PubMed

    Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž

    2017-01-01

    Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
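
    As a point of reference, here is a minimal serial sketch of non-negative tri-factorization X ≈ USVᵀ via standard multiplicative updates; the paper's contribution is the block-wise, multi-GPU scheme that partitions X and runs mathematically equivalent updates in parallel, which is not reproduced here.

    ```python
    # Serial non-negative matrix tri-factorization X ~ U S V^T using
    # multiplicative updates for the Frobenius-norm objective (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((200, 150))          # data matrix (e.g., genes x patients)
    k1, k2 = 10, 8                      # latent dimensions for rows and columns
    U = rng.random((200, k1))
    S = rng.random((k1, k2))
    V = rng.random((150, k2))
    eps = 1e-9                          # guards against division by zero

    for it in range(200):
        U *= (X @ V @ S.T) / (U @ (S @ V.T @ V @ S.T) + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ (V.T @ V) + eps)
        V *= (X.T @ U @ S) / (V @ (S.T @ U.T @ U @ S) + eps)

    print("relative error:",
          np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X))
    ```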

  12. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  13. Evaluating the performance of a soil moisture data assimilation system for agricultural drought monitoring

    USDA-ARS?s Scientific Manuscript database

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this ...

  14. Systems Thinking for Transformational Change in Health

    ERIC Educational Resources Information Center

    Willis, Cameron D.; Best, Allan; Riley, Barbara; Herbert, Carol P.; Millar, John; Howland, David

    2014-01-01

    Incremental approaches to introducing change in Canada's health systems have not sufficiently improved the quality of services and outcomes. Further progress requires 'large system transformation', considered to be the systematic effort to generate coordinated change across organisations sharing a common vision and goal. This essay draws on…

  15. A Model of Internal Communication in Adaptive Communication Systems.

    ERIC Educational Resources Information Center

    Williams, M. Lee

    A study identified and categorized different types of internal communication systems and developed an applied model of internal communication in adaptive organizational systems. Twenty-one large organizations were selected for their varied missions and diverse approaches to managing internal communication. Individual face-to-face or telephone…

  16. The Reliability and Effectiveness of a Radar-Based Animal Detection System

    DOT National Transportation Integrated Search

    2017-09-22

    This document contains data on the reliability and effectiveness of an animal detection system along U.S. Hwy 95 near Bonners Ferry, Idaho. The system uses a Doppler radar to detect large mammals (e.g., deer and elk) when they approach the highway. T...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
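
    A simplified sketch of the core idea on assumed toy data (the paper's filter line-search adds substantially more machinery): solve the KKT system, test curvature along the computed step, and increase a regularization δI until the test passes, with no inertia computation anywhere.

    ```python
    # Inertia-free convexification loop (a simplified variant of the test
    # in the paper, for illustration only).
    import numpy as np

    def kkt_step(W, A, g, c, delta):
        """Solve [[W + delta*I, A^T], [A, 0]] [d; y] = -[g; c]."""
        n, m = W.shape[0], A.shape[0]
        K = np.block([[W + delta * np.eye(n), A.T],
                      [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, -np.concatenate([g, c]))
        return sol[:n], sol[n:]

    def inertia_free_step(W, A, g, c, kappa=1e-8, delta0=1e-4):
        delta = 0.0
        while True:
            d, y = kkt_step(W, A, g, c, delta)
            # Curvature test along the step: require positive curvature of
            # the (possibly convexified) Hessian, else regularize and retry.
            if d @ ((W + delta * np.eye(len(d))) @ d) >= kappa * (d @ d):
                return d, y, delta
            delta = delta0 if delta == 0.0 else 10.0 * delta

    W = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite Hessian
    A = np.array([[1.0, 1.0]])                # one equality constraint
    g = np.array([1.0, -1.0])
    c = np.array([0.0])
    d, y, delta = inertia_free_step(W, A, g, c)
    print("step:", d, "multiplier:", y, "delta used:", delta)
    ```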

  19. Mechanical Analyses for coupled Vegetation-Flow System

    NASA Astrophysics Data System (ADS)

    Chen, L.; Acharya, K.; Stone, M.

    2010-12-01

    Vegetation in riparian areas plays important roles in the hydrology, geomorphology and ecology of the local environment. The mechanical response of aquatic vegetation to hydraulic forces, and its impact on flow hydraulics, has received considerable attention due to implications for flood control, habitat restoration, and water resources management. This study aims to advance understanding of the mechanical properties of in-stream vegetation, including drag force, moment and stress. Dynamic changes of these properties under various flow conditions largely determine the vegetation-affected flow field, the dynamic resistance under progressive bending, and the hydraulic conditions for vegetation failure (rupture or wash-out), and are thus critical for understanding the coupled vegetation-flow system. A new approach combining fluid and material mechanics is developed in this study to examine the behavior of both rigid and flexible vegetation. The major advantage of this approach is its capability to treat large deflection (bending) of plants and the associated changes of mechanical properties in both vegetation and flow. Starting from simple emergent vegetation, both static and dynamic formulations of the problem are presented and the solutions are compared. Results show that the dynamic behavior of a simplified system mimics that of complex, real systems, implying the approach is able to reveal the physical essence of the coupled system. The approach is extended to complex vegetation under both submerged and emergent conditions using a more realistic representation of the biomechanical properties of vegetation.

  20. Adaptive sampling strategies with high-throughput molecular dynamics

    NASA Astrophysics Data System (ADS)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems, and on optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
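
    The flavor of such an adaptive strategy can be sketched with a count-based toy: repeatedly seed short trajectories from the least-visited states discovered so far. The run_short_md stand-in below is a hypothetical random walk, not an MD engine, and real implementations would choose seeds in a reduced collective-variable space.

    ```python
    # Count-based adaptive seeding over a toy discretized state space.
    import numpy as np

    rng = np.random.default_rng(1)
    n_states = 50

    def run_short_md(start_state, length=20):
        """Toy stand-in for a short MD trajectory: a random walk over
        discretized states (a real system: MD engine + clustering)."""
        s, traj = start_state, []
        for _ in range(length):
            s = int(np.clip(s + rng.choice([-1, 0, 1]), 0, n_states - 1))
            traj.append(s)
        return traj

    counts = np.zeros(n_states)
    counts[0] = 1                              # start from a single known state
    for rnd in range(30):                      # adaptive sampling rounds
        # Seed new trajectories from the least-sampled states visited so far
        visited = np.flatnonzero(counts)
        seeds = visited[np.argsort(counts[visited])][:5]
        for s in seeds:
            for state in run_short_md(s):
                counts[state] += 1

    print("states explored:", np.count_nonzero(counts), "of", n_states)
    ```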

  1. Robust control for spacecraft rendezvous system with actuator unsymmetrical saturation: a gain scheduling approach

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Xue, Anke

    2018-06-01

    This paper proposes a robust control for the spacecraft rendezvous system, considering parameter uncertainties and unsymmetrical actuator saturation, based on the discrete gain scheduling approach. By a change of variables, we transform the unsymmetrical saturation control problem into a symmetrical one. The main advantage of the proposed method is that it improves the dynamic performance of the closed-loop system while keeping the region of attraction as large as possible. Using the Lyapunov approach and the gain scheduling technique, existence conditions for an admissible controller are formulated in the form of linear matrix inequalities. A numerical simulation illustrates the effectiveness of the proposed method.
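
    The change of variables itself is elementary and can be sketched directly; the numbers below are illustrative, and the offset u0 would be absorbed into the plant dynamics in the actual design.

    ```python
    # Symmetrizing unsymmetrical actuator saturation: with u in
    # [u_min, u_max], write u = u0 + v where u0 is the midpoint, so v
    # saturates symmetrically at +/- (u_max - u_min) / 2.
    import numpy as np

    u_min, u_max = -1.0, 3.0            # illustrative actuator limits
    u0 = 0.5 * (u_max + u_min)          # offset absorbed into the dynamics
    v_bar = 0.5 * (u_max - u_min)       # symmetric saturation level for v

    def sat_u(u):                       # original unsymmetrical saturation
        return np.clip(u, u_min, u_max)

    def sat_v(v):                       # equivalent symmetric saturation
        return np.clip(v, -v_bar, v_bar)

    for u in (-2.0, 0.3, 2.7, 5.0):     # equivalence holds for any input
        assert np.isclose(sat_u(u), u0 + sat_v(u - u0))
    print("unsymmetrical and shifted symmetric saturations agree")
    ```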

  2. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach is presented that abstracts the architectural design of a set of problems into a generic architecture and provides tools to instantiate the generic architecture for specific problem instances. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach to a realistic problem is described, the results of the exercise are presented, and our approach is compared with other work in this area.

  3. Control design challenges of large space systems and spacecraft control laboratory experiment (SCOLE)

    NASA Technical Reports Server (NTRS)

    Lin, Jiguan Gene

    1987-01-01

    The quick suppression of structural vibrations excited by bang-bang (BB) type time-optimal slew maneuvers via modal-dashpot design of velocity output feedback control was investigated. Simulation studies were conducted, and modal dashpots were designed for the SCOLE flexible body dynamics. A two-stage approach was proposed for rapid slewing and precision pointing/retargeting of large, flexible space systems: (1) slew the whole system like a rigid body in minimum time under specified limits on the control moments and forces, and (2) damp out the excited structural vibrations afterwards. This approach was found promising. High-power modal dashpots can suppress very large vibrations and can add a desirable amount of active damping to modeled modes. Unmodeled modes can also receive some concomitant active damping as a benefit of spillover. Results also show that not all BB-type rapid pointing maneuvers will excite large structural vibrations. When properly selected small forces (e.g., vernier thrusters) are used to complete the specified slew maneuver in the shortest time, even BB-type maneuvers will excite only small vibrations (e.g., 0.3 ft peak deflection for a 130 ft beam).

  4. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  6. Economic Evaluation of Voice Recognition (VR) for the Clinician’s Desktop at the Naval Hospital Roosevelt Roads

    DTIC Science & Technology

    1997-09-01

    first PC-based, very large vocabulary dictation system with a continuous natural language free flow approach to speech recognition. (This system allows...indicating the likelihood that a particular stored HMM reference model is the best match for the input. This approach is called the Baum-Welch...InfoCentral, and Envoy 1.0; and Lotus Development Corp.’s SmartSuite 3, Approach 3.0, and Organizer. 2. IBM At a press conference in New York in June 1997, IBM

  7. Rethinking and Restructuring an Assessment System via Effective Deployment of Technology

    ERIC Educational Resources Information Center

    Okonkwo, Charity

    2010-01-01

    Every instructional process involves a strategic assessment system for a complete teaching-learning circle. An assessment system that is seriously challenged calls for a change in approach. The National Open University of Nigeria (NOUN) assessment system is at present challenged. The large number of students and numerous courses offered by NOUN…

  8. Alternative approaches to condition monitoring in freeway management systems.

    DOT National Transportation Integrated Search

    2002-01-01

    In response to growing concerns over traffic congestion, traffic management systems have been built in large urban areas in an effort to improve the efficiency and safety of the transportation network. This research effort developed an automated cond...

  9. Practice and effectiveness of web-based problem-based learning approach in a large class-size system: A comparative study.

    PubMed

    Ding, Yongxia; Zhang, Peili

    2018-06-12

    Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We subsequently assessed the satisfaction of the experimental group with the web-based PBL teaching mode, following a comparison of teaching outcomes pertaining to exams and self-learning capacity between the two groups. The examination scores and self-learning capabilities were significantly higher in the experimental group than in the control group (P < 0.01). In addition, 92.6% of students in the experimental group expressed satisfaction with the new web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears to be more optimal than traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning. Copyright © 2018. Published by Elsevier Ltd.

  10. The Impact of Curriculum Change on Health Sciences First Year Students' Approaches to Learning

    ERIC Educational Resources Information Center

    Walker, Rebecca; Spronken-Smith, Rachel; Bond, Carol; McDonald, Fiona; Reynolds, John; McMartin, Anna

    2010-01-01

    This study aimed to use a learning inventory (the Approaches and Study Skills Inventory for Students, ASSIST) to measure the impact of a curriculum change on students' approaches to learning in two large courses in a health sciences first year programme. The two new Human Body Systems (HUBS) courses were designed to encourage students to take a…

  11. Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding

    PubMed Central

    Cui, Laizhong; Lu, Nan; Chen, Fu

    2014-01-01

    Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are as follows: (i) we introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate the push scheduling as an optimization problem and transform it to a min-cost flow problem to solve it in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and perform extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
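
    A toy version of the min-cost-flow formulation (contribution ii) can be sketched with networkx; the packet names, neighbor capacities, and delay costs below are made-up illustrative values, not the paper's model.

    ```python
    # Assign packets to neighbors subject to per-neighbor upload capacity,
    # minimizing total expected delay, via a min-cost-flow formulation.
    import networkx as nx

    packets = ["p1", "p2", "p3"]
    neighbors = {"n1": 2, "n2": 1}            # upload capacity per round
    delay = {("p1", "n1"): 3, ("p1", "n2"): 5, ("p2", "n1"): 2,
             ("p2", "n2"): 4, ("p3", "n1"): 6, ("p3", "n2"): 1}

    G = nx.DiGraph()
    G.add_node("src", demand=-len(packets))   # supply: every packet pushed once
    G.add_node("sink", demand=len(packets))
    for p in packets:
        G.add_edge("src", p, capacity=1, weight=0)
    for n, cap in neighbors.items():
        G.add_edge(n, "sink", capacity=cap, weight=0)
    for (p, n), d in delay.items():
        G.add_edge(p, n, capacity=1, weight=d)

    flow = nx.min_cost_flow(G)                # polynomial-time solution
    for p in packets:
        for n, f in flow[p].items():
            if f:
                print(f"push {p} -> {n}")
    ```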

  12. Measurement of motion detection of wireless capsule endoscope inside large intestine.

    PubMed

    Zhou, Mingda; Bao, Guanqun; Pahlavan, Kaveh

    2014-01-01

    Wireless Capsule Endoscope (WCE) provides a noninvasive way to inspect the entire Gastrointestinal (GI) tract, including the large intestine, where intestinal diseases most commonly occur. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of detected intestinal diseases. Knowing how the capsule moves inside the large intestine would greatly complement existing wireless localization systems by providing motion information. Since the most recently released WCE can take up to 6 frames per second, it is possible to estimate the movement of the capsule by processing the successive image sequence. In this paper, a computer vision based approach that does not utilize any external device is proposed to estimate the motion of the WCE inside the large intestine. The proposed approach estimates the displacement and rotation of the capsule by calculating entropy and mutual information between frames using the Fibonacci method. The obtained results show the stability of this approach and its better performance compared with existing motion measurement approaches. Meanwhile, the findings of this paper lay a foundation for characterizing the motion patterns of WCEs inside the large intestine, which will benefit other medical applications.
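
    The idea of searching for the inter-frame shift that maximizes mutual information can be sketched as follows; a golden-section search stands in for the paper's Fibonacci method, and the smoothed random frames are synthetic.

    ```python
    # Estimate a horizontal displacement between consecutive frames by
    # maximizing mutual information over the candidate shift.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mutual_information(a, b, bins=32):
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / np.outer(px, py)[nz])).sum())

    def shifted_mi(f1, f2, dx):
        """MI between the overlapping parts of f1 and f2 after undoing a
        horizontal shift of dx pixels (dx > 0: f2 moved right)."""
        dx = int(round(dx))
        if dx == 0:
            return mutual_information(f1, f2)
        if dx > 0:
            return mutual_information(f1[:, :-dx], f2[:, dx:])
        return mutual_information(f1[:, -dx:], f2[:, :dx])

    def golden_section_max(f, a, b, tol=0.5):
        g = (np.sqrt(5.0) - 1.0) / 2.0
        while b - a > tol:
            c, d = b - g * (b - a), a + g * (b - a)
            if f(c) >= f(d):
                b = d
            else:
                a = c
        return 0.5 * (a + b)

    rng = np.random.default_rng(0)
    frame1 = gaussian_filter(rng.random((64, 64)), sigma=4)  # smooth scene
    frame2 = np.roll(frame1, 7, axis=1)                      # moved 7 px
    est = golden_section_max(lambda s: shifted_mi(frame1, frame2, s), 0, 20)
    print("estimated shift ~", round(est), "px (true: 7)")
    ```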

  13. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
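
    A minimal sketch of this style of persistence with the DataStax Python driver; the keyspace, table, and columns are invented for illustration, and a Cassandra node is assumed to be listening on localhost.

    ```python
    # Write processed reads to Cassandra (illustrative schema, not the
    # paper's; replication and consistency would be tuned in practice).
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])          # assumes a local Cassandra node
    session = cluster.connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS genomics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS genomics.reads (
            sample_id text, read_id text, sequence text, quality text,
            PRIMARY KEY (sample_id, read_id)
        )
    """)
    insert = session.prepare(
        "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) "
        "VALUES (?, ?, ?, ?)")
    session.execute(insert, ("S1", "r0001", "ACGTACGT", "IIIIHHHH"))
    cluster.shutdown()
    ```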

  14. A Review of Enhanced Sampling Approaches for Accelerated Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush; van de Walle, Axel

    Molecular dynamics (MD) simulations have become a tool of immense use and popularity for simulating a variety of systems. With the advent of massively parallel computer resources, one now routinely sees applications of MD to systems as large as hundreds of thousands to even several million atoms, which is almost the size of most nanomaterials. However, it is not yet possible to reach laboratory timescales of milliseconds and beyond with MD simulations. Due to the essentially sequential nature of time, parallel computers have been of limited use in solving this so-called timescale problem. Instead, over the years a large range of statistical mechanics based enhanced sampling approaches have been proposed for accelerating molecular dynamics, and accessing timescales that are well beyond the reach of the fastest computers. In this review we provide an overview of these approaches, including the underlying theory, typical applications, and publicly available software resources to implement them.

  15. Development and analysis of SCR requirements tables for system scenarios

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Morrison, Jeffery L.

    1995-01-01

    We describe the use of scenarios to develop and refine requirement tables for parts of the Earth Observing System Data and Information System (EOSDIS). The National Aeronautics and Space Administration (NASA) is developing EOSDIS as part of its Mission-To-Planet-Earth (MTPE) project to accept instrument/platform observation requests from end-user scientists, schedule and perform requested observations of the Earth from space, collect and process the observed data, and distribute data to scientists and archives. Current requirements for the system are managed with tools that allow developers to trace the relationships between requirements and other development artifacts, including other requirements. In addition, the user community (e.g., earth and atmospheric scientists), in conjunction with NASA, has generated scenarios describing the actions of EOSDIS subsystems in response to user requests and other system activities. As part of a research effort in verification and validation techniques, this paper describes our efforts to develop requirements tables from these scenarios for the EOSDIS Core System (ECS). The tables specify event-driven mode transitions based on techniques developed by the Naval Research Lab's (NRL) Software Cost Reduction (SCR) project. The SCR approach has proven effective in specifying requirements for large systems in an unambiguous, terse format that enhances identification of incomplete and inconsistent requirements. We describe the development of SCR tables from user scenarios and identify the strengths and weaknesses of our approach in contrast to the requirements tracing approach. We also evaluate the capabilities of both approaches to respond to the volatility of requirements in large, complex systems.
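
    The spirit of an SCR-style mode transition table can be sketched as a lookup structure whose completeness is checkable mechanically; the modes and events below are invented for illustration, not taken from the EOSDIS tables.

    ```python
    # Event-driven mode transitions as an explicit table: missing or
    # conflicting transitions become easy to detect mechanically.
    TRANSITIONS = {
        ("IDLE",       "@T(request_received)"):  "SCHEDULING",
        ("SCHEDULING", "@T(observation_start)"): "OBSERVING",
        ("OBSERVING",  "@T(data_complete)"):     "PROCESSING",
        ("PROCESSING", "@T(archive_done)"):      "IDLE",
    }

    def next_mode(mode, event):
        try:
            return TRANSITIONS[(mode, event)]
        except KeyError:                  # incompleteness surfaces as an error
            raise ValueError(f"no transition defined for {mode} on {event}")

    print(next_mode("IDLE", "@T(request_received)"))

    # Mechanical review aid: flag (mode, event) pairs with no defined
    # transition (not all need one, but each should be a conscious decision).
    modes = {m for m, _ in TRANSITIONS}
    events = {e for _, e in TRANSITIONS}
    missing = [(m, e) for m in modes for e in events
               if (m, e) not in TRANSITIONS]
    print("unhandled (mode, event) pairs:", missing)
    ```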

  16. Large-area copper indium diselenide (CIS) process, control and manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillespie, T.J.; Lanning, B.R.; Marshall, C.H.

    1997-12-31

    Lockheed Martin Astronautics (LMA) has developed a large-area (30x30 cm) sequential CIS manufacturing approach amenable to low-cost photovoltaics (PV) production. A prototype CIS manufacturing system has been designed and built with compositional uniformity (Cu/In ratio) verified within ±4 atomic percent over the 30x30 cm area. CIS device efficiencies have been measured by the National Renewable Energy Laboratory (NREL) at 7% on a flexible non-sodium-containing substrate and 10% on a soda-lime-silica (SLS) glass substrate. Critical elements of the manufacturing capability include the CIS sequential process selection, uniform large-area material deposition, and in-situ process control. Details of the process and large-area manufacturing approach are discussed and results presented.

  17. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
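
    A sketch of the radially-indexed filtering idea under simplified assumptions: pixels are binned into annuli by distance from the optical axis, and each annulus is filtered with its own kernel. Simple unsharp-mask kernels stand in for the designed restoration filters described in the paper.

    ```python
    # Space-varying FIR sharpening via a radial filter bank (illustrative).
    import numpy as np
    from scipy.ndimage import convolve

    def unsharp_kernel(strength):
        """3x3 unit-DC sharpening kernel; strength controls boost."""
        k = -strength * np.ones((3, 3)) / 8.0
        k[1, 1] = 1.0 + strength
        return k

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - 64, xx - 64)             # distance from optical axis
    bins = np.digitize(r, [30, 60])            # 3 annuli: center, mid, edge

    # Stronger sharpening off-axis, where optical aberrations are larger
    filtered = [convolve(img, unsharp_kernel(s)) for s in (0.5, 1.0, 2.0)]
    out = np.choose(bins, filtered)            # pick per-pixel by annulus
    print("output range:", out.min(), out.max())
    ```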

  18. Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach

    PubMed Central

    Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.

    2007-01-01

    Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408

  19. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
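
    For reference, a compact sketch of a standard CHO computation on synthetic channelized data; the channel matrix and images are random stand-ins, and the paper's LOOL covariance estimator and shuffle-based AUC statistics are not reproduced here.

    ```python
    # Channelized Hotelling observer: channelize, estimate the channel
    # covariance, form the Hotelling template, compute detectability.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_img = 10, 200
    U = rng.standard_normal((64 * 64, n_ch))        # stand-in channel matrix

    absent = rng.standard_normal((n_img, 64 * 64))  # signal-absent images
    sig = np.zeros(64 * 64)
    sig[2016:2080] = 0.5                            # toy low-contrast signal
    present = absent + sig                          # paired signal-present set

    va, vp = absent @ U, present @ U                # channelized data
    S = 0.5 * (np.cov(va.T) + np.cov(vp.T))         # pooled channel covariance
    w = np.linalg.solve(S, vp.mean(0) - va.mean(0)) # Hotelling template
    t_a, t_p = va @ w, vp @ w                       # observer test statistics
    d2 = (t_p.mean() - t_a.mean())**2 / (0.5 * (t_a.var() + t_p.var()))
    print("detectability d':", np.sqrt(d2))
    ```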

  20. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
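
    As a small illustration of the SVD-based family the review covers, the sketch below projects a toy linear reaction network onto the leading singular vectors of its simulation snapshots (a POD-style reduction; the toy Jacobian is an assumption, not from the paper).

    ```python
    # SVD/POD-style model reduction of a toy linear reaction network.
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)
    n = 30
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy stable Jacobian

    sol = solve_ivp(lambda t, x: A @ x, (0, 10), rng.random(n),
                    t_eval=np.linspace(0, 10, 200))
    snapshots = sol.y                                # n x time snapshot matrix

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.999)) + 1      # keep 99.9% of energy
    Uk = U[:, :k]                                    # reduced basis
    Ar = Uk.T @ A @ Uk                               # k x k reduced dynamics
    print(f"reduced {n} states to {k}")
    ```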

  1. The embodied embedded character of system 1 processing.

    PubMed

    Bellini-Leite, Samuel de Castro

    2013-01-01

    In the last thirty years, a relatively large group of cognitive scientists have begun characterising the mind in terms of two distinct, relatively autonomous systems. To account for paradoxes in empirical results of studies mainly on reasoning, Dual Process Theories were developed. Such Dual Process Theories generally agree that System 1 is rapid, automatic, parallel, and heuristic-based and System 2 is slow, capacity-demanding, sequential, and related to consciousness. While System 2 can still be decently understood from a traditional cognitivist approach, I will argue that it is essential for System 1 processing to be comprehended in an Embodied Embedded approach to Cognition.

  3. Maxi CAI with a Micro.

    ERIC Educational Resources Information Center

    Gerhold, George; And Others

    This paper describes an effective microprocessor-based CAI system which has been repeatedly tested by a large number of students and edited accordingly. Tasks not suitable for microprocessor based systems (authoring, testing, and debugging) were handled on larger multi-terminal systems. This approach requires that the CAI language used on the…

  4. Control technology development

    NASA Astrophysics Data System (ADS)

    Schaechter, D. B.

    1982-03-01

    The main objectives of the control technology development task are as follows. The first is to develop control design techniques based on flexible structural models, rather than simple rigid-body models. Since large space structures are distributed parameter systems, a new degree of freedom, that of sensor/actuator placement, may be exercised for improving control system performance. Another characteristic of large space structures is numerous oscillatory modes within the control bandwidth. Reduced-order controller design models must be developed which produce stable closed-loop systems when combined with the full-order system. Since the date of an actual large-space-structure flight is rapidly approaching, it is vitally important that theoretical developments be tested in actual hardware. Experimental verification is a vital counterpart of all current theoretical developments.

  5. Systems design and comparative analysis of large antenna concepts

    NASA Technical Reports Server (NTRS)

    Garrett, L. B.; Ferebee, M. J., Jr.

    1983-01-01

    Conceptual designs are evaluated and comparative analyses conducted for several large antenna spacecraft for Land Mobile Satellite System (LMSS) communications missions. Structural configurations include trusses, hoop-and-column, and radial-rib designs. The study was conducted using the Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system. The current capabilities, development status, and near-term plans for the IDEAS system are reviewed, and overall capabilities are highlighted. IDEAS is an integrated system of computer-aided design and analysis software used to rapidly evaluate system concepts and technology needs for future advanced spacecraft such as large antennas, platforms, and space stations. The system was developed at Langley to meet a need for rapid, cost-effective, labor-saving approaches to the design and analysis of numerous missions and total spacecraft system options under consideration. IDEAS consists of about 40 technical modules, an efficient executive, data-base and file management software, and interactive graphics display capabilities.

  6. Maximizing User Satisfaction With Office Practice Data Processing Systems

    PubMed Central

    O'Flaherty, Thomas; Jussim, Judith

    1980-01-01

    Significant numbers of physicians are using data processing services and a large number of firms are offering an increasing variety of services. This paper quantifies user dissatisfaction with office practice data processing systems and analyzes factors affecting dissatisfaction in large group practices. Based on this analysis, a proposal is made for a more structured approach to obtaining data processing services in order to lower the risks and increase satisfaction with data processing.

  7. Technology Challenges and Opportunities for Very Large In-Space Structural Systems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Dorsey, John T.; Watson, Judith J.

    2009-01-01

    Space solar power satellites and other large space systems will require creative and innovative concepts in order to achieve economically viable designs. The mass and volume constraints of current and planned launch vehicles necessitate highly efficient structural systems be developed. In addition, modularity and in-space deployment/construction will be enabling design attributes. While current space systems allocate nearly 20 percent of the mass to the primary structure, the very large space systems of the future must overcome subsystem mass allocations by achieving a level of functional integration not yet realized. A proposed building block approach with two phases is presented to achieve near-term solar power satellite risk reduction with accompanying long-term technology advances. This paper reviews the current challenges of launching and building very large space systems from a structures and materials perspective utilizing recent experience. Promising technology advances anticipated in the coming decades in modularity, material systems, structural concepts, and in-space operations are presented. It is shown that, together, the current challenges and future advances in very large in-space structural systems may provide the technology pull/push necessary to make solar power satellite systems more technically and economically feasible.

  8. The Diversity of School Organizational Configurations

    ERIC Educational Resources Information Center

    Lee, Linda C.

    2013-01-01

    School reform on a large scale has largely been unsuccessful. Approaches designed to document and understand the variety of organizational conditions that comprise our school systems are needed so that reforms can be tailored and results scaled. Therefore, this article develops a configurational framework that allows a systematic analysis of many…

  9. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  10. Large-Angle Magnetic Suspension (LAMS)

    NASA Technical Reports Server (NTRS)

    Oglevie, Ronald E.; Eisenhaure, David B.; Downer, James R.

    1988-01-01

    Spherical LAMS is a magnetic suspension that provides the dual functions of a magnetic bearing and a rotor-gimbal system. It provides two degrees of angular freedom within a single magnetic suspension system. The approach employs spherically-shaped magnetic-gap surfaces to achieve much larger angular freedom than available from previous suspensions.

  11. Experimental estimation of transmissibility matrices for industrial multi-axis vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom

    2018-07-01

    Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
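
    For a single input-output pair, a non-parametric H1-style estimate of a transmissibility FRF can be sketched from auto- and cross-spectra; the "isolator" below is a toy low-pass filter and the signals are synthetic, whereas the paper treats the full multi-axis matrix with combined floor and shaker excitations.

    ```python
    # H1 estimate of a transmissibility FRF from floor (input) and payload
    # (output) signals: T(f) = Suy(f) / Suu(f).
    import numpy as np
    from scipy import signal

    fs = 1000.0
    rng = np.random.default_rng(0)
    u = rng.standard_normal(200_000)             # measured floor excitation
    b, a = signal.butter(2, 5.0, fs=fs)          # toy isolator: ~5 Hz low-pass
    y = signal.lfilter(b, a, u) + 0.05 * rng.standard_normal(u.size)

    f, Suu = signal.welch(u, fs=fs, nperseg=4096)   # input auto-spectrum
    _, Suy = signal.csd(u, y, fs=fs, nperseg=4096)  # input-output cross-spectrum
    T = Suy / Suu                                    # H1 transmissibility
    print("|T| at 1 Hz:",  abs(T[np.argmin(abs(f - 1.0))]))   # ~1: transmitted
    print("|T| at 50 Hz:", abs(T[np.argmin(abs(f - 50.0))]))  # small: isolated
    ```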

  12. A systems approach to the management of large projects: Review of NASA experience with societal implications

    NASA Technical Reports Server (NTRS)

    Vaccaro, M. J.

    1973-01-01

    The application of the NASA type management approach to achieve objectives in other fields is considered. The NASA management outlook and the influences of the NASA environment are discussed along with project organization and management, and applications to socio-economic projects.

  13. A Hierarchic System for Information Usage.

    ERIC Educational Resources Information Center

    Lu, John; Markham, David

    This paper demonstrates an approach which enables one to reduce in a systematic way the immense complexity of a large body of knowledge. This approach provides considerable insight into what is known and unknown in a given academic field by systematically and pragmatically ordering the information. As a case study, the authors selected…

  14. A new multiple air beam approach for in-process form error optical measurement

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Li, R.

    2018-07-01

    In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial-frequency form error. Optical measurement methods are of the non-contact type and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the coolant forms an opaque barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed new approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of the new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant, with a coolant thickness of 15 mm; this corresponds to a relative uncertainty (2σ) of up to 4.35%, with the workpiece surface deeply immersed in the opaque coolant. The results also show that the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3, and 5.3 in coolant removal capability, air supply, and air velocity, respectively. These results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.

  15. A new decision sciences for complex systems.

    PubMed

    Lempert, Robert J

    2002-05-14

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.
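
    A toy rendition of the underlying idea, with invented policies and a two-parameter ensemble of futures (not RAND's actual tooling): evaluate each policy across the whole ensemble and rank by worst-case regret rather than expected cost.

    ```python
    # Ensemble-based robust policy comparison via minimax regret.
    import numpy as np

    rng = np.random.default_rng(0)
    # 1000 plausible futures: (climate damage scale, technology growth rate)
    futures = rng.uniform([0.5, 0.0], [3.0, 0.1], size=(1000, 2))

    def cost(policy, future):
        damage, growth = future
        abatement = {"none": 0.0, "modest": 0.4,
                     "adaptive": 0.3 + 0.4 * growth}[policy]
        return damage * (1 - abatement) + 2.0 * abatement  # damages + cost

    policies = ["none", "modest", "adaptive"]
    C = np.array([[cost(p, f) for f in futures] for p in policies])
    regret = C - C.min(axis=0)        # regret vs. best policy in each future
    worst = regret.max(axis=1)        # worst-case regret across the ensemble
    for p, w in zip(policies, worst):
        print(f"{p:8s} worst-case regret {w:.2f}")
    ```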

  16. Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics.

    PubMed

    Jurling, Alden S; Fienup, James R

    2014-03-01

    Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.

  17. National Implications for Urban School Systems: Strategic Planning in the Human Resource Management Department in a Large Urban School District

    ERIC Educational Resources Information Center

    Johnson, Clarence; Kritsonis, William Allan

    2007-01-01

    This article addresses several key ongoing issues in a large urban school district. The literature focuses on what makes a large urban school district effective in Human Resource Management. Effectiveness is addressed through recruitment and retention practices. A comparison of the school district with current research is the main approach to the…

  18. Large-Scale Brain Systems in ADHD: Beyond the Prefrontal-Striatal Model

    PubMed Central

    Castellanos, F. Xavier; Proal, Erika

    2012-01-01

    Attention-deficit/hyperactivity disorder (ADHD) has long been thought to reflect dysfunction of prefrontal-striatal circuitry, with involvement of other circuits largely ignored. Recent advances in systems neuroscience-based approaches to brain dysfunction enable the development of models of ADHD pathophysiology that encompass a number of different large-scale “resting state” networks. Here we review progress in delineating large-scale neural systems and illustrate their relevance to ADHD. We relate frontoparietal, dorsal attentional, motor, visual, and default networks to the ADHD functional and structural literature. Insights emerging from mapping intrinsic brain connectivity networks provide a potentially mechanistic framework for understanding aspects of ADHD, such as neuropsychological and behavioral inconsistency, and the possible role of primary visual cortex in attentional dysfunction in the disorder. PMID:22169776

  19. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate if strong cross-scale relationships predominate. Here, we propose an 'across-scale approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  20. Advancing Risk Assessment through the Application of Systems Toxicology

    PubMed Central

    Sauer, John Michael; Kleensang, André; Peitsch, Manuel C.; Hayes, A. Wallace

    2016-01-01

    Risk assessment is the process of quantifying the probability of a harmful effect to individuals or populations from human activities. Mechanistic approaches to risk assessment have been generally referred to as systems toxicology. Systems toxicology makes use of advanced analytical and computational tools to integrate classical toxicology and quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Three presentations including two case studies involving both in vitro and in vivo approaches described the current state of systems toxicology and the potential for its future application in chemical risk assessment. PMID:26977253

  1. Making automated computer program documentation a feature of total system design

    NASA Technical Reports Server (NTRS)

    Wolf, A. W.

    1970-01-01

    It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given.

  2. Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.

    PubMed

    Lijun Long; Jun Zhao

    2017-04-01

    In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched setting to the switched setting by designing switched first-order filters, which overcomes the repeated "explosion of complexity" problem. Also, a dual common coordinates transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers of the subsystems are explicitly designed. It is proved that the approach can guarantee semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A system of two inverted pendulums is provided to demonstrate the effectiveness of the proposed method.

  3. Automating security monitoring and analysis for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Sobajic, Dejan J.; Pao, Yoh-Han

    1990-01-01

    Operating a large, space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but their application aboard Space Station Freedom will consume too much processing time. A new approach for monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is accurate and fast; and thus, it can free the Space Station Freedom's power control computers for other tasks.

  4. Automating security monitoring and analysis for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Sobajic, Dejan J.; Pao, Yoh-Han

    1990-01-01

    Operating a large, space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but their application aboard Space Station Freedom will consume too much processing time. A novel approach for monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is accurate and fast; and thus, it can free the Space Station Freedom's power control computers for other tasks.

  5. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in user-supplied definitions. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
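
    A toy sketch of the virtual-component idea under stated assumptions: a chain of data-flow components is wrapped as one composite unit that carries its own built-in test cases. The component functions and expected values are hypothetical, not from the paper.

    ```python
    from typing import Callable, List

    Component = Callable[[float], float]

    def virtual_component(chain: List[Component]) -> Component:
        """Compose several data-flow components into one testable unit."""
        def run(x: float) -> float:
            for comp in chain:
                x = comp(x)
            return x
        return run

    # Built-in test cases: (input, expected) pairs kept with the assembly.
    scale = lambda x: 2.0 * x
    offset = lambda x: x + 1.0
    vc = virtual_component([scale, offset])
    for given, expected in [(0.0, 1.0), (2.0, 5.0)]:
        assert abs(vc(given) - expected) < 1e-9
    print("virtual component passed its built-in tests")
    ```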

  6. Three-dimensional digital holographic aperture synthesis for rapid and highly-accurate large-volume metrology

    NASA Astrophysics Data System (ADS)

    Crouch, Stephen; Kaylor, Brant M.; Barber, Zeb W.; Reibel, Randy R.

    2015-09-01

    Currently large volume, high accuracy three-dimensional (3D) metrology is dominated by laser trackers, which typically utilize a laser scanner and cooperative reflector to estimate points on a given surface. The dependency upon the placement of cooperative targets dramatically inhibits the speed at which metrology can be conducted. To increase speed, laser scanners or structured illumination systems can be used directly on the surface of interest. Both approaches are restricted in their axial and lateral resolution at longer stand-off distances due to the diffraction limit of the optics used. Holographic aperture ladar (HAL) and synthetic aperture ladar (SAL) can enhance the lateral resolution of an imaging system by synthesizing much larger apertures by digitally combining measurements from multiple smaller apertures. Both of these approaches only produce two-dimensional imagery and are therefore not suitable for large volume 3D metrology. We combined the SAL and HAL approaches to create a swept frequency digital holographic 3D imaging system that provides rapid measurement speed for surface coverage with unprecedented axial and lateral resolution at longer standoff ranges. The technique yields a "data cube" of Fourier domain data, which can be processed with a 3D Fourier transform to reveal a 3D estimate of the surface. In this paper, we provide the theoretical background for the technique and show experimental results based on an ultra-wideband frequency modulated continuous wave (FMCW) chirped heterodyne ranging system showing ~100 micron lateral and axial precisions at >2 m standoff distances.
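
    The final processing step described above lends itself to a compact sketch: a Fourier-domain "data cube" is windowed, zero-padded, and passed through a single 3D FFT to form the 3D surface estimate. The cube below is random placeholder data, and the window and padding sizes are illustrative choices, not the paper's parameters.

    ```python
    import numpy as np

    nx, ny, nf = 64, 64, 256              # aperture samples x chirp samples
    cube = np.random.randn(nx, ny, nf) + 1j * np.random.randn(nx, ny, nf)

    window = np.hanning(nf)               # taper the chirp axis to cut sidelobes
    image = np.fft.fftshift(np.fft.fftn(cube * window, s=(128, 128, 512)))
    surface_estimate = np.abs(image)      # peaks localise scatterers in 3D
    print(surface_estimate.shape)         # (128, 128, 512)
    ```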

  8. Old Wine, New Bottles.

    ERIC Educational Resources Information Center

    Cibulka, James G.

    2001-01-01

    Discusses school governance systems and describes governance overhauls in two large urban school districts, Washington, D.C., and Baltimore, focusing on role of mayor in improving troubled public education systems. Identifies benefits and shortcomings of approach, concluding mayoral control can make a difference but right combination of…

  9. Existence and control of Legionella bacteria in building water systems: A review.

    PubMed

    Springston, John P; Yocavitch, Liana

    2017-02-01

    Legionellae are waterborne bacteria that can cause potentially fatal Legionnaires' disease (LD), as well as Pontiac Fever. Public concern about Legionella exploded following the 1976 outbreak at the American Legion conference in Philadelphia, where 221 attendees contracted pneumonia and 34 died. Since that time, a variety of different control methods and strategies have been developed and implemented in an effort to eradicate Legionella from building water systems. Despite these efforts, the incidence of LD has been steadily increasing in the U.S. for more than a decade. Public health and occupational hygiene professionals have maintained an active debate regarding best practices for management and control of Legionella. Professional opinion remains divided with respect to the relative merits of performing routine sampling for Legionella, vs. the passive, reactive approach that has been largely embraced by public health officials and facility owners. Given the potential risks and ramifications associated with waiting to assess systems for Legionella until after disease has been identified and confirmed, a proactive program of periodic testing for Legionella, along with proper water treatment, is the best approach to avoiding large-scale disease outbreaks.

  10. Abasy Atlas: a comprehensive inventory of systems, global network properties and systems-level elements across bacteria

    PubMed Central

    Ibarra-Arellano, Miguel A.; Campos-González, Adrián I.; Treviño-Quintanilla, Luis G.; Tauch, Andreas; Freyre-González, Julio A.

    2016-01-01

    The availability of databases electronically encoding curated regulatory networks and of high-throughput technologies and methods to discover regulatory interactions provides an invaluable source of data to understand the principles underpinning the organization and evolution of these networks responsible for cellular regulation. Nevertheless, data in these sources never go beyond the regulon level, despite the fact that regulatory networks are complex hierarchical-modular structures still challenging our understanding. This creates the need for an inventory of systems across a large range of organisms, a key step to rendering feasible comparative systems biology approaches. In this work, we take the first step towards a global understanding of the organization of regulatory networks by making a cartography of the functional architectures of diverse bacteria. Abasy (Across-bacteria systems) Atlas provides a comprehensive inventory of annotated functional systems, global network properties and systems-level elements (global regulators, modular genes shaping functional systems, basal machinery genes and intermodular genes) predicted by the natural decomposition approach for reconstructed and meta-curated regulatory networks across a large range of bacteria, including pathogenically and biotechnologically relevant organisms. The meta-curation of regulatory datasets provides the most complete and reliable set of regulatory interactions currently available, which can even be projected into subsets by considering the weight of evidence supporting them or the systems that they belong to. In addition, Abasy Atlas provides data enabling large-scale comparative systems biology studies aimed at understanding the common principles and particular lifestyle adaptations of systems across bacteria. Abasy Atlas contains systems and system-level elements for 50 regulatory networks comprising 78 649 regulatory interactions covering 42 bacteria in nine taxa, containing 3708 regulons and 1776 systems. All this brings together a large corpus of data that will surely inspire studies to generate hypotheses regarding the principles governing the evolution and organization of systems and the functional architectures controlling them. Database URL: http://abasy.ccg.unam.mx PMID:27242034

  11. Zero-gravity quantity gaging system

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Zero-Gravity Quantity Gaging System program is a technology development effort funded by NASA-LeRC and contracted by NASA-JSC to develop and evaluate zero-gravity quantity gaging system concepts suitable for application to large, on-orbit cryogenic oxygen and hydrogen tankage. The contract effective date was 28 May 1985. During performance of the program, 18 potential quantity gaging approaches were investigated for their merit and suitability for gaging two-phase cryogenic oxygen and hydrogen in zero-gravity conditions. These approaches were subjected to a comprehensive trade study and selection process, which found that the RF modal quantity gaging approach was the most suitable for both liquid oxygen and liquid hydrogen applications. This selection was made with NASA-JSC concurrence.

  12. Identification of technology options for reducing nitrogen pollution in cropping systems of Pujiang*

    PubMed Central

    Fang, Bin; Wang, Guang-huo; Van den Berg, Marrit; Roetter, Reimund

    2005-01-01

    This work analyses the potential of technology options for reducing nitrogen pollution in the cropping systems of Pujiang County in Eastern China’s Zhejiang Province, where rice and vegetables are important cropping systems. We used a case study approach involving comparison of farmer practices and improved technologies. This approach allows an assessment of the impact of technology on pollution, is forward looking, and can yield information on the potential of on-the-shelf technology and provide opportunities for technology development. The approach particularly suits newly developed rice technologies with large potential for reducing nitrogen pollution, as well as future rice and vegetable technologies. The results showed that substantial reductions in nitrogen pollution are feasible for both types of crops. PMID:16187411

  13. Identification of technology options for reducing nitrogen pollution in cropping systems of Pujiang.

    PubMed

    Fang, Bin; Wang, Guang-Huo; Van den Berg, Marrit; Roetter, Reimund

    2005-10-01

    This work analyses the potential of technology options for reducing nitrogen pollution in the cropping systems of Pujiang County in Eastern China's Zhejiang Province, where rice and vegetables are important cropping systems. We used a case study approach involving comparison of farmer practices and improved technologies. This approach allows an assessment of the impact of technology on pollution, is forward looking, and can yield information on the potential of on-the-shelf technology and provide opportunities for technology development. The approach particularly suits newly developed rice technologies with large potential for reducing nitrogen pollution, as well as future rice and vegetable technologies. The results showed that substantial reductions in nitrogen pollution are feasible for both types of crops.

  14. Increasing CAD system efficacy for lung texture analysis using a convolutional network

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves

    2016-03-01

    The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully-3D approach exploiting a multi-scale morphological analysis which showed good performance in detecting diseased areas, but with the major drawback of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples), whereas the previous system is oversensitive due to its multi-scale response to patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10-patient database of various lung pathologies, showing a sharp increase in true detections.
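
    A minimal sketch of a patch-level convolutional texture classifier of the kind that could refine the CAD output; the architecture, patch size, and class count are illustrative assumptions, not the network from the paper.

    ```python
    import torch
    import torch.nn as nn

    class TextureNet(nn.Module):
        def __init__(self, n_classes: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, n_classes)

        def forward(self, x):                 # x: (batch, 1, 32, 32) CT patches
            h = self.features(x)
            return self.classifier(h.flatten(1))

    net = TextureNet()
    patches = torch.randn(8, 1, 32, 32)       # placeholder CT patches
    probs = net(patches).softmax(dim=1)       # per-class confidence per patch
    print(probs.shape)                        # torch.Size([8, 4])
    ```

    One plausible combination rule, in the spirit of the paper, is to keep a CAD detection only when the network assigns high probability to the same pathological class, trading a little sensitivity for specificity.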

  15. Robustness and cognition in stabilization problem of dynamical systems based on asymptotic methods

    NASA Astrophysics Data System (ADS)

    Dubovik, S. A.; Kabanov, A. A.

    2017-01-01

    The problem of synthesis of stabilizing systems based on principles of cognitive (logical-dynamic) control for mobile objects used under uncertain conditions is considered. This direction in control theory is based on the principles of guaranteed robust synthesis focused on worst-case scenarios of the controlled process. The guaranteeing approach can provide functioning of the system with the required quality and reliability only under sufficiently small disturbances and in the absence of large deviations from some regular features of the controlled process. The main tool here for the analysis of large deviations and the prediction of critical states is the action functional. After the forecast is built, the choice of anti-crisis control is a supervisory control problem that optimizes the control system in normal mode and prevents escape of the controlled process into critical states. An essential aspect of the approach presented here is the presence of a two-level (logical-dynamic) control: the input data are used not only to generate synthesized feedback (local robust synthesis) in advance (off-line), but also to make decisions about the current (on-line) quality of stabilization in the global sense. An example of applying the presented approach to the development of a ship tilting prediction system is considered.

  16. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
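
    A toy comparison in the spirit of the study, under stated assumptions: for a simple production-degradation model (illustrative rates, not the auxin model), the deterministic ODE gives the mean behaviour while a Gillespie simulation additionally exposes the variability.

    ```python
    import numpy as np

    k, g, t_end = 10.0, 0.1, 100.0            # production and degradation rates

    def gillespie(seed):
        rng, t, n = np.random.default_rng(seed), 0.0, 0
        while t < t_end:
            a_prod, a_deg = k, g * n          # reaction propensities
            a_total = a_prod + a_deg
            t += rng.exponential(1.0 / a_total)
            n += 1 if rng.random() < a_prod / a_total else -1
        return n

    # Deterministic model dn/dt = k - g*n has steady state k/g = 100.
    finals = [gillespie(s) for s in range(200)]
    print("ODE steady state:", k / g)
    print("stochastic mean/sd: %.1f / %.1f" % (np.mean(finals), np.std(finals)))
    ```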

  17. A Unique Large-Scale Undergraduate Research Experience in Molecular Systems Biology for Non-Mathematics Majors

    ERIC Educational Resources Information Center

    Kappler, Ulrike; Rowland, Susan L.; Pedwell, Rhianna K.

    2017-01-01

    Systems biology is frequently taught with an emphasis on mathematical modeling approaches. This focus effectively excludes most biology, biochemistry, and molecular biology students, who are not mathematics majors. The mathematical focus can also present a misleading picture of systems biology, which is a multi-disciplinary pursuit requiring…

  18. The Air Force Advanced Instructional System (AIS): An Overview.

    ERIC Educational Resources Information Center

    Yasutake, Joseph Y.; Stobie, William H.

    The Air Force Advanced Instructional System (AIS) is a prototype computer-based multimedia system for the administration and management of individualized technical training on a large scale. The paper provides an overview of the AIS: (1) its purposes and goals, (2) the background and rationale for the development approach, (3) a basic description…

  19. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in a soil aggregate as case studies. Biocellion runs on x86-compatible systems with the 64-bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved.

  20. Light sheet theta microscopy for rapid high-resolution imaging of large biological samples.

    PubMed

    Migliori, Bianca; Datta, Malika S; Dupre, Christophe; Apak, Mehmet C; Asano, Shoh; Gao, Ruixuan; Boyden, Edward S; Hermanson, Ola; Yuste, Rafael; Tomer, Raju

    2018-05-29

    Advances in tissue clearing and molecular labeling methods are enabling unprecedented optical access to large intact biological systems. These developments fuel the need for high-speed microscopy approaches to image large samples quantitatively and at high resolution. While light sheet microscopy (LSM), with its high planar imaging speed and low photo-bleaching, can be effective, scaling up to larger imaging volumes has been hindered by the use of orthogonal light sheet illumination. To address this fundamental limitation, we have developed light sheet theta microscopy (LSTM), which uniformly illuminates samples from the same side as the detection objective, thereby eliminating limits on lateral dimensions without sacrificing the imaging resolution, depth, and speed. We present a detailed characterization of LSTM, and demonstrate its complementary advantages over LSM for rapid high-resolution quantitative imaging of large intact samples with high uniform quality. The reported LSTM approach is a significant step for the rapid high-resolution quantitative mapping of the structure and function of very large biological systems, such as a clarified thick coronal slab of human brain and uniformly expanded tissues, and also for rapid volumetric calcium imaging of highly motile animals, such as Hydra, undergoing non-isomorphic body shape changes.

  1. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
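
    A toy sketch of the discrete concept-selection problem the thesis describes: a genetic algorithm picks one technology option per subsystem to maximise an estimated performance score under a mass budget. The option tables, budget, and GA settings are all illustrative assumptions.

    ```python
    import random

    random.seed(1)
    OPTIONS = {                               # subsystem: [(performance, mass), ...]
        "power":   [(3, 40), (5, 70), (8, 120)],
        "comms":   [(2, 15), (4, 30), (7, 60)],
        "thermal": [(1, 10), (3, 25), (5, 45)],
    }
    KEYS, MASS_BUDGET = list(OPTIONS), 160

    def fitness(genome):
        perf = sum(OPTIONS[k][g][0] for k, g in zip(KEYS, genome))
        mass = sum(OPTIONS[k][g][1] for k, g in zip(KEYS, genome))
        return perf if mass <= MASS_BUDGET else 0   # infeasible designs score zero

    def evolve(pop_size=30, gens=50):
        pop = [[random.randrange(3) for _ in KEYS] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(KEYS))    # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:               # mutation
                    child[random.randrange(len(KEYS))] = random.randrange(3)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(dict(zip(KEYS, best)), "fitness:", fitness(best))
    ```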

  2. Measuring the embodied energy in drinking water supply systems: a case study in the Great Lakes region.

    PubMed

    Mo, Weiwei; Nasiri, Fuzhan; Eckelman, Matthew J; Zhang, Qiong; Zimmerman, Julie B

    2010-12-15

    A sustainable supply of both energy and water is critical to long-term national security, effective climate policy, natural resource sustainability, and social wellbeing. These two critical resources are inextricably and reciprocally linked; the production of energy requires large volumes of water, while the treatment and distribution of water is also significantly dependent upon energy. In this paper, a hybrid analysis approach is proposed to estimate embodied energy and to perform a structural path analysis of drinking water supply systems. The applicability of this approach is then tested through a case study of a large municipal water utility (city of Kalamazoo) in the Great Lakes region to provide insights on the issues of water-energy pricing and carbon footprints. Kalamazoo drinking water requires approximately 9.2 MJ/m³ of energy to produce, 30% of which is associated with indirect inputs such as system construction and treatment chemicals.

  3. Automated adaptive inference of phenomenological dynamical models.

    PubMed

    Daniels, Bryan C; Nemenman, Ilya

    2015-08-21

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  4. Motion simulator study of longitudinal stability requirements for large delta wing transport airplanes during approach and landing with stability augmentation systems failed

    NASA Technical Reports Server (NTRS)

    Snyder, C. T.; Fry, E. B.; Drinkwater, F. J., III; Forrest, R. D.; Scott, B. C.; Benefield, T. D.

    1972-01-01

    A ground-based simulator investigation was conducted in preparation for, and correlation with, an in-flight simulator program. The objective of these studies was to define minimum acceptable levels of static longitudinal stability for landing approach following stability augmentation system failures. The airworthiness authorities are presently attempting to establish the requirements for civil transports with only the backup flight control system operating. Using a baseline configuration representative of a large delta wing transport, 20 different configurations, many representing negative static margins, were assessed by three research test pilots in 33 hours of piloted operation. Verification of the baseline model to be used in the TIFS experiment was provided by computed and piloted comparisons with a well-validated reference airplane simulation. Pilot comments and ratings are included, as well as preliminary tracking performance and workload data.

  5. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, not possible to model accurately with analytical approaches.

  6. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  7. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
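
    A Monte Carlo stand-in for the semi-analytic computation described above (the paper's method avoids sampling; this sketch only illustrates the quantities involved): for a second-order closed loop with one uncertain damping parameter, estimate the frequency-dependent mean and variance of the gain and a failure probability against a gain limit. All numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.logspace(-1, 1, 200)               # frequency grid, rad/s
    zeta = rng.normal(0.3, 0.05, size=2000)   # samples of uncertain damping ratio

    # |H(jw)| for H(s) = 1 / (s^2 + 2*zeta*s + 1), one row per zeta sample.
    s = 1j * w
    H = 1.0 / (s**2 + 2.0 * zeta[:, None] * s + 1.0)
    gain = np.abs(H)

    mean_gain = gain.mean(axis=0)             # frequency-dependent mean
    var_gain = gain.var(axis=0)               # frequency-dependent variance
    p_fail = (gain.max(axis=1) > 2.0).mean()  # P(peak gain exceeds the limit)
    print("peak mean gain %.2f, P(fail) = %.3f" % (mean_gain.max(), p_fail))
    ```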

  8. A Comparative Analysis of Meeting the Whole Child Initiatives through Standardized and Competency-Based Education Systems in Terms of Achievement and Meeting the Whole Child Initiatives: Comparing Professional Perceptions and Identified Measurable Results

    ERIC Educational Resources Information Center

    Ward, Jacqueline M.

    2011-01-01

    Traditional education (TE) largely uses a standardized (SbE) approach, while alternatives (nTE) tend toward more of a competency-based (CbE), or student-centered, approach. This comparative analysis examines essential aspects of such pedagogies in determining the effectiveness of schooling systems in meeting the Whole Child Initiative (Souza, 1999; Carter et…

  9. Scope of Various Random Number Generators in Ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are experimented with in a heuristic based on an ant system approach for the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is just to seek an answer to the controversial performance ranking of the generators in a probabilistic/statistical sense.
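
    A minimal ant-system sketch for the TSP with a pluggable random source, so that different pseudo- or quasi-random generators can be swapped in and compared, as the abstract suggests. The parameters (alpha, beta, rho, colony size) are illustrative.

    ```python
    import math
    import random

    def ant_system(dist, rng, n_ants=10, n_iter=50, alpha=1.0, beta=2.0, rho=0.5):
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]         # pheromone matrix
        best_len, best_tour = math.inf, None
        for _ in range(n_iter):
            tours = []
            for _ in range(n_ants):
                start = rng.randrange(n)
                tour, free = [start], set(range(n)) - {start}
                while free:                          # build a tour city by city
                    i = tour[-1]
                    wts = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in free]
                    r = rng.random() * sum(w for _, w in wts)
                    acc = 0.0
                    for j, w in wts:                 # roulette-wheel selection
                        acc += w
                        if acc >= r:
                            break
                    tour.append(j)
                    free.discard(j)
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((length, tour))
                if length < best_len:
                    best_len, best_tour = length, tour
            for row in tau:                          # pheromone evaporation
                for j in range(n):
                    row[j] *= 1.0 - rho
            for length, tour in tours:               # pheromone deposit
                for k in range(n):
                    tau[tour[k]][tour[(k + 1) % n]] += 1.0 / length
        return best_len, best_tour

    random.seed(42)
    pts = [(random.random(), random.random()) for _ in range(12)]
    dist = [[math.dist(p, q) or 1e-9 for q in pts] for p in pts]
    print("best tour length: %.3f" % ant_system(dist, random.Random(7))[0])
    ```

    Any object exposing random() and randrange() can stand in for rng, which is what makes the generator-ranking experiment straightforward.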

  10. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
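
    A compact sketch of the normalise-then-fuse step that underlies such systems: min-max normalisation brings each matcher's scores to a common range, and an equal-weight sum rule combines them. The score distributions and threshold below are synthetic placeholders, not COTS matcher outputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    face = rng.normal(60, 10, 1000)           # raw face-matcher scores
    finger = rng.normal(0.4, 0.1, 1000)       # raw fingerprint scores (other scale)

    def min_max(s):
        return (s - s.min()) / (s.max() - s.min())

    fused = 0.5 * min_max(face) + 0.5 * min_max(finger)   # equal-weight sum rule
    decision = fused > 0.6                                 # illustrative threshold
    print("acceptance rate: %.3f" % decision.mean())
    ```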

  11. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have come to deal with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach markedly improves performance.

  12. A universal approach to the study of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Hwa, Rudolph C.

    2004-07-01

    A large variety of nonlinear systems have been treated by a common approach that emphasizes the fluctuation of spatial patterns. By using the same method of analysis it is possible to discuss the chaotic behaviors of quark jets and logistic map in the same language. Critical behaviors of quark-hadron phase transition in heavy-ion collisions and of photon production at the threshold of lasing can also be described by a common scaling behavior. The universal approach also makes possible an insight into the recently discovered phenomenon of wind reversal in cryogenic turbulence as a manifestation of self-organized criticality.

  13. Toward Exposing Timing-Based Probing Attacks in Web Applications †

    PubMed Central

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-01-01

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610
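
    A minimal sketch of the monitoring idea under stated assumptions: learn a baseline for how long a browser operation usually takes, then flag a burst of outliers as a possible timing-probing attempt. The timings, z-score threshold, and window rule are illustrative, not the prototype's actual detector.

    ```python
    import statistics

    # Baseline durations (ms) for one monitored browser operation.
    baseline_ms = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7]
    mu = statistics.mean(baseline_ms)
    sigma = statistics.stdev(baseline_ms)

    def is_anomalous(sample_ms, z_limit=4.0):
        return abs(sample_ms - mu) > z_limit * sigma

    observed = [12.0, 12.1, 37.5, 36.9, 38.2]   # burst of slow, probe-like calls
    flags = [is_anomalous(t) for t in observed]
    if sum(flags) >= 3:                          # several outliers in one window
        print("possible timing-based probing detected")
    ```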

  14. Toward Exposing Timing-Based Probing Attacks in Web Applications.

    PubMed

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-02-25

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users' browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.

  15. Fire management over large landscapes: a hierarchical approach

    Treesearch

    Kenneth G. Boykin

    2008-01-01

    Management planning for fires becomes increasingly difficult as scale increases. Stratification provides land managers with multiple scales in which to prepare plans. Using statistical techniques, Geographic Information Systems (GIS), and meetings with land managers, we divided a large landscape of over 2 million acres (White Sands Missile Range) into parcels useful in...

  16. A Strategic Central Approach to Data Collection and Integration: A Case of a Research-Intensive University

    ERIC Educational Resources Information Center

    Nair, Chenicheri Sid; Wayland, Chris; Mertova, Patricie

    2010-01-01

    Large organisations, such as universities or financial institutions, generally have access to substantial volumes of data from various "transactional" sources, such as customer service records, email, or student management systems. There is an ever-increasing demand on these large organisations to make rapid, relevant and efficient…

  17. VALUING ACID MINE DRAINAGE REMEDIATION IN WEST VIRGINIA: A HEDONIC MODELING APPROACH INCORPORATING GEOGRAPHIC INFORMATION SYSTEMS

    EPA Science Inventory

    States with active and abandoned mines face large private and public costs to remediate damage to streams and rivers from acid mine drainage (AMD). Appalachian states have an especially large number of contaminated streams and rivers, and the USGS places AMD as the primary source...

  18. DCL System Using Deep Learning Approaches for Land-Based or Ship-Based Real Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2015-09-30

    Clark (2014), "Using High Performance Computing to Explore Large Complex Bioacoustic Soundscapes: Case Study for Right Whale Acoustics," Procedia Computer Science 20.

  19. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing volume of WAMI collections and of features extracted from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
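
    A dependency-free sketch of the pattern described above: the image set is divided into splits, a map step extracts features per image, and a reduce step aggregates the results, mirroring what Hadoop distributes across slave nodes. extract_features is a hypothetical stand-in for the real WAMI feature extractor.

    ```python
    from functools import reduce

    def extract_features(image):
        # Placeholder feature record; a real extractor would process pixels.
        return {"image": image, "n_corners": len(image) % 100}

    def map_split(split):
        return [extract_features(img) for img in split]

    def reduce_results(acc, results):
        acc.extend(results)                   # HDFS-style aggregation stand-in
        return acc

    images = [f"frame_{i:04d}.tif" for i in range(10)]
    splits = [images[i::3] for i in range(3)]   # 3 splits for 3 worker nodes
    partials = [map_split(s) for s in splits]   # would run in parallel on Hadoop
    features = reduce(reduce_results, partials, [])
    print(len(features), "feature records aggregated")
    ```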

  20. Platform options for the Space Station program

    NASA Technical Reports Server (NTRS)

    Mangano, M. J.; Rowley, R. W.

    1986-01-01

    Platforms for polar and 28.5 deg orbits were studied to determine the platform requirements and characteristics necessary to support the science objectives. Large platforms supporting the Earth-Observing System (EOS) were initially studied. Co-orbiting platforms were derived from these designs. Because cost estimates indicated that the large platform approach was likely to be too expensive, require several launches, and generally be excessively complex, studies of small platforms were undertaken. Results of these studies show the small platform approach to be technically feasible at lower overall cost. All designs maximized hardware inheritance from the Space Station program to reduce costs. Science objectives as defined at the time of these studies are largely achievable.

  1. Biologically-inspired approaches for self-organization, adaptation, and collaboration of heterogeneous autonomous systems

    NASA Astrophysics Data System (ADS)

    Steinberg, Marc

    2011-06-01

    This paper presents a selective survey of theoretical and experimental progress in the development of biologically-inspired approaches for complex surveillance and reconnaissance problems with multiple, heterogeneous autonomous systems. The focus is on approaches that may address ISR problems that can quickly become mathematically intractable or otherwise impractical to implement using traditional optimization techniques as the size and complexity of the problem increase. These problems require dealing with complex spatiotemporal objectives and constraints at a variety of levels from motion planning to task allocation. There is also a need to ensure solutions are reliable and robust to uncertainty and communications limitations. First, the paper will provide a short introduction to the current state of relevant biological research as it relates to collective animal behavior. Second, the paper will describe research on largely decentralized, reactive, or swarm approaches that have been inspired by biological phenomena such as schools of fish, flocks of birds, ant colonies, and insect swarms. Next, the paper will discuss approaches towards more complex organizational and cooperative mechanisms in team and coalition behaviors in order to provide mission coverage of large, complex areas. Relevant team behavior may be derived from recent advances in understanding the social and cooperative behaviors used for collaboration by tens of animals with higher-level cognitive abilities such as mammals and birds. Finally, the paper will briefly discuss challenges involved in user interaction with these types of systems.

  2. A Tensor-Based Structural Damage Identification and Severity Assessment

    PubMed Central

    Anaissi, Ali; Makki Alamdari, Mehrisadat; Rakotoarivelo, Thierry; Khoa, Nguyen Lu Dang

    2018-01-01

    Early damage detection is critical for a large set of global ageing infrastructure. Structural Health Monitoring systems provide a sensor-based quantitative and objective approach to continuously monitor these structures, as opposed to traditional engineering visual inspection. Analysing these sensed data is one of the major Structural Health Monitoring (SHM) challenges. This paper presents a novel algorithm to detect and assess damage in structures such as bridges. This method applies tensor analysis for data fusion and feature extraction, and further uses one-class support vector machine on this feature to detect anomalies, i.e., structural damage. To evaluate this approach, we collected acceleration data from a sensor-based SHM system, which we deployed on a real bridge and on a laboratory specimen. The results show that our tensor method outperforms a state-of-the-art approach using the wavelet energy spectrum of the measured data. In the specimen case, our approach succeeded in detecting 92.5% of induced damage cases, as opposed to 61.1% for the wavelet-based approach. While our method was applied to bridges, its algorithm and computation can be used on other structures or sensor-data analysis problems, which involve large series of correlated data from multiple sensors. PMID:29301314
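
    The anomaly-detection step lends itself to a short sketch: train a one-class SVM on features from the healthy structure only, then flag departures as possible damage. The features below are synthetic placeholders, not the tensor-derived features of the paper.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(500, 4))     # healthy-state features
    damaged = rng.normal(2.5, 1.0, size=(50, 4))      # shifted "damage" features

    # Train only on the healthy state; nu bounds the training outlier fraction.
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(healthy)
    pred = model.predict(np.vstack([healthy[:50], damaged]))  # +1 normal, -1 anomaly
    print("flagged as damage:", (pred == -1).sum(), "of", len(pred))
    ```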

  3. Reverse engineering biomolecular systems using -omic data: challenges, progress and opportunities.

    PubMed

    Quo, Chang F; Kaddi, Chanchala; Phan, John H; Zollanvari, Amin; Xu, Mingqing; Wang, May D; Alterovitz, Gil

    2012-07-01

    Recent advances in high-throughput biotechnologies have led to rapidly growing research interest in reverse engineering of biomolecular systems (REBMS). 'Data-driven' approaches, i.e. data mining, can be used to extract patterns from large volumes of biochemical data at molecular-level resolution, while 'design-driven' approaches, i.e. systems modeling, can be used to simulate emergent system properties. Consequently, both data- and design-driven approaches applied to -omic data may lead to novel insights in reverse engineering biological systems that could not be expected before using low-throughput platforms. However, there exist several challenges in this fast growing field of reverse engineering biomolecular systems: (i) to integrate heterogeneous biochemical data for data mining, (ii) to combine top-down and bottom-up approaches for systems modeling and (iii) to validate system models experimentally. In addition to reviewing progress made by the community and opportunities encountered in addressing these challenges, we explore the emerging field of synthetic biology, which is an exciting approach to validate and analyze theoretical system models directly through experimental synthesis, i.e. analysis-by-synthesis. The ultimate goal is to address the present and future challenges in REBMS using an integrated workflow of data mining, systems modeling and synthetic biology.

  4. Thermospheric dynamics - A system theory approach

    NASA Technical Reports Server (NTRS)

    Codrescu, M.; Forbes, J. M.; Roble, R. G.

    1990-01-01

    A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.

  5. Evaluating Action Learning: A Critical Realist Complex Network Theory Approach

    ERIC Educational Resources Information Center

    Burgoyne, John G.

    2010-01-01

    This largely theoretical paper will argue the case for the usefulness of applying network and complex adaptive systems theory to an understanding of action learning and the challenge it is evaluating. This approach, it will be argued, is particularly helpful in the context of improving capability in dealing with wicked problems spread around…

  6. Graduate Development, Discursive Resources and the Employment Relationship at BAE Systems

    ERIC Educational Resources Information Center

    Jenner, Shirley

    2008-01-01

    Purpose: The purpose of this paper is to examine the evolution of an employee opinion survey and to evaluate its impact on the graduate training programme and associated employment relationships. Design/methodology/approach: The paper provides a detailed, longitudinal case study of one large-scale UK organisation. The approach recognises that…

  7. Scalable process for mitigation of laser-damaged potassium dihydrogen phosphate crystal optic surfaces with removal of damaged antireflective coating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elhadj, S.; Steele, W. A.; VanBlarcom, D. S.

    Here, we investigate an approach for the recycling of laser-damaged large-aperture deuterated potassium dihydrogen phosphate (DKDP) crystals used for optical switching (KDP) and for frequency conversion (DKDP) in megajoule-class high-power laser systems. The approach consists of micromachining the surface laser damage sites (mitigation), combined with multiple soaks and ultrasonication steps in a coating solvent to remove, synergistically, both the highly adherent machining debris and the laser-damage-affected antireflection coating. We then identify features of the laser-damage-affected coating, such as the “solvent-persistent” coating and the “burned-in” coating, that are difficult to remove by conventional approaches without damaging the surface. We also provide a solution to the erosion problem identified in this work when colloidal coatings are processed during ultrasonication. Finally, we provide a proof of principle of the approach by testing the full process that includes laser damage mitigation of DKDP test parts, coat stripping, reapplication of a new antireflective coat, and a laser damage test demonstrating performance up to at least 12 J/cm² at UV wavelengths, which is well above current requirements. Our approach ultimately provides a potential path to a scalable recycling loop for the management of optics in large, high-power laser systems that can reduce cost and extend the lifetime of highly valuable and difficult-to-grow large DKDP crystals.

  8. Scalable process for mitigation of laser-damaged potassium dihydrogen phosphate crystal optic surfaces with removal of damaged antireflective coating

    DOE PAGES

    Elhadj, S.; Steele, W. A.; VanBlarcom, D. S.; ...

    2017-03-07

    Here, we investigate an approach for the recycling of laser-damaged large-aperture deuterated potassium dihydrogen phosphate (DKDP) crystals used for optical switching (KDP) and for frequency conversion (DKDP) in megajoule-class high-power laser systems. The approach consists of micromachining the surface laser damage sites (mitigation), combined with multiple soaks and ultrasonication steps in a coating solvent to remove, synergistically, both the highly adherent machining debris and the laser-damage-affected antireflection coating. We then identify features of the laser-damage-affected coating, such as the “solvent-persistent” coating and the “burned-in” coating, that are difficult to remove by conventional approaches without damaging the surface. We also provide a solution to the erosion problem identified in this work when colloidal coatings are processed during ultrasonication. Finally, we provide a proof of principle of the approach by testing the full process that includes laser damage mitigation of DKDP test parts, coat stripping, reapplication of a new antireflective coat, and a laser damage test demonstrating performance up to at least 12 J/cm² at UV wavelengths, which is well above current requirements. Our approach ultimately provides a potential path to a scalable recycling loop for the management of optics in large, high-power laser systems that can reduce cost and extend the lifetime of highly valuable and difficult-to-grow large DKDP crystals.

  9. Performance model-directed data sieving for high-performance I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving for short. It improves the existing data sieving approach in two respects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
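
    As a concrete illustration of the trade-off PMD data sieving automates, the sketch below decides whether one large contiguous read beats many small noncontiguous reads (Python; the cost model, parameter names, and thresholds are assumptions for illustration, not the authors' actual model):

        # Decide whether to sieve: compare one big hole-inclusive read against
        # issuing each small noncontiguous request separately (assumed cost model).
        def should_sieve(offsets, lengths, seek_cost=1e-3, bandwidth=500e6,
                         mem_limit=64 * 2**20):
            """offsets/lengths: parallel lists of sorted noncontiguous requests (bytes)."""
            demanded = sum(lengths)
            span = (offsets[-1] + lengths[-1]) - offsets[0]  # extent including holes
            t_individual = len(offsets) * seek_cost + demanded / bandwidth
            t_sieved = seek_cost + span / bandwidth          # one seek, one large read
            return t_sieved < t_individual and span <= mem_limit

        # Example: 1000 scattered 4 KiB requests inside a ~16 MiB extent -> sieve.
        offs = [i * 16384 for i in range(1000)]
        lens = [4096] * 1000
        print(should_sieve(offs, lens))  # True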

  10. Stochastic Simulation of Biomolecular Networks in Dynamic Environments

    PubMed Central

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.

    2016-01-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
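
    The thinning idea behind Extrande can be sketched for a one-species birth-death model with a time-varying input (Python; the model, rates, lookahead horizon, and propensity bound are illustrative assumptions, not the paper's networks):

        import numpy as np

        rng = np.random.default_rng(0)
        k_in = lambda t: 5.0 + 4.0 * np.sin(0.5 * t)  # time-varying production input
        gamma = 0.1                                    # per-molecule degradation rate

        def extrande(x0=20, t_end=100.0, horizon=1.0):
            """Birth-death process with a time-varying birth propensity."""
            t, x = 0.0, x0
            while t < t_end:
                L = min(horizon, t_end - t)
                B = 9.0 + gamma * x            # bound on total propensity over [t, t+L]
                tau = rng.exponential(1.0 / B)
                if tau > L:                    # no candidate event within the window
                    t += L
                    continue
                t += tau
                a_birth, a_death = k_in(t), gamma * x
                u = rng.uniform(0.0, B)
                if u < a_birth:
                    x += 1                     # production fires
                elif u < a_birth + a_death:
                    x -= 1                     # degradation fires
                # else: the "extra" (null) reaction fires; state unchanged (thinning)
            return x

        print(extrande())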

  11. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
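
    The way a performance map can feed the source terms is sketched below (Python; the map values, work-term scaling, and uniform axial distribution are illustrative assumptions, not LAPIN's actual model):

        import numpy as np

        # One speed line of a hypothetical compressor-stage performance map.
        corrected_flow = np.array([60.0, 80.0, 100.0, 120.0])
        pressure_ratio = np.array([1.45, 1.40, 1.32, 1.20])

        def stage_sources(local_flow, n_points=5):
            """Interpolate the map at the local corrected flow, then spread the
            stage's work addition over the n_points grid points it occupies axially."""
            pr = np.interp(local_flow, corrected_flow, pressure_ratio)
            total_work = (pr - 1.0) * 1.0e5     # stand-in for the stage work term
            return np.full(n_points, total_work / n_points)

        print(stage_sources(90.0))              # per-grid-point source contributions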

  12. A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Hwa-Soo

    1991-01-01

    A large-grain-level method is presented for mapping numerically oriented applications onto multiprocessor systems. The method is based on the large-grain data flow representation of the input application and assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best match' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
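
    Step (3), the priority allocation of computation blocks, can be sketched as classic list scheduling over a data flow DAG with interprocessor communication cost (Python; the graph, costs, and greedy earliest-finish rule are illustrative assumptions, not the thesis algorithm):

        def list_schedule(tasks, deps, cost, comm, n_proc):
            """tasks: topologically ordered block ids; deps: {task: [predecessors]};
            cost: {task: compute time}; comm: cost per dependency crossing processors."""
            proc_free = [0.0] * n_proc          # time each processor becomes free
            placed, finish = {}, {}
            for t in tasks:                     # topological order doubles as priority
                best = None
                for p in range(n_proc):
                    ready = max([finish[d] + (comm if placed[d] != p else 0.0)
                                 for d in deps.get(t, [])] or [0.0])
                    start = max(ready, proc_free[p])
                    if best is None or start + cost[t] < best[0]:
                        best = (start + cost[t], p)
                end, p = best
                placed[t], finish[t] = p, end
                proc_free[p] = end
            return placed, max(finish.values())

        # Example: diamond-shaped data flow graph on 2 processors.
        tasks = ["a", "b", "c", "d"]
        deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
        cost = {"a": 2.0, "b": 3.0, "c": 3.0, "d": 1.0}
        print(list_schedule(tasks, deps, cost, comm=0.5, n_proc=2))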

  13. A SPATIALLY EXPLICIT HIERARCHICAL APPROACH TO MODELING COMPLEX ECOLOGICAL SYSTEMS: THEORY AND APPLICATIONS. (R827676)

    EPA Science Inventory

    Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...

  14. STREET SURFACE STORAGE FOR CONTROL OF COMBINED SEWER SURCHARGE

    EPA Science Inventory

    One type of Best Management Practices (BMPs) available is the use of street storage systems to prevent combined sewer surcharging and to mitigate basement flooding. A case study approach, based primarily on two largely implemented street storage systems, will be used to explain ...

  15. Calculating hyperfine couplings in large ionic crystals containing hundreds of QM atoms: subsystem DFT is the key.

    PubMed

    Kevorkyants, Ruslan; Wang, Xiqiao; Close, David M; Pavanello, Michele

    2013-11-14

    We present an application of the linear scaling frozen density embedding (FDE) formulation of subsystem DFT to the calculation of isotropic hyperfine coupling constants (hfcc's) of atoms belonging to a guanine radical cation embedded in a guanine hydrochloride monohydrate crystal. The model systems range from an isolated guanine to a 15,000 atom QM/MM cluster where the QM region is comprised of 36 protonated guanine cations, 36 chlorine anions, and 42 water molecules. Our calculations show that the embedding effects of the surrounding crystal cannot be reproduced by small model systems nor by a pure QM/MM procedure. Instead, a large QM region is needed to fully capture the complicated nature of the embedding effects in this system. The unprecedented system size for a relativistic all-electron isotropic hfcc calculation can be approached in this work because the local nature of the electronic structure of the organic crystals considered is fully captured by the FDE approach.

  16. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  17. Teaching ERP Systems: A Multi-Perspective View on the ERP System Market

    ERIC Educational Resources Information Center

    Winkelmann, Axel; Leyh, Christian

    2010-01-01

    In order to increase the diversity in IS education, we discuss an approach for teaching medium-sized ERP systems in university courses. Many of today's IS curricula are biased toward a few large ERP packages. Nevertheless, these ERP systems are only a part of the ERP market. Therefore, this paper describes a course outline for an additional course…

  18. Cleared for Launch - Lessons Learned from the OSIRIS-REx System Requirements Verification Program

    NASA Technical Reports Server (NTRS)

    Stevens, Craig; Adams, Angela; Williams, Bradley; Goodloe, Colby

    2017-01-01

    Requirements verification of a large flight system is a challenge. It is especially challenging for engineers taking on their first role in space systems engineering. This paper describes our approach to verification of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) system requirements. It also captures lessons learned along the way from developing systems engineers embroiled in this process. We begin with an overview of the mission and science objectives as well as the project requirements verification program strategy. A description of the requirements flow down is presented including our implementation for managing the thousands of program and element level requirements and associated verification data. We discuss both successes and methods to improve the managing of this data across multiple organizational interfaces. Our approach to verifying system requirements at multiple levels of assembly is presented using examples from our work at instrument, spacecraft, and ground segment levels. We include a discussion of system end-to-end testing limitations and their impacts to the verification program. Finally, we describe lessons learned that are applicable to all emerging space systems engineers using our unique perspectives across multiple organizations of a large NASA program.

  19. Integrating pixel- and polygon-based approaches to wildfire risk assessment: Application to a high-value watershed on the Pike and San Isabel National Forests, Colorado, USA

    Treesearch

    Matthew P. Thompson; Julie W. Gilbertson-Day; Joe H. Scott

    2015-01-01

    We develop a novel risk assessment approach that integrates complementary, yet distinct, spatial modeling approaches currently used in wildfire risk assessment. Motivation for this work stems largely from limitations of existing stochastic wildfire simulation systems, which can generate pixel-based outputs of fire behavior as well as polygon-based outputs of simulated...

  20. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
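
    A minimal sketch of the time-decomposition idea: overlapping windows of time steps are solved independently and their corrections summed, as in an additive Schwarz preconditioner (Python; the lower-bidiagonal implicit-Euler surrogate and the window sizes are assumptions, not the authors' operators):

        import numpy as np

        n, win, overlap = 64, 16, 4
        A = np.eye(n) - 0.9 * np.eye(n, k=-1)       # time-stepping surrogate matrix
        r = np.ones(n)                              # residual to precondition

        def schwarz_apply(A, r):
            z = np.zeros_like(r)
            start = 0
            while start < len(r):
                end = min(start + win, len(r))
                lo = max(0, start - overlap)        # extend the window backwards
                idx = np.arange(lo, end)
                z_loc = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
                z[start:end] += z_loc[start - lo:]  # add back only the owned part
                start += win
            return z

        z = schwarz_apply(A, r)
        print("residual after one preconditioned step:", np.linalg.norm(r - A @ z))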

  1. Time to "go large" on biofilm research: advantages of an omics approach.

    PubMed

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.

  2. An intelligent decomposition approach for efficient design of non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1992-01-01

    The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
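
    The normalized-sensitivity idea can be sketched with finite differences (Python; the toy two-subsystem model and the weak-coupling threshold are illustrative assumptions):

        import numpy as np

        def coupled_outputs(x):
            y1 = 2.0 * x[0] + 0.01 * x[1]    # subsystem 1: weakly driven by x2
            y2 = x[0] ** 2 + 5.0 * x[1]      # subsystem 2: strongly driven by both
            return np.array([y1, y2])

        def normalized_sensitivities(f, x, h=1e-6):
            y = f(x)
            S = np.zeros((len(y), len(x)))
            for j in range(len(x)):
                xp = x.copy(); xp[j] += h
                S[:, j] = (f(xp) - y) / h * (x[j] / y)   # (dy_i/dx_j) * x_j / y_i
            return S

        S = normalized_sensitivities(coupled_outputs, np.array([1.0, 1.0]))
        weak = np.abs(S) < 0.05                  # candidate couplings to suspend
        print(S, weak, sep="\n")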

  3. A System for Sentiment Analysis of Colloquial Arabic Using Human Computation

    PubMed Central

    Al-Subaihin, Afnan S.; Al-Khalifa, Hend S.

    2014-01-01

    We present the implementation and evaluation of a sentiment analysis system that is conducted over Arabic text with evaluative content. Our system is broken into two different components. The first component is a game that enables users to annotate large corpuses of text in a fun manner. The game produces necessary linguistic resources that will be used by the second component which is the sentimental analyzer. Two different algorithms have been designed to employ these linguistic resources to analyze text and classify it according to its sentimental polarity. The first approach is using sentimental tag patterns, which reached a precision level of 56.14%. The second approach is the sentimental majority approach which relies on calculating the number of negative and positive phrases in the sentence and classifying the sentence according to the dominant polarity. The results after evaluating the system for the first sentimental majority approach yielded the highest accuracy level reached by our system which is 60.5% while the second variation scored an accuracy of 60.32%. PMID:24892064
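
    The sentimental majority rule itself is simple enough to sketch (Python; a toy English lexicon stands in for the Arabic linguistic resources produced by the game):

        # Classify a sentence by the dominant polarity of its phrases.
        POSITIVE = {"excellent", "good", "great"}
        NEGATIVE = {"bad", "poor", "terrible"}

        def majority_polarity(sentence):
            words = sentence.lower().split()
            pos = sum(w in POSITIVE for w in words)
            neg = sum(w in NEGATIVE for w in words)
            if pos > neg:
                return "positive"
            if neg > pos:
                return "negative"
            return "neutral"

        print(majority_polarity("great camera but poor battery and terrible screen"))
        # -> "negative" (one positive phrase, two negative phrases)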

  4. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  5. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  6. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111-grid, 16-million-point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four-inch resolution along the surface.

  7. Navigating the currents of seascape genomics: how spatial analyses can augment population genomic studies

    PubMed Central

    Crandall, Eric D.; Liggins, Libby; Bongaerts, Pim; Treml, Eric A.

    2016-01-01

    Population genomic approaches are making rapid inroads in the study of non-model organisms, including marine taxa. To date, these marine studies have predominantly focused on rudimentary metrics describing the spatial and environmental context of their study region (e.g., geographical distance, average sea surface temperature, average salinity). We contend that a more nuanced and considered approach to quantifying seascape dynamics and patterns can strengthen population genomic investigations and help identify spatial, temporal, and environmental factors associated with differing selective regimes or demographic histories. Nevertheless, approaches for quantifying marine landscapes are complicated. Characteristic features of the marine environment, including pelagic living in flowing water (experienced by most marine taxa at some point in their life cycle), require a well-designed spatial-temporal sampling strategy and analysis. Many genetic summary statistics used to describe populations may be inappropriate for marine species with large population sizes, large species ranges, stochastic recruitment, and asymmetrical gene flow. Finally, statistical approaches for testing associations between seascapes and population genomic patterns are still maturing with no single approach able to capture all relevant considerations. None of these issues are completely unique to marine systems and therefore similar issues and solutions will be shared for many organisms regardless of habitat. Here, we outline goals and spatial approaches for landscape genomics with an emphasis on marine systems and review the growing empirical literature on seascape genomics. We review established tools and approaches and highlight promising new strategies to overcome select issues including a strategy to spatially optimize sampling. Despite the many challenges, we argue that marine systems may be especially well suited for identifying candidate genomic regions under environmentally mediated selection and that seascape genomic approaches are especially useful for identifying robust locus-by-environment associations. PMID:29491947

  8. Navigating the currents of seascape genomics: how spatial analyses can augment population genomic studies.

    PubMed

    Riginos, Cynthia; Crandall, Eric D; Liggins, Libby; Bongaerts, Pim; Treml, Eric A

    2016-12-01

    Population genomic approaches are making rapid inroads in the study of non-model organisms, including marine taxa. To date, these marine studies have predominantly focused on rudimentary metrics describing the spatial and environmental context of their study region (e.g., geographical distance, average sea surface temperature, average salinity). We contend that a more nuanced and considered approach to quantifying seascape dynamics and patterns can strengthen population genomic investigations and help identify spatial, temporal, and environmental factors associated with differing selective regimes or demographic histories. Nevertheless, approaches for quantifying marine landscapes are complicated. Characteristic features of the marine environment, including pelagic living in flowing water (experienced by most marine taxa at some point in their life cycle), require a well-designed spatial-temporal sampling strategy and analysis. Many genetic summary statistics used to describe populations may be inappropriate for marine species with large population sizes, large species ranges, stochastic recruitment, and asymmetrical gene flow. Finally, statistical approaches for testing associations between seascapes and population genomic patterns are still maturing with no single approach able to capture all relevant considerations. None of these issues are completely unique to marine systems and therefore similar issues and solutions will be shared for many organisms regardless of habitat. Here, we outline goals and spatial approaches for landscape genomics with an emphasis on marine systems and review the growing empirical literature on seascape genomics. We review established tools and approaches and highlight promising new strategies to overcome select issues including a strategy to spatially optimize sampling. Despite the many challenges, we argue that marine systems may be especially well suited for identifying candidate genomic regions under environmentally mediated selection and that seascape genomic approaches are especially useful for identifying robust locus-by-environment associations.

  9. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    PubMed

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation, and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community supports a strategic approach to evolutionary development and provides essential product differentiation, enabling terminology providers and systems suppliers to focus on end-user requirements.

  10. Windvan laser study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The goal of defining a CO2 laser transmitter approach suited to Shuttle Coherent Atmospheric Lidar Experiment (SCALE) requirements is discussed. The adaptation of the existing WINDVAN system to the shuttle environment is addressed. The size, weight, reliability, and efficiency of the existing WINDVAN system are largely compatible with SCALE requirements. Repackaging is needed for compatibility with the vacuum and thermal environments. Changes are required to ensure survival of launch and landing mechanical, vibration, and acoustic loads. Existing WINDVAN thermal management approaches, which depend on convection, need to be upgraded for zero-gravity operations.

  11. Systems biology and livestock production.

    PubMed

    Headon, D

    2013-12-01

    The mapping of complete sets of genes, transcripts and proteins from many organisms has prompted the development of new '-omic' technologies for collecting and analysing very large amounts of data. Now that the tools to generate and interrogate such complete data sets are widely used, much of the focus of biological research has begun to turn towards understanding systems as a whole, rather than studying their components in isolation. This very broadly defined systems approach is being deployed across a range of problems and scales of organisation, including many aspects of the animal sciences. Here I review selected examples of this systems approach as applied to poultry and livestock production, product quality and welfare.

  12. Tensor methodology and computational geometry in direct computational experiments in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia

    2017-07-01

    The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is treated as the elementary computational object. The tensor representation of computational objects yields a straightforward, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the algorithms, since the resulting numerical procedures exhibit natural parallelism. Among other means, the advantages of the proposed approach are achieved by representing the motion of large particles of a continuous medium in dual coordinate systems and performing computing operations in the projections of these two coordinate systems with direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.

  13. Enhanced conformational sampling via novel variable transformations and very large time-step molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tuckerman, Mark

    2006-03-01

    One of the computational grand challenge problems is to develop methodology capable of sampling conformational equilibria in systems with rough energy landscapes. If met, many important problems, most notably protein folding, could be significantly impacted. In this talk, two new approaches for addressing this problem will be presented. First, it will be shown how molecular dynamics can be combined with a novel variable transformation designed to warp configuration space in such a way that barriers are reduced and attractive basins stretched. This method rigorously preserves equilibrium properties while leading to very large enhancements in sampling efficiency. Extensions of this approach to the calculation/exploration of free energy surfaces will be discussed. Next, a new very large time-step molecular dynamics method will be introduced that overcomes the resonances which plague many molecular dynamics algorithms. The performance of the methods is demonstrated on a variety of systems including liquid water, long polymer chains, simple protein models, and oligopeptides.

  14. Guide for preparing active solar heating systems operation and maintenance manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-01-01

    This book presents a systematic and standardized approach to the preparation of operation and maintenance (O&M) manuals for active solar heating systems. It provides an industry consensus on the best operating and maintenance procedures for large commercial-scale solar service water and space heating systems. A sample O&M manual is included, in a 3-ring binder.

  15. Cost-effectiveness analysis of a system-based approach for managing neonatal jaundice and preventing kernicterus in Ontario.

    PubMed

    Xie, Bin; da Silva, Orlando; Zaric, Greg

    2012-01-01

    To evaluate the incremental cost-effectiveness of a system-based approach for the management of neonatal jaundice and the prevention of kernicterus in term and late-preterm (≥35 weeks) infants, compared with the traditional practice based on visual inspection and selected bilirubin testing. Two hypothetical cohorts of 150,000 term and late-preterm neonates were used to compare the costs and outcomes associated with the use of a system-based or traditional practice approach. Data for the evaluation were obtained from the case costing centre at a large teaching hospital in Ontario, supplemented by data from the literature. The per child cost for the system-based approach cohort was $176, compared with $173 in the traditional practice cohort. The higher cost associated with the system-based cohort reflects increased costs for predischarge screening and treatment and increased postdischarge follow-up visits. These costs are partially offset by reduced costs from fewer emergency room visits, hospital readmissions and kernicterus cases. Compared with the traditional approach, the cost to prevent one kernicterus case using the system-based approach was $570,496, the cost per life year gained was $26,279, and the cost per quality-adjusted life year gained was $65,698. The cost to prevent one kernicterus case using the system-based approach is much lower than previously reported in the literature.
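
    The reported figures can be checked with the standard incremental cost-effectiveness arithmetic (Python; the per-child costs and cohort size come from the abstract, while the implied cases prevented and QALYs gained are back-calculated here and are therefore only illustrative):

        # Incremental cost-effectiveness of the system-based approach.
        cohort = 150_000
        cost_system, cost_traditional = 176.0, 173.0

        incremental_cost = (cost_system - cost_traditional) * cohort  # $450,000 total
        cases_prevented = incremental_cost / 570_496    # implied by the $/case figure
        qalys_gained = incremental_cost / 65_698        # implied by the $/QALY figure

        print(f"incremental cost: ${incremental_cost:,.0f}")
        print(f"implied kernicterus cases prevented: {cases_prevented:.2f}")
        print(f"implied QALYs gained: {qalys_gained:.1f}")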

  16. Cost-effectiveness analysis of a system-based approach for managing neonatal jaundice and preventing kernicterus in Ontario

    PubMed Central

    Xie, Bin; da Silva, Orlando; Zaric, Greg

    2012-01-01

    OBJECTIVE: To evaluate the incremental cost-effectiveness of a system-based approach for the management of neonatal jaundice and the prevention of kernicterus in term and late-preterm (≥35 weeks) infants, compared with the traditional practice based on visual inspection and selected bilirubin testing. STUDY DESIGN: Two hypothetical cohorts of 150,000 term and late-preterm neonates were used to compare the costs and outcomes associated with the use of a system-based or traditional practice approach. Data for the evaluation were obtained from the case costing centre at a large teaching hospital in Ontario, supplemented by data from the literature. RESULTS: The per child cost for the system-based approach cohort was $176, compared with $173 in the traditional practice cohort. The higher cost associated with the system-based cohort reflects increased costs for predischarge screening and treatment and increased postdischarge follow-up visits. These costs are partially offset by reduced costs from fewer emergency room visits, hospital readmissions and kernicterus cases. Compared with the traditional approach, the cost to prevent one kernicterus case using the system-based approach was $570,496, the cost per life year gained was $26,279, and the cost per quality-adjusted life year gained was $65,698. CONCLUSION: The cost to prevent one kernicterus case using the system-based approach is much lower than previously reported in the literature. PMID:23277747

  17. Thermal management of batteries

    NASA Astrophysics Data System (ADS)

    Gibbard, H. F.; Chen, C.-C.

    Control of the internal temperature during high rate discharge or charge can be a major design problem for large, high energy density battery systems. A systematic approach to the thermal management of such systems is described for different load profiles based on: thermodynamic calculations of internal heat generation; calorimetric measurements of heat flux; analytical and finite difference calculations of the internal temperature distribution; appropriate system designs for heat removal and temperature control. Examples are presented of thermal studies on large lead-acid batteries for electrical utility load levelling and nickel-zinc and lithium-iron sulphide batteries for electric vehicle propulsion.
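
    A back-of-envelope sketch of the first ingredient, internal heat generation, and its thermal consequence is given below (Python; all cell parameters are illustrative assumptions, and the reversible entropic heat term is neglected):

        # Ohmic heat generation and adiabatic temperature rise over one discharge.
        I = 200.0     # discharge current, A (large traction/load-leveling module)
        R = 0.005     # effective internal resistance, ohm
        t = 3600.0    # one-hour discharge, s
        m = 30.0      # module mass, kg
        cp = 900.0    # effective specific heat, J/(kg*K)

        q = I * I * R           # irreversible (Joule) heating rate, W
        dT = q * t / (m * cp)   # temperature rise if no heat is removed
        print(f"heat rate {q:.0f} W, adiabatic rise {dT:.1f} K over {t/3600:.0f} h")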

  18. Iris indexing based on local intensity order pattern

    NASA Astrophysics Data System (ADS)

    Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo

    2017-03-01

    In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce the matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and is compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.

  19. Joint nonlinearity effects in the design of a flexible truss structure control system

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1986-01-01

    Nonlinear effects are introduced in the dynamics of large space truss structures by the connecting joints which are designed with rather important tolerances to facilitate the assembly of the structures in space. The purpose was to develop means to investigate the nonlinear dynamics of the structures, particularly the limit cycles that might occur when active control is applied to the structures. An analytical method was sought and derived to predict the occurrence of limit cycles and to determine their stability. This method is mainly based on the quasi-linearization of every joint using describing functions. This approach was proven successful when simple dynamical systems were tested. Its applicability to larger systems depends on the amount of computations it requires, and estimates of the computational task tend to indicate that the number of individual sources of nonlinearity should be limited. Alternate analytical approaches, which do not account for every single nonlinearity, or the simulation of a simplified model of the dynamical system should, therefore, be investigated to determine a more effective way to predict limit cycles in large dynamical systems with an important number of distributed nonlinearities.

  20. A Visual Analytics Approach for Station-Based Air Quality Data

    PubMed Central

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-01-01

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117

  1. A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong

    Recent trends of CMOS scaling and increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help readers gain insights into the emerging memory device technologies and their potential use in designing future computing systems.

  2. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  3. Two-tier Haddon matrix approach to fault analysis of accidents and cybernetic search for relationship to effect operational control: a case study at a large construction site.

    PubMed

    Mazumdar, Atmadeep; Sen, Krishna Nirmalya; Lahiri, Balendra Nath

    2007-01-01

    The Haddon matrix is a potential tool for recognizing hazards in any operating engineering system. This paper presents a case study of operational hazards at a large construction site. The fishbone structure helps to visualize and relate the chain of events which led to the failure of the system. The two-tier Haddon matrix approach helps to analyze the problem and subsequently prescribes preventive steps. A cybernetic approach is used to establish the relationships among event variables and to identify the ones with the most potential. Based on cybernetic concepts such as control responsiveness and controllability salience, the key event variables in this case study are (a) uncontrolled swing of the sheet contributing to energy, (b) slippage of the sheet from its anchor, (c) restricted longitudinal and transverse swing or rotation about the suspension, (d) guilt or uncertainty of the crane driver, and (e) safe working practices and environment.

  4. A Visual Analytics Approach for Station-Based Air Quality Data.

    PubMed

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  5. Teaching Medium-Sized ERP Systems - A Problem-Based Learning Approach

    NASA Astrophysics Data System (ADS)

    Winkelmann, Axel; Matzner, Martin

    In order to increase the diversity in IS education, we discuss an approach for teaching medium-sized ERP systems in master courses. Many of today's IS curricula are biased toward large ERP packages. Nevertheless, these ERP systems are only a part of the ERP market. Hence, this chapter describes a course outline for a course on medium-sized ERP systems. Students had to study, analyze, and compare five different ERP systems during a semester. The chapter introduces a procedure model and scenario for setting up similar courses at other universities. Furthermore, it describes some of the students' outcomes and evaluates the contribution of the course with regard to a practical but also academic IS education.

  6. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    PubMed

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantifying mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material, and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments.

  7. A new technology perspective and engineering tools approach for large, complex and distributed mission and safety critical systems components

    NASA Technical Reports Server (NTRS)

    Carrio, Miguel A., Jr.

    1988-01-01

    Rapidly emerging technology and methodologies have out-paced the systems development processes' ability to use them effectively, if at all. At the same time, the tools used to build systems are becoming obsolescent themselves as a consequence of the same technology lag that plagues systems development. The net result is that systems development activities have not been able to take advantage of available technology and have become equally dependent on aging and ineffective computer-aided engineering tools. New methods and tools approaches are essential if the demands of non-stop and Mission and Safety Critical (MASC) components are to be met.

  8. OCLC at OSU: The Effect of the Adoption of OCLC on the Management of Technical Services at a Large Academic Library

    ERIC Educational Resources Information Center

    Gapen, D. Kaye; Morita, Ichiko T.

    1978-01-01

    The organizational and procedural changes necessary to adapt the OCLC system in a large academic library over a five year period are described. Organization and work flow diagrams are presented in demonstrating new approaches to accessing authority files and the shelf-list. (Author)

  9. Enter the machine

    NASA Astrophysics Data System (ADS)

    Palittapongarnpim, Pantita; Sanders, Barry C.

    2018-05-01

    Quantum tomography infers quantum states from measurement data, but it becomes infeasible for large systems. Machine learning enables tomography of highly entangled many-body states and suggests a new powerful approach to this problem.

  10. Anticipating the emergence of infectious diseases

    PubMed Central

    Drake, John M.; Rohani, Pejman

    2017-01-01

    In spite of medical breakthroughs, the emergence of pathogens continues to pose threats to both human and animal populations. We present candidate approaches for anticipating disease emergence prior to large-scale outbreaks. Through use of ideas from the theories of dynamical systems and stochastic processes we develop approaches which are not specific to a particular disease system or model, but instead have general applicability. The indicators of disease emergence detailed in this paper can be classified into two parallel approaches: a set of early-warning signals based around the theory of critical slowing down and a likelihood-based approach. To test the reliability of these two approaches we contrast theoretical predictions with simulated data. We find good support for our methods across a range of different model structures and parameter values. PMID:28679666
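
    A minimal sketch of the critical-slowing-down indicators, rising lag-1 autocorrelation and variance in a rolling window, is given below (Python; the AR(1) surrogate series approaching criticality and the window length are illustrative assumptions, not the paper's disease models):

        import numpy as np

        def rolling_ews(series, window=100):
            ac1, var = [], []
            for i in range(len(series) - window):
                w = series[i:i + window] - np.mean(series[i:i + window])
                var.append(np.mean(w * w))               # rolling variance
                ac1.append(np.sum(w[:-1] * w[1:]) / np.sum(w * w))  # lag-1 autocorr
            return np.array(ac1), np.array(var)

        # AR(1) surrogate whose return rate decays toward criticality (rho -> 1).
        rng = np.random.default_rng(0)
        rho = np.linspace(0.2, 0.98, 2000)
        x = np.zeros(2000)
        for t in range(1, 2000):
            x[t] = rho[t] * x[t - 1] + rng.standard_normal()

        ac1, var = rolling_ews(x)
        print(f"autocorr {ac1[0]:.2f} -> {ac1[-1]:.2f}, "
              f"variance {var[0]:.2f} -> {var[-1]:.2f}")   # both rise before the shift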

  11. Direct heuristic dynamic programming for damping oscillations in a large power system.

    PubMed

    Lu, Chao; Si, Jennie; Xie, Xiaorong

    2008-08-01

    This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.
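
    The actor-critic structure underlying direct HDP can be sketched on a scalar toy plant (Python; the plant, the linear function approximators, and all step sizes are illustrative assumptions, not the paper's neural-network design):

        import numpy as np

        # Critic learns the cost-to-go J(x) ~ w*x^2 of the current policy by TD(0);
        # the actor gain descends the critic's action gradient.
        rng = np.random.default_rng(0)
        a, b = 0.99, 0.1          # slow, oscillation-prone mode (scalar stand-in)
        gamma, ru = 0.95, 0.1     # discount factor and control-effort weight
        w, k = 0.0, 0.0           # critic weight; actor gain, u = -k*x

        x = 1.0
        for _ in range(20000):
            u = -k * x
            x1 = a * x + b * u + 0.1 * rng.standard_normal()  # noisy plant step
            r = x * x + ru * u * u                            # instantaneous cost
            td = r + gamma * w * x1 * x1 - w * x * x
            w += 0.05 * td * x * x                            # critic: reduce TD error
            dQ_du = 2.0 * ru * u + 2.0 * gamma * w * b * (a * x + b * u)
            k += 0.01 * dQ_du * x          # actor: du/dk = -x, so descend dQ/du
            x = x1 if abs(x1) < 5 else rng.standard_normal()  # keep excitation bounded

        print("learned feedback gain k =", k)   # positive k damps the slow mode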

  12. On data modeling for neurological application

    NASA Astrophysics Data System (ADS)

    Woźniak, Karol; Mulawka, Jan

    The aim of this paper is to design and implement information system containing large database dedicated to support neurological-psychiatric examinations focused on human brain after stroke. This approach encompasses the following steps: analysis of software requirements, presentation of the problem solving concept, design and implementation of the final information system. Certain experiments were performed in order to verify the correctness of the project ideas. The approach can be considered as an interdisciplinary venture. Elaboration of the system architecture, data model and the tools supporting medical examinations are provided. The achievement of the design goals is demonstrated in the final conclusion.

  13. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
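
    A minimal sketch of the bottleneck and one remedy named above: a complex-valued sparse system solved by GMRES with a structure-aware block-Jacobi preconditioner (Python/SciPy; the surrogate matrix is an assumption for illustration, not an actual CEM operator):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, nb = 1200, 40
        # Complex, non-Hermitian surrogate system (stand-in for a hybrid CEM matrix).
        A = sp.diags([-1.0, 2.2 - 0.5j, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        A = (A + 0.1j * sp.random(n, n, density=1e-3, random_state=0)).tocsc()
        b = np.ones(n, dtype=complex)

        # Block-Jacobi preconditioner: invert each diagonal block independently.
        blocks = [np.linalg.inv(A[i:i + nb, i:i + nb].toarray())
                  for i in range(0, n, nb)]

        def apply_M(x):
            y = np.empty_like(x)
            for i, binv in zip(range(0, n, nb), blocks):
                y[i:i + nb] = binv @ x[i:i + nb]
            return y

        M = spla.LinearOperator((n, n), matvec=apply_M, dtype=complex)
        x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
        print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged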

  14. Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Russell, Carolyn; Ding, R. Jeffrey

    1998-01-01

    The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.

  15. Copying of holograms by spot scanning approach.

    PubMed

    Okui, Makoto; Wakunami, Koki; Oi, Ryutaro; Ichihashi, Yasuyuki; Jackin, Boaz Jessie; Yamamoto, Kenji

    2018-05-20

    To replicate holograms, contact copying has conventionally been used. In this approach, a photosensitive material is fixed together with a master hologram and illuminated with a coherent beam. This method is simple and enables high-quality copies; however, it requires a large optical setup for large-area holograms. In this paper, we present a new method of replicating holograms that uses a relatively compact optical system even for the replication of large holograms. A small laser spot that irradiates only part of the hologram is used to reproduce the hologram by scanning the spot over the whole area of the hologram. We report on the results of experiments carried out to confirm the copy quality, along with a guide to design scanning conditions. The results show the potential effectiveness of the large-area hologram replication technology using a relatively compact apparatus.

  16. Feedback, Goal Setting, and Incentives Effects on Organizational Productivity.

    ERIC Educational Resources Information Center

    Pritchard, Robert D.; And Others

    This technical paper is one of three produced by a large-scale effort aimed at implementing a new approach to measuring productivity, and using that approach to assess the impact of feedback, goal setting, and incentives on productivity. The productivity measurement system was developed for five units in the maintenance and supply areas at an Air…

  17. CONSTITUENCY IN A SYSTEMIC DESCRIPTION OF THE ENGLISH CLAUSE.

    ERIC Educational Resources Information Center

    HUDSON, R.A.

    Two ways of describing clauses in English are discussed in this paper. The first, termed the "few-IC's" approach, is a segmentation of the clause into a small number of immediate constituents which require a large number of further segmentations before the ultimate constituents are reached. The second, the "many-IC's" approach, is a segmentation into…

  18. Complexity, Robustness, and Multistability in Network Systems with Switching Topologies: A Hierarchical Hybrid Control Approach

    DTIC Science & Technology

    2015-05-22

    sensor networks for managing power levels of wireless networks; and air and ground transportation systems for air traffic control and payload transport. The applications covered span a broad spectrum, including cooperative control of unmanned air vehicles, autonomous underwater vehicles, and distributed sensor networks. Subject terms: network systems, large-scale systems, adaptive control, discontinuous systems.

  19. A New Approach to Design of an optimized Grid Tied Smart Solar Photovoltaic (PV) System

    NASA Astrophysics Data System (ADS)

    Farhad, M. Mehedi; Ali, M. Mohammad; Iqbal, M. Asif; Islam, N. Nahar; Ashraf, N.

    2012-11-01

    Energy is a key element in the economic development of a country. Given the increasing global demand for renewable energy (RE), it is important to reduce the cost of the whole solar photovoltaic (PV) system, since most solar PV systems are still highly expensive. In this paper we show that a grid-tied solar system can be developed by omitting the energy storage device, i.e., the large-capacity battery bank. This not only eliminates the internal losses from charging and discharging the battery bank but also removes the large cost of the battery itself, so the system maintenance cost is reduced as well. We propose a new approach to designing a photovoltaic (PV) solar power system that operates by feeding the solar power to the national grid alongside the residential load. If the residential load demands more power than the solar array supplies, the system can also draw power from the national grid. The whole system is controlled with the help of sensors and a microcontroller. As a whole, a significant reduction in system cost and efficient system performance can be realized.
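
    The battery-less control logic reduces to a simple dispatch rule (Python; the power values and the sensor readings are illustrative assumptions):

        def dispatch(pv_watts, load_watts):
            """Return (grid_flow, action): positive flow exported, negative imported."""
            surplus = pv_watts - load_watts
            if surplus >= 0:
                return surplus, "export surplus to grid"
            return surplus, "import deficit from grid"

        # Midday vs. evening sensor readings (hypothetical values).
        for pv, load in [(3200, 1800), (400, 1500)]:
            flow, action = dispatch(pv, load)
            print(f"PV={pv} W, load={load} W -> {action} ({abs(flow)} W)")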

  20. Geometric quantification of features in large flow fields.

    PubMed

    Kendall, Wesley; Huang, Jian; Peterka, Tom

    2012-01-01

    Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
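
    As a concrete example of the neighborhood-based geometric attributes such a system can compute, the arc length and a discrete curvature of a traced field line (stored as a polyline) follow directly from its vertices. This is a generic sketch, not the paper's implementation:

        import numpy as np

        def arc_length(points: np.ndarray) -> float:
            """Total length of a polyline given as an (n, 3) array of vertices."""
            return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

        def turning_angles(points: np.ndarray) -> np.ndarray:
            """Turning angle (radians) at each interior vertex, a simple curvature proxy."""
            seg = np.diff(points, axis=0)
            seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
            cosang = np.clip((seg[:-1] * seg[1:]).sum(axis=1), -1.0, 1.0)
            return np.arccos(cosang)

        line = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1]], dtype=float)
        print(arc_length(line), turning_angles(line))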

  1. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    NASA Astrophysics Data System (ADS)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
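
    At evaluation time, the Takagi-Sugeno inference at the heart of such a neuro-fuzzy system reduces to a firing-strength-weighted average of linear rule outputs. The membership centers, widths, and rule coefficients below are illustrative placeholders, not parameters from the study; in ANFIS they would be tuned by the hybrid learning procedure:

        import numpy as np

        def gauss(x, c, s):
            """Gaussian membership function with center c and width s."""
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        def takagi_sugeno(x1, x2):
            """Two-rule first-order Takagi-Sugeno system on a 2-component feature vector."""
            w = np.array([
                gauss(x1, 0.2, 0.3) * gauss(x2, 0.8, 0.3),  # rule 1 firing strength
                gauss(x1, 0.9, 0.3) * gauss(x2, 0.1, 0.3),  # rule 2 firing strength
            ])
            f = np.array([
                0.5 * x1 + 0.3 * x2 + 0.1,                  # rule 1 linear consequent
                0.9 * x1 - 0.2 * x2 + 0.4,                  # rule 2 linear consequent
            ])
            return float((w * f).sum() / w.sum())           # weighted-average defuzzification

        print(takagi_sugeno(0.3, 0.7))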

  2. Large-N kinetic theory for highly occupied systems

    NASA Astrophysics Data System (ADS)

    Walz, R.; Boguslavski, K.; Berges, J.

    2018-06-01

    We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.

  3. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

    This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large, complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs that are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, about various operating scenarios, and about identification of the scenarios with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples applying DoE to the analysis and verification of the ISS power system are provided.
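
    The screening idea behind a DoE study of this kind can be sketched with a two-level full factorial design: each parameter takes a low or high value and a stability metric is evaluated at every combination. The parameter names and the margin function below are hypothetical stand-ins for the actual converter models and impedance criteria:

        from itertools import product

        # Hypothetical low/high levels for three system parameters.
        factors = {
            "source_impedance_ohm": (0.05, 0.20),
            "filter_capacitance_uF": (100.0, 470.0),
            "load_power_kW": (1.0, 5.0),
        }

        def stability_margin(cfg: dict) -> float:
            """Placeholder for a simulation run that returns a stability margin."""
            return cfg["filter_capacitance_uF"] / (
                cfg["source_impedance_ohm"] * cfg["load_power_kW"] * 1e3)

        names = list(factors)
        for levels in product(*(factors[n] for n in names)):  # 2**3 = 8 runs
            cfg = dict(zip(names, levels))
            flag = "OK" if stability_margin(cfg) > 1.0 else "CHECK FOR INSTABILITY"
            print(cfg, flag)

    A fractional factorial design would cut the run count further, which is the saving the abstract attributes to DoE when hundreds of converters are involved.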

  4. A Historical-Materialist Critique of Intercultural Communication Instruction

    ERIC Educational Resources Information Center

    Keshishian, Flora

    2005-01-01

    Using an interdisciplinary approach and historical-materialist perspective, this article argues that economic systems largely influence and maintain the operation of all cultural activity. It contends that today's world has become increasingly dependent on a global economy dominated by capitalism--a profit-driven system--that has come to influence…

  5. Bridging Developmental Systems Theory and Evolutionary Psychology Using Dynamic Optimization

    ERIC Educational Resources Information Center

    Frankenhuis, Willem E.; Panchanathan, Karthik; Clark Barrett, H.

    2013-01-01

    Interactions between evolutionary psychologists and developmental systems theorists have been largely antagonistic. This is unfortunate because potential synergies between the two approaches remain unexplored. This article presents a method that may help to bridge the divide, and that has proven fruitful in biology: dynamic optimization. Dynamic…

  6. Enabling the Interoperability of Large-Scale Legacy Systems

    DTIC Science & Technology

    2008-01-01

    …information retrieval systems (Salton and McGill 1983). We use this method because, in the schema mapping task, only one instance per class is… (2001). A survey of approaches to automatic schema matching. The VLDB Journal, 10, 334-350. Salton, G., & McGill, M.J. (1983). Introduction to…

  7. Creating Shared Instructional Products: An Alternative Approach to Improving Teaching

    ERIC Educational Resources Information Center

    Morris, Anne K.; Hiebert, James

    2011-01-01

    To solve two enduring problems in education--unacceptably large variation in learning opportunities for students across classrooms and little continuing improvement in the quality of instruction--the authors propose a system that centers on the creation of shared instructional products that guide classroom teaching. By examining systems outside…

  8. A distributed finite-element modeling and control approach for large flexible structures

    NASA Technical Reports Server (NTRS)

    Young, K. D.

    1989-01-01

    An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice, which begins with a model of the open-loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well-developed component mode synthesis (CMS) methods, which have been demonstrated to be effective for solving large, complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.

  9. Enabling Cross-Platform Clinical Decision Support through Web-Based Decision Support in Commercial Electronic Health Record Systems: Proposal and Evaluation of Initial Prototype Implementations

    PubMed Central

    Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku

    2013-01-01

    Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
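
    A standardized EHR API of the kind the authors anticipate is exemplified today by HL7 FHIR REST endpoints. The sketch below shows only the general call pattern a Web-based CDS service might use to pull patient data; the server URL, token, and patient ID are hypothetical:

        import requests

        BASE = "https://ehr.example.org/fhir"           # hypothetical FHIR endpoint
        headers = {"Authorization": "Bearer <token>",   # placeholder credential
                   "Accept": "application/fhir+json"}

        # Fetch recent blood-pressure observations (LOINC 85354-9) for a
        # hypothetical patient, the kind of input a CDS rule would evaluate.
        resp = requests.get(f"{BASE}/Observation",
                            params={"patient": "12345", "code": "85354-9", "_count": 10},
                            headers=headers)
        resp.raise_for_status()
        for entry in resp.json().get("entry", []):
            print(entry["resource"].get("effectiveDateTime"))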

  10. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement over basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
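
    For reference, the LQG baseline the authors compare against alternates a Kalman-filter state estimate with linear-quadratic feedback on that estimate. A minimal discrete-time sketch for a generic linear system (all matrices and the gain are illustrative, not from the aquifer problem):

        import numpy as np

        # Generic discrete-time system x' = A x + B u + w,  y = C x + v.
        A = np.array([[1.0, 0.1], [0.0, 0.9]])
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])
        Q, R = np.eye(2) * 1e-3, np.eye(1) * 1e-2   # process / measurement noise covariances
        K = np.array([[0.8, 1.2]])                  # precomputed LQR gain (illustrative)

        x_hat, P = np.zeros((2, 1)), np.eye(2)
        for y in [1.0, 0.8, 0.7]:                   # incoming noisy measurements
            u = -K @ x_hat                          # certainty-equivalent control
            x_hat = A @ x_hat + B @ u               # predict
            P = A @ P @ A.T + Q
            G = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)        # Kalman gain
            x_hat = x_hat + G @ (np.array([[y]]) - C @ x_hat)   # correct
            P = (np.eye(2) - G @ C) @ P
        print(x_hat.ravel())

    Propagating the full covariance P requires O(n^2) storage for n unknowns, which reflects the quadratic scaling the RFC is designed to avoid.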

  11. Modeling and simulation of reliability of unmanned intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.

    2008-04-01

    Unmanned ground vehicles have a large number of scientific, military, and commercial applications. A convoy of such vehicles can exhibit collaboration and coordination. For the movement of such a convoy, it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining the reliability of a system. Graph-theoretic approaches are popular for determining terminal reliability and system reliability. In this paper we propose to exploit fuzzy and neuro-fuzzy approaches for predicting the node and branch reliabilities of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches (fuzzy and neuro-fuzzy) and Boolean approaches is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles, while branch reliabilities determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliabilities of a convoy of vehicles, and simulation of the overall system is proposed. Such simulation should help the commander take appropriate action depending on the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to further techniques for maintaining a reliable convoy of vehicles in a battlefield.
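
    The Boolean-algebra step of such an analysis reduces, for series-parallel segments of the convoy network, to products and complements of node and branch reliabilities. A generic sketch with invented values:

        def series(*r):
            """All components must work: multiply reliabilities."""
            out = 1.0
            for x in r:
                out *= x
            return out

        def parallel(*r):
            """At least one redundant path must work: complement of all failing."""
            out = 1.0
            for x in r:
                out *= (1.0 - x)
            return 1.0 - out

        # Two redundant communication paths between lead and trailing vehicle,
        # each a series combination of a node and a branch reliability.
        path_a = series(0.98, 0.95)      # illustrative values
        path_b = series(0.97, 0.93)
        print(parallel(path_a, path_b))  # terminal reliability, about 0.993

    In the paper's scheme, the node and branch values fed into such expressions would come from the fuzzy and neuro-fuzzy predictors rather than being fixed constants.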

  12. OVMS-plus at the LBT: disturbance compensation simplified

    NASA Astrophysics Data System (ADS)

    Böhm, Michael; Pott, Jörg-Uwe; Borelli, José; Hinz, Phil; Defrère, Denis; Downey, Elwood; Hill, John; Summers, Kellee; Conrad, Al; Kürster, Martin; Herbst, Tom; Sawodny, Oliver

    2016-07-01

    In this paper we briefly revisit the optical vibration measurement system (OVMS) at the Large Binocular Telescope (LBT) and describe how its measurements are used for disturbance compensation, particularly for the LBT Interferometer (LBTI) and the LBT Interferometric Camera for Near-Infrared and Visible Adaptive Interferometry for Astronomy (LINC-NIRVANA). We present the now-centralized software architecture, called OVMS+, on which our approach is based and illustrate several challenges faced during the implementation phase. Finally, we present measurement results from LBTI demonstrating the effectiveness of the approach and the ability to compensate for a large fraction of the telescope-induced vibrations.

  13. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6R^-6 terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function so that the (long-range) dispersion term can be mixed with the (short-range) dispersion and exchange-repulsion effects already present in the electronic structure theory methods. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
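
    For orientation, the pair-wise term discussed above typically has the damped form below, written here in generic LaTeX notation rather than any particular parameterization; the Fermi-type damping function shown is one common choice of cutoff:

        E_{\mathrm{disp}} = -\sum_{i<j} f_{\mathrm{damp}}(R_{ij}) \, \frac{C_6^{ij}}{R_{ij}^{6}},
        \qquad
        f_{\mathrm{damp}}(R_{ij}) = \frac{1}{1 + e^{-d\,(R_{ij}/R_{r}^{ij} - 1)}}

    Here C_6^{ij} is the pair dispersion coefficient, R_{ij} the interatomic distance, R_r^{ij} a reference (vdW) radius, and d a steepness parameter; the damping switches the correction off at short range where the electronic structure method already describes the interaction.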

  14. Strategic partnering to improve community health worker programming and performance: features of a community-health system integrated approach.

    PubMed

    Naimoli, Joseph F; Perry, Henry B; Townsend, John W; Frymus, Diana E; McCaffery, James A

    2015-09-01

    There is robust evidence that community health workers (CHWs) in low- and middle-income (LMIC) countries can improve their clients' health and well-being. The evidence on proven strategies to enhance and sustain CHW performance at scale, however, is limited. Nevertheless, CHW stakeholders need guidance and new ideas, which can emerge from the recognition that CHWs function at the intersection of two dynamic, overlapping systems - the formal health system and the community. Although each typically supports CHWs, their support is not necessarily strategic, collaborative or coordinated. We explore a strategic community health system partnership as one approach to improving CHW programming and performance in countries with or intending to mount large-scale CHW programmes. To identify the components of the approach, we drew on a year-long evidence synthesis exercise on CHW performance, synthesis records, author consultations, documentation on large-scale CHW programmes published after the synthesis and other relevant literature. We also established inclusion and exclusion criteria for the components we considered. We examined as well the challenges and opportunities associated with implementing each component. We identified a minimum package of four strategies that provide opportunities for increased cooperation between communities and health systems and address traditional weaknesses in large-scale CHW programmes, and for which implementation is feasible at sub-national levels over large geographic areas and among vulnerable populations in the greatest need of care. We postulate that the CHW performance benefits resulting from the simultaneous implementation of all four strategies could outweigh those that either the health system or community could produce independently. The strategies are (1) joint ownership and design of CHW programmes, (2) collaborative supervision and constructive feedback, (3) a balanced package of incentives, and (4) a practical monitoring system incorporating data from communities and the health system. We believe that strategic partnership between communities and health systems on a minimum package of simultaneously implemented strategies offers the potential for accelerating progress in improving CHW performance at scale. Comparative, retrospective and prospective research can confirm the potential of these strategies. More experience and evidence on strategic partnership can contribute to our understanding of how to achieve sustainable progress in health with equity.

  15. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface; a similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used for designing both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that outlines a logical, effective way to develop and integrate advancing technologies.

  16. DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware.

    PubMed

    Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond

    2016-01-01

    To deal with the large number of malicious mobile applications (e.g., mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e., sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison with two other hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO).
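
    The PSO half of the hybrid can be illustrated with a minimal particle swarm minimizing a stand-in objective; in ANFIS-PSO that objective would be the network's prediction error over the training data. The constants are conventional defaults, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(x):
            """Stand-in for the ANFIS training error as a function of its parameters."""
            return float(np.sum(x ** 2))

        n_particles, dim, iters = 20, 4, 100
        w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration weights
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            vals = np.array([objective(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()

        print(gbest, pbest_val.min())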

  17. DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware

    PubMed Central

    Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin

    2016-01-01

    To deal with the large number of malicious mobile applications (e.g., mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e., sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison with two other hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO). PMID:27611312

  18. Sharing the responsibility for driver distraction across road transport systems: a systems approach to the management of distracted driving.

    PubMed

    Young, Kristie L; Salmon, Paul M

    2015-01-01

    Distracted driving is acknowledged universally as a large and growing road safety problem. Compounding the problem is that distracted driving is a complex, multifaceted issue influenced by a multitude of factors, organisations and individuals. As such, management of the problem is not straightforward. Numerous countermeasures have been developed and implemented across the globe. The vast majority of these measures have derived from the traditional reductionist, driver-centric approach to distraction and have failed to fully reflect the complex mix of actors and components that give rise to drivers becoming distracted. An alternative approach that is gaining momentum in road safety is the systems approach, which considers all components of the system and their interactions as an integrated whole. In this paper, we review the current knowledge base on driver distraction and argue that the systems approach is not currently being realised in practice. Adopting a more holistic, systems approach to distracted driving will not only improve existing knowledge and interventions from the traditional approach, but will enhance our understanding and management of distraction by considering the complex relationships and interactions of the multiple actors and the myriad sources, enablers and interventions that make up the distracted driving system. It is only by recognising and understanding how all of the system components work together to enable distraction to occur, that we can start to work on solutions to help mitigate the occurrence and consequences of distracted driving. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
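
    The core strategy, training a Random Forest on a manually annotated subset to map image descriptors to architectural traits and then applying it to the full dataset, can be sketched with scikit-learn. The arrays are synthetic placeholders for the extracted descriptors and measured traits:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)
        descriptors = rng.random((500, 30))           # one row of descriptors per root image
        trait = descriptors[:, :2] @ rng.random(2)    # synthetic "measured" trait

        # Train on the annotated subset, then infer the trait for the remainder.
        n_train = 100
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(descriptors[:n_train], trait[:n_train])
        predicted = model.predict(descriptors[n_train:])
        print(predicted[:5])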

  20. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  1. Large-area photogrammetry based testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the removal of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades to extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over the large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g., lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing an output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed according to the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a limited number of locations on the large blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) in measuring 3D displacement and extracting structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
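
    Stitching two stereo-vision fields of view reduces to estimating the rigid transform that maps points seen by both systems onto each other; the standard SVD (Kabsch) solution is sketched below on synthetic data. This is a generic alignment routine, not the authors' stitching code:

        import numpy as np

        def rigid_align(src, dst):
            """Rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)     # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, dst_c - R @ src_c

        # Common targets seen by both camera pairs (synthetic example).
        pts = np.random.default_rng(1).random((10, 3))
        R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        R, t = rigid_align(pts, pts @ R_true.T + np.array([0.1, 0.2, 0.0]))
        print(np.allclose(R, R_true), t)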

  2. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
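
    At the heart of the MSTC technique is k-means cluster analysis; the serial skeleton that the hybrid MPI/CUDA/OpenACC implementation parallelizes looks roughly as follows (a plain NumPy sketch, not the Titan code). The distance computation in the assignment step dominates the cost and is what benefits from GPU offload:

        import numpy as np

        def kmeans(X, k, iters=50, seed=0):
            """Lloyd's algorithm: alternate nearest-centroid assignment and update."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # Assign each observation to its nearest centroid.
                dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # Recompute centroids; keep the old center if a cluster empties.
                centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels, centers

        X = np.random.default_rng(3).random((1000, 5))  # stand-in for multivariate records
        labels, centers = kmeans(X, k=4)
        print(np.bincount(labels))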

  3. eHealth provides a novel opportunity to exploit the advantages of the Nordic countries in psychiatric genetic research, building on the public health care system, biobanks, and registries.

    PubMed

    Andreassen, Ole A

    2017-07-07

    Nordic countries have played an important role in the recent progress in psychiatric genetics, with both large well-characterized samples and expertise. The Nordic countries have research advantages due to the organization of their societies, including systems of personal identifiers, national health registries with information about diseases, treatment and prescriptions, and a public health system with geographical catchment areas. For psychiatric genetic research, the large biobanks and population surveys are a unique added value. Further, the population is motivated to participate in research, and there is trust in the institutions of society. These factors have been important for Nordic contributions to biomedical research, and particularly psychiatric genetics. In the era of eHealth, the situation seems even more advantageous for Nordic countries. The system of public health care makes it easy to implement national measures, and most of the Nordic health care sector is already based on electronic information. The potential advantages regarding informed consent, large-scale recruitment and follow-up, and longitudinal cohort studies are tremendous. New precision medicine approaches can be tested within the health care system, with an integrated approach, using large hospitals or regions of the country as test beds. However, data protection and the legal framework have to be clarified. In order to succeed, it is important to keep the people's trust and maintain high ethical standards and systems for secure data management. Then the full potential of the Nordic countries can be leveraged in the new era of precision medicine, including psychiatric genetics. © 2017 Wiley Periodicals, Inc.

  4. LOX/hydrocarbon auxiliary propulsion system study

    NASA Technical Reports Server (NTRS)

    Orton, G. F.; Mark, T. D.; Weber, D. D.

    1982-01-01

    Liquid oxygen/hydrocarbon propulsion systems applicable to a second generation orbiter OMS/RCS were compared, and major system/component options were evaluated. A large number of propellant combinations and system concepts were evaluated. The ground rules were defined in terms of candidate propellants, system/component design options, and design requirements. System and engine component math models were incorporated into existing computer codes for system evaluations. The detailed system evaluations and comparisons were performed to identify the recommended propellant combination and system approach.

  5. Examining Food Risk in the Large using a Complex, Networked System-of-Systems Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, John; Newkirk, Ryan; Mc Donald, Mark P

    2010-12-03

    The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and commodity flow models are instantiated, a risk consequence analysis can be performed by injecting contaminant at chosen points in the system and propagating the event through the overarching system to arrive at morbidity and mortality figures. A generic chocolate snack cake model, consisting of fluid milk, liquid eggs, and cocoa, is described as an intended proof of concept for multi-ingredient food systems. We aim for an eventual tool that can be used directly by policy makers and planners.

  6. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system-level and multiple subsystems-level optimizations. The system-level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system-level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system-level optimization. In previous research, polynomials were used to approximate the global nodal displacements, and the number of polynomial coefficients exactly matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to match only a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system-level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives, and that omitting verification gives a large increase in the efficiency of the DMSO algorithm.

  7. Overview of magnetic suspension research at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Groom, Nelson J.

    1992-01-01

    An overview of research in small- and large-gap magnetic suspension systems at LaRC is presented. The overview is limited to systems which have been built as laboratory models or engineering models. Small-gap systems applications include the Annular Momentum Control Device (AMCD), which is a momentum storage device for the stabilization and control of spacecraft, and the Annular Suspension and Pointing System (ASPS), which is a general-purpose pointing mount designed to provide orientation, mechanical isolation, and fine pointing for space experiments. These devices are described, and control and linearization approaches for their magnetic suspension systems are discussed. Large-gap systems applications at LaRC have been almost exclusively wind tunnel magnetic suspension systems. A brief description of these efforts is also presented.

  8. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations for similar efforts at the regional scale.

  9. Adaptive structures for precision controlled large space systems

    NASA Technical Reports Server (NTRS)

    Garba, John A.; Wada, Ben K.; Fanson, James L.

    1991-01-01

    The stringent accuracy and ground test validation requirements of some future space missions will require new approaches in structural design. Adaptive structures, structural systems that can vary their geometric configuration as well as their physical properties, are primary candidates for meeting the functional requirements of such missions. Research performed in the development of such adaptive structural systems is described.

  10. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  11. A Survey of Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan

    1999-01-01

    This chapter presents the science of "COllective INtelligence" (COIN). A COIN is a large multi-agent system where: i) the agents each run reinforcement learning (RL) algorithms; ii) there is little to no centralized communication or control; iii) there is a provided world utility function that rates the possible histories of the full system. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the overall system's dynamics, followed by detailed hand-tuning of the interactions between the components to ensure that they "cooperate" as far as the world utility is concerned. This approach is labor-intensive, often results in highly non-robust systems, and usually results in design techniques that have limited applicability. In contrast, with COINs we wish to solve the system design problems implicitly, via the 'adaptive' character of the RL algorithms of each of the agents. This COIN approach introduces an entirely new, profound design problem: assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons or Braess's paradox? Although still very young, the science of COINs has already resulted in successes in artificial domains, in particular in packet routing, the leader-follower problem, and variants of Arthur's "El Farol" bar problem. It is expected that as it matures, COIN science will not only expand greatly the range of tasks addressable by human engineers, but also provide much insight into already established scientific fields such as economics, game theory, and population biology.

  12. Theoretical approaches to the steady-state statistical physics of interacting dissipative units

    NASA Astrophysics Data System (ADS)

    Bertin, Eric

    2017-02-01

    The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.

  13. Optimization and Control of Agent-Based Models in Biology: A Perspective.

    PubMed

    An, G; Fitzpatrick, B G; Christley, S; Federico, P; Kanarek, A; Neilan, R Miller; Oremland, M; Salinas, R; Laubenbacher, R; Lenhart, S

    2017-01-01

    Agent-based models (ABMs) have become an increasingly important mode of inquiry for the life sciences. They are particularly valuable for systems that are not understood well enough to build an equation-based model. These advantages, however, are counterbalanced by the difficulty of analyzing and using ABMs, due to the lack of the type of mathematical tools available for more traditional models, which leaves simulation as the primary approach. As models become large, simulation becomes challenging. This paper proposes a novel approach to two mathematical aspects of ABMs, optimization and control, and it presents a few first steps outlining how one might carry out this approach. Rather than viewing the ABM as a model, it is to be viewed as a surrogate for the actual system. For a given optimization or control problem (which may change over time), the surrogate system is modeled instead, using data from the ABM and a modeling framework for which ready-made mathematical tools exist, such as differential equations, or for which control strategies can be explored more easily. Once the optimization problem is solved for the model of the surrogate, it is then lifted to the surrogate and tested. The final step is to lift the optimization solution from the surrogate system to the actual system. This program is illustrated with published work, using two relatively simple ABMs as a demonstration, Sugarscape and a consumer-resource ABM. Specific techniques discussed include dimension reduction and approximation of an ABM by difference equations as well as by systems of PDEs, related to certain specific control objectives. This demonstration illustrates the very challenging mathematical problems that need to be solved before this approach can be realistically applied to complex and large ABMs, current and future. The paper outlines a research program to address them.

  14. Gray-Box Approach for Thermal Modelling of Buildings for Applications in District Heating and Cooling Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saurav, Kumar; Chandan, Vikas

    District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components, such as buildings, pipes, valves, and heating sources, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach, and it results in simpler models than pure physics-based ones. We first develop the individual components of the building, such as temperature evolution and flow control. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
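
    A common instance of such a gray-box building model is a first-order RC network: physics fixes the form of the equation and data fix its coefficients. A minimal sketch on synthetic data (the paper's model has more components, e.g., the flow controller):

        import numpy as np

        # Discretized 1R1C model: T[t+1] = T[t] + a*(T_out[t] - T[t]) + b*Q[t],
        # with a = dt/(R*C) and b = dt/C to be learned from data.
        rng = np.random.default_rng(0)
        n, dt = 500, 300.0                           # 5-minute samples
        T_out = 5.0 + 3.0 * rng.standard_normal(n)   # outdoor temperature, degC
        Q = np.abs(rng.standard_normal(n)) * 2000.0  # heating power, W
        a_true, b_true = 0.02, 1.0e-5
        T = np.empty(n); T[0] = 20.0
        for t in range(n - 1):
            T[t + 1] = T[t] + a_true * (T_out[t] - T[t]) + b_true * Q[t]

        # Least-squares estimate of a and b from the observed trajectories.
        X = np.column_stack([T_out[:-1] - T[:-1], Q[:-1]])
        (a_hat, b_hat), *_ = np.linalg.lstsq(X, np.diff(T), rcond=None)
        print(a_hat, b_hat)                          # recovers ~0.02 and ~1e-5
        print("R*C =", dt / a_hat, "s;  C =", dt / b_hat, "J/K")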

  15. Organic field effect transistor with ultra high amplification

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio

    2016-09-01

    High-gain transistors are essential for large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils that enable unipolar amplifiers with ultra-high gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.

  16. Simulating Metabolism with Statistical Thermodynamics

    PubMed Central

    Cannon, William R.

    2014-01-01

    New methods are needed for large scale modeling of metabolism that predict metabolite levels and characterize the thermodynamics of individual reactions and pathways. Current approaches use either kinetic simulations, which are difficult to extend to large networks of reactions because of the need for rate constants, or flux-based methods, which have a large number of feasible solutions because they are unconstrained by the law of mass action. This report presents an alternative modeling approach based on statistical thermodynamics. The principles of this approach are demonstrated using a simple set of coupled reactions, and then the system is characterized with respect to the changes in energy, entropy, free energy, and entropy production. Finally, the physical and biochemical insights that this approach can provide for metabolism are demonstrated by application to the tricarboxylic acid (TCA) cycle of Escherichia coli. The reaction and pathway thermodynamics are evaluated and predictions are made regarding changes in concentration of TCA cycle intermediates due to 10- and 100-fold changes in the ratio of NAD+:NADH concentrations. Finally, the assumptions and caveats regarding the use of statistical thermodynamics to model non-equilibrium reactions are discussed. PMID:25089525
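
    The concentration dependence exploited in the NAD+:NADH predictions is, in essence, the standard mass-action relation for reaction free energy (generic form, not a formula quoted from the paper); in LaTeX notation:

        \Delta G = \Delta G^{\circ} + RT \ln Q, \qquad Q = \prod_i a_i^{\nu_i}

    where the a_i are activities and the stoichiometric coefficients \nu_i are positive for products and negative for reactants. A 10-fold shift in the NAD+:NADH ratio therefore changes the driving force of an NAD-coupled step by RT ln 10, roughly 5.7 kJ/mol at 298 K.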

  17. Simulating metabolism with statistical thermodynamics.

    PubMed

    Cannon, William R

    2014-01-01

    New methods are needed for large scale modeling of metabolism that predict metabolite levels and characterize the thermodynamics of individual reactions and pathways. Current approaches use either kinetic simulations, which are difficult to extend to large networks of reactions because of the need for rate constants, or flux-based methods, which have a large number of feasible solutions because they are unconstrained by the law of mass action. This report presents an alternative modeling approach based on statistical thermodynamics. The principles of this approach are demonstrated using a simple set of coupled reactions, and then the system is characterized with respect to the changes in energy, entropy, free energy, and entropy production. Next, the physical and biochemical insights that this approach can provide for metabolism are demonstrated by application to the tricarboxylic acid (TCA) cycle of Escherichia coli. The reaction and pathway thermodynamics are evaluated and predictions are made regarding changes in concentration of TCA cycle intermediates due to 10- and 100-fold changes in the ratio of NAD+:NADH concentrations. Finally, the assumptions and caveats regarding the use of statistical thermodynamics to model non-equilibrium reactions are discussed.

  18. A KPI-based process monitoring and fault detection framework for large-scale processes.

    PubMed

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
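
    For the static case, the least-squares link can be sketched as follows: regress the KPI on the process variables using fault-free data, then alarm when the prediction residual exceeds a threshold. The synthetic data and the 3-sigma threshold are illustrative assumptions, not the paper's test statistic:

    ```python
    # Static KPI monitoring sketch: least-squares regression of the KPI on the
    # process variables, residual-based fault detection. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 1000, 5
    X_train = rng.normal(size=(n, p))                    # process variables
    w_true = rng.normal(size=p)
    y_train = X_train @ w_true + rng.normal(0, 0.1, n)   # fault-free KPI

    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    resid = y_train - X_train @ w
    thresh = 3 * resid.std()                             # 3-sigma alarm limit

    # Online test: a sample whose KPI is biased away from its prediction.
    x_new = rng.normal(size=p)
    y_new = x_new @ w_true + 1.0                         # additive KPI fault
    print("fault detected:", abs(y_new - x_new @ w) > thresh)
    ```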

  19. Agrobacterium and Tumor Induction: A Model System.

    ERIC Educational Resources Information Center

    Lennox, John E.

    1980-01-01

    The author offers laboratory procedures for experiments using the bacterium Agrobacterium tumefaciens, which causes crown gall disease in a large number of plants. Three different approaches to growing a culture are given. (SA)

  20. Initial experience with endoscopic side cutting aspiration system in pure neuroendoscopic excision of large intraventricular tumors.

    PubMed

    Mohanty, Aaron; Thompson, Bobbye Jo; Patterson, Joel

    2013-11-01

    Conventionally, neuroendoscopic excision of intraventricular tumors has been difficult and time consuming because of the lack of an effective decompression system that can be used through the working channel of the endoscope. The authors report their initial experience in purely endoscopic excision of large intraventricular tumors with the minimally invasive NICO Myriad system. The NICO Myriad is a side cutting soft tissue aspiration system that uses an inner reciprocating cannula and an outer stationary sheath with a side port. During decompression, applied suction approximates the tumor into the lumen of the outer sheath, with the inner cannula excising the tissue by oscillation of the cutting edge. The tumor is then removed by aspiration through the inner sheath. Three patients with large intraventricular tumors were operated on using a purely endoscopic approach with a GAAB rigid endoscope and the NICO Myriad system. Of these, two had intraventricular craniopharyngiomas and one had a lateral ventricular subependymoma. The tumor size varied between 1.9 and 4.5 cm in the largest diameter. A relatively firm and solid tumor was encountered in two and a multicystic tumor with thick adherent walls in one. The tumor could be subtotally removed in one and near-totally in two. There were no long-term complications. The NICO Myriad is a highly effective tumor decompression system that can be effectively used in a purely endoscopic approach to intraventricular lesions. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Application of high level wavefunction methods in quantum mechanics/molecular mechanics hybrid schemes.

    PubMed

    Mata, Ricardo A

    2010-05-21

    In this Perspective, several developments in the field of quantum mechanics/molecular mechanics (QM/MM) approaches are reviewed. Emphasis is placed on the use of correlated wavefunction theory and new state of the art methods for the treatment of large quantum systems. Until recently, computational chemistry approaches to large/complex chemical problems have seldom been considered as tools for quantitative predictions. However, due to the tremendous development of computational resources and new quantum chemical methods, it is nowadays possible to describe the electronic structure of biomolecules at levels of theory which a decade ago were only possible for system sizes of up to 20 atoms. These advances are here outlined in the context of QM/MM. The article concludes with a short outlook on upcoming developments and possible bottlenecks for future applications.

  2. Design and Control of Large Collections of Learning Agents

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian

    2001-01-01

    The intelligent control of multiple autonomous agents is an important yet difficult task. Previous methods used to address this problem have proved to be either too brittle, too hard to use, or not scalable to large systems. The 'Collective Intelligence' project at NASA/Ames provides an elegant, machine-learning approach to address these problems. This approach mathematically defines some essential properties that a reward system should have to promote coordinated behavior among reinforcement learners. This work has focused on creating additional key properties and algorithms within the mathematics of the Collective Intelligence framework. One of the additions will allow agents to learn more quickly, in a more coordinated manner. The other will let agents learn with less knowledge of their environment. These additions will allow the framework to be applied more easily, to a much larger domain of multi-agent problems.

  3. Querying databases of trajectories of differential equations: Data structures for trajectories

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1989-01-01

    One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
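
    A toy version of such a database and query, assuming trajectories sampled on a common time grid and compared in the sup norm; the stored flows and the tolerance are invented for illustration:

    ```python
    # Store trajectories of a dynamical system on a common time grid and answer
    # "which stored trajectories lie within eps of a query path in the sup norm".
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 2 * np.pi, 200)

    # Database: 20 trajectories in R^2 (circles of random radius as stand-ins).
    radii = rng.uniform(0.5, 1.5, size=20)
    db = np.stack([np.stack([a * np.cos(t), a * np.sin(t)], axis=1) for a in radii])

    def query(path, eps):
        # sup-norm distance between each stored trajectory and the query path
        d = np.max(np.linalg.norm(db - path, axis=2), axis=1)
        return np.flatnonzero(d < eps)

    eta = np.stack([np.cos(t), np.sin(t)], axis=1)   # query path in R^2
    print("matches within eps = 0.1:", query(eta, eps=0.1))
    ```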

  4. Delay Assessment Framework for Automated Question-Answering System: An Approach for eLearning Paradigm

    ERIC Educational Resources Information Center

    Iqbal, Muhammad Munwar; Saleem, Yasir

    2017-01-01

    Adoption of Electronic Learning (eLearning) for the dissemination of higher education is rapidly increasing day by day. A large number of universities offer hundreds of courses, and a large number of students take advantage of this type of learning paradigm. The purpose of this study is to investigate the delay factor in answering the…

  5. Mission planning for large microwave radiometers

    NASA Technical Reports Server (NTRS)

    Schartel, W. A.

    1984-01-01

    Earth orbiting, remote sensing platforms that use microwave radiometers as sensors are susceptible to data interpretation difficulties. The capability of the large microwave radiometer (LMR) was augmented with the inclusion of auxiliary sensors that expand and enhance the LMR capability. The final system configuration demonstrates a holistic approach to the design of future orbiting remote sensing platforms that use an LMR as the core instrument.

  6. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.

  7. Theoretical comparison and experimental test of the secular and nonperturbative approaches on the ESR lineshapes of randomly oriented, anisotropic systems undergoing internal motion

    NASA Astrophysics Data System (ADS)

    Benetis, N. P.; Sjöqvist, L.; Lund, A.; Maruani, J.

    The nuclear Zeeman and the electronic nonsecular parts of the spin Hamiltonian complicate the ESR lineshape of exchanging anisotropic spin systems by introducing, at high field, "forbidden" transitions and, at low field, additional shift and splitting. We compare the nonperturbative with the secular approach for such systems. The exchange is treated within the Kaplan-Alexander limit and both A and g tensors are included, resulting in spectrum asymmetry, in contrast to previous separate treatments. The two approaches are then used to simulate the powder spectrum of OCH2COO- and compare the results to experimental spectra of an irradiated powder of ZnAc. The powder X-band spectra simulations using the secular approach appear to be accurate. For both the low-field (20 to 200 G) and the high-field (Q-band) regions, however, the nonsecular part of the electronic term and the nuclear Zeeman term, respectively, cannot be neglected. On the other hand, the approximate approach is much faster and consequently more appropriate for treating large, multisite exchanging systems.

  8. Decentralized state estimation for a large-scale spatially interconnected system.

    PubMed

    Liu, Huabo; Yu, Haisheng

    2018-03-01

    A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Stroke Treatment Academic Industry Roundtable

    PubMed Central

    Jovin, Tudor G.; Albers, Gregory W.; Liebeskind, David S.

    2017-01-01

    Background and Purpose The STAIR (Stroke Treatment Academic Industry Roundtable) meeting aims to advance acute stroke therapy development through collaboration between academia, industry, and regulatory institutions. In pursuit of this goal and building on recently available level I evidence of benefit from endovascular therapy (ET) in large vessel occlusion stroke, STAIR IX consensus recommendations were developed that outline priorities for future research in ET. Methods Three key directions for advancing the field were identified: (1) development of systems of care for ET in large vessel occlusion stroke, (2) development of therapeutic approaches adjunctive to ET, and (3) exploring clinical benefit of ET in patient populations insufficiently studied in recent trials. Methodological issues such as optimal trial design and outcome measures have also been addressed. Results Development of systems of care strategies should be geared both toward ensuring broad access to ET for eligible patients and toward shortening time to reperfusion to the minimum possible. Adjunctive therapy development includes neuroprotective approaches, adjuvant microcirculatory/collateral enhancing strategies, and periprocedural management. Future research priorities seeking to expand the eligible patient population are to determine benefit of ET in patients presenting beyond conventional time windows, in patients with large baseline ischemic core lesions, and in other important subgroups. Conclusions Research priorities in ET for large vessel occlusion stroke are to improve systems of care, investigate effective adjuvant therapies, and explore whether patient eligibility could be expanded. PMID:27586682

  10. How Robust Is Your Project? From Local Failures to Global Catastrophes: A Complex Networks Approach to Project Systemic Risk.

    PubMed

    Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders

    2015-01-01

    Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet, recent large-scale project failures suggest that our ability to successfully deliver them is still in its infancy. Such failures can be seen to arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, as triggered by a single task failure and delivered via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Secondly, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Thirdly, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for structural intricacies of a project's underlying task precedence structure as they can provide the conditions upon which large-scale catastrophes materialise.
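
    The cascade mechanism can be reproduced in miniature: fail each task of a random precedence DAG in turn and count the downstream failures. The random graph and the deterministic propagation rule are simplifying assumptions, not the paper's empirical dataset or analytical model:

    ```python
    # Single-task failures cascading through a random task-precedence DAG:
    # most seeds cause no cascade, a handful cause large ones.
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(3)
    n = 200
    adj = [[] for _ in range(n)]          # edge i -> j only if i < j (a DAG)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.02:
                adj[i].append(j)

    def cascade_size(seed):
        failed, q = {seed}, deque([seed])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in failed:
                    failed.add(v)
                    q.append(v)
        return len(failed) - 1            # downstream failures only

    sizes = np.array([cascade_size(i) for i in range(n)])
    print("fraction of tasks causing no cascade:", (sizes == 0).mean())
    print("largest cascade size:", sizes.max())
    ```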

  11. Examining the Effect of Time Constraint on the Online Mastery Learning Approach towards Improving Postgraduate Students' Achievement

    ERIC Educational Resources Information Center

    Ee, Mong Shan; Yeoh, William; Boo, Yee Ling; Boulter, Terry

    2018-01-01

    Time control plays a critical role within the online mastery learning (OML) approach. This paper examines the two commonly implemented mastery learning strategies--personalised system of instructions and learning for mastery (LFM)--by focusing on what occurs when there is an instructional time constraint. Using a large data set from a postgraduate…

  12. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning.

    PubMed

    Custers, Eugène J F M

    2013-08-01

    Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: "System 1," the unconscious, intuitive, or nonanalytic system, and "System 2," the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems. This article outlines the difficulties of this "two-system view" and presents an alternative, developed by K.R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT is characterized by three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called "quasirationality." Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes a case that CCT is better able than a two-system view to describe medical problem solving and clinical reasoning and that it provides clear clues for how to organize training in clinical reasoning.

  13. A systems-based partnership learning model for strengthening primary healthcare

    PubMed Central

    2013-01-01

    Background Strengthening primary healthcare systems is vital to improving health outcomes and reducing inequity. However, there are few tools and models available in published literature showing how primary care system strengthening can be achieved on a large scale. Challenges to strengthening primary healthcare (PHC) systems include the dispersion, diversity and relative independence of primary care providers; the scope and complexity of PHC; limited infrastructure available to support population health approaches; and the generally poor and fragmented state of PHC information systems. Drawing on concepts of comprehensive PHC, integrated quality improvement (IQI) methods, system-based research networks, and system-based participatory action research, we describe a learning model for strengthening PHC that addresses these challenges. We describe the evolution of this model within the Australian Aboriginal and Torres Strait Islander primary healthcare context, successes and challenges in its application, and key issues for further research. Discussion IQI approaches combined with system-based participatory action research and system-based research networks offer potential to support program implementation and ongoing learning across a wide scope of primary healthcare practice and on a large scale. The Partnership Learning Model (PLM) can be seen as an integrated model for large-scale knowledge translation across the scope of priority aspects of PHC. With appropriate engagement of relevant stakeholders, the model may be applicable to a wide range of settings. In IQI, and in the PLM specifically, there is a clear role for research in contributing to refining and evaluating existing tools and processes, and in developing and trialling innovations. Achieving an appropriate balance between funding IQI activity as part of routine service delivery and funding IQI related research will be vital to developing and sustaining this type of PLM. Summary This paper draws together several different previously described concepts and extends the understanding of how PHC systems can be strengthened through systematic and partnership-based approaches. We describe a model developed from these concepts and its application in the Australian Indigenous primary healthcare context, and raise questions about sustainability and wider relevance of the model. PMID:24344640

  14. A large-scale clinical validation of an integrated monitoring system in the emergency department.

    PubMed

    Clifton, David A; Wong, David; Clifton, Lei; Wilson, Sarah; Way, Rob; Pullinger, Richard; Tarassenko, Lionel

    2013-07-01

    We consider an integrated patient monitoring system, combining electronic patient records with high-rate acquisition of patient physiological data. There remain many challenges in increasing the robustness of "e-health" applications to a level at which they are clinically useful, particularly in the use of automated algorithms used to detect and cope with artifact in data contained within the electronic patient record, and in analyzing and communicating the resultant data for reporting to clinicians. There is a consequential "plague of pilots," in which engineering prototype systems do not enter into clinical use. This paper describes an approach in which, for the first time, the Emergency Department (ED) of a major research hospital has adopted such systems for use during a large clinical trial. We describe the disadvantages of existing evaluation metrics when applied to such large trials, and propose a solution suitable for large-scale validation. We demonstrate that machine learning technologies embedded within healthcare information systems can provide clinical benefit, with the potential to improve patient outcomes in the busy environment of a major ED and other high-dependence areas of patient care.

  15. An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data

    NASA Astrophysics Data System (ADS)

    Ghorbani, Esmaeil; Cha, Young-Jin

    2018-04-01

    Structural and mechanical system identification under dynamic loading has been an important research topic over the last three or four decades. Many Kalman-filtering-based approaches have been developed for linear and nonlinear systems. For example, to predict nonlinear systems, an unscented Kalman filter was applied. However, extensive literature reviews show that the unscented Kalman filter still performs poorly on systems with large degrees of freedom. In this research, a modified unscented Kalman filter is proposed by integration of a cubature Kalman filter to improve the system identification performance for systems with large degrees of freedom. The novelty of this work lies in conjugating the unscented transform with the cubature integration concept to find a more accurate output from the transformation of the state vector and its related covariance matrix. To evaluate the proposed method, three different numerical models (i.e., the single degree-of-freedom Bouc-Wen model, the linear 3-degrees-of-freedom system, and the 10-degrees-of-freedom system) are investigated. To evaluate the robustness of the proposed method, high levels of noise in the measured response data are considered. The results show that the proposed method is significantly superior to the traditional UKF for noisy measured data in systems with large degrees of freedom.
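
    The cubature ingredient being conjugated with the unscented transform can be sketched in isolation: the spherical-radial rule propagates 2n equally weighted points through a nonlinearity to approximate the transformed mean and covariance. The function f below is an arbitrary illustration, not one of the paper's benchmark models:

    ```python
    # Spherical-radial cubature transform: 2n equally weighted points.
    import numpy as np

    def cubature_transform(f, x_mean, P):
        n = x_mean.size
        S = np.linalg.cholesky(P)                             # P = S S^T
        xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # 2n unit directions
        pts = x_mean[:, None] + S @ xi                        # cubature points
        Y = np.apply_along_axis(f, 0, pts)                    # propagate columns
        y_mean = Y.mean(axis=1)
        dY = Y - y_mean[:, None]
        return y_mean, dY @ dY.T / (2 * n)                    # equal weights 1/(2n)

    f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])       # toy nonlinearity
    m, P_y = cubature_transform(f, np.array([0.3, 1.0]), np.diag([0.04, 0.09]))
    print("mean:", m, "\ncov:\n", P_y)
    ```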

  16. Beyond the Joint: The Role of Central Nervous System Reorganizations in Chronic Musculoskeletal Disorders.

    PubMed

    Roy, Jean-Sébastien; Bouyer, Laurent J; Langevin, Pierre; Mercier, Catherine

    2017-11-01

    To a large extent, management of musculoskeletal disorders has traditionally focused on structural dysfunctions found within the musculoskeletal system, mainly around the affected joint. While a structural-dysfunction approach may be effective for musculoskeletal conditions in some populations, especially in acute presentations, its effectiveness remains limited in patients with recurrent or chronic musculoskeletal pain. Numerous studies have shown that the human central nervous system can undergo plastic reorganizations following musculoskeletal disorders; however, they can be maladaptive and contribute to altered joint control and chronic pain. In this Viewpoint, the authors argue that to improve rehabilitation outcomes in patients with chronic musculoskeletal pain, a global view of the disorder that incorporates both central (neural) and peripheral (joint-level) changes is needed. The authors also discuss the challenge of evaluating and rehabilitating central changes and the need for large, high-level studies to evaluate approaches incorporating central and peripheral changes and emerging therapies. J Orthop Sports Phys Ther 2017;47(11):817-821. doi:10.2519/jospt.2017.0608.

  17. Computation as the mechanistic bridge between precision medicine and systems therapeutics.

    PubMed

    Hansen, J; Iyengar, R

    2013-01-01

    Over the past 50 years, like molecular cell biology, medicine and pharmacology have been driven by a reductionist approach. The focus on individual genes and cellular components as disease loci and drug targets has been a necessary step in understanding the basic mechanisms underlying tissue/organ physiology and drug action. Recent progress in genomics and proteomics, as well as advances in other technologies that enable large-scale data gathering and computational approaches, is providing new knowledge of both normal and disease states. Systems-biology approaches enable integration of knowledge from different types of data for precision medicine and systems therapeutics. In this review, we describe recent studies that contribute to these emerging fields and discuss how together these fields can lead to a mechanism-based therapy for individual patients.

  18. Computation as the Mechanistic Bridge Between Precision Medicine and Systems Therapeutics

    PubMed Central

    Hansen, J; Iyengar, R

    2014-01-01

    Over the past 50 years, like molecular cell biology, medicine and pharmacology have been driven by a reductionist approach. The focus on individual genes and cellular components as disease loci and drug targets has been a necessary step in understanding the basic mechanisms underlying tissue/organ physiology and drug action. Recent progress in genomics and proteomics, as well as advances in other technologies that enable large-scale data gathering and computational approaches, is providing new knowledge of both normal and disease states. Systems-biology approaches enable integration of knowledge from different types of data for precision medicine and systems therapeutics. In this review, we describe recent studies that contribute to these emerging fields and discuss how together these fields can lead to a mechanism-based therapy for individual patients. PMID:23212109

  19. Sustainable urban water systems in rich and poor cities--steps towards a new approach.

    PubMed

    Newman, P

    2001-01-01

    The 'big pipes in, big pipes out' approach to urban water management was developed in the 19th century for a particular linear urban form. Large, sprawling, car-dependent cities in rich countries are pushing this approach to new limits, and it has never worked in poor cities. An alternative that uses new small-scale technology and is more community-based is suggested for both rich and poor countries. The Sydney Olympics and a demonstration project in Java show that the approach can work.

  20. A semi-automated image analysis procedure for in situ plankton imaging systems.

    PubMed

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle to their use, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory controlled conditions or within clear waters, the target objects are often the minority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
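
    A skeletal version of the HOG-plus-SVM classification stage, assuming scikit-image and scikit-learn; the synthetic "objects" stand in for real segmented ZOOVIS images, and only the first (grouping) level is trained here:

    ```python
    # Level-1 grouping of segmented objects with HOG features and an SVM;
    # a second, group-specific SVM would then separate targets from non-targets.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)

    def fake_object(kind, size=64):
        img = rng.normal(0, 0.05, (size, size))
        if kind == "arrow":
            img[:, 28:36] += 1.0          # elongated vertical body
        elif kind == "copepod":
            img[24:40, 24:40] += 1.0      # compact blob
        return img

    kinds = ["arrow", "copepod"]
    X, y = [], []
    for label, kind in enumerate(kinds):
        for _ in range(40):
            X.append(hog(fake_object(kind), orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
            y.append(label)

    group_clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
    print("level-1 training accuracy:", group_clf.score(np.array(X), np.array(y)))
    ```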

  1. A Semi-Automated Image Analysis Procedure for In Situ Plankton Imaging Systems

    PubMed Central

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C.; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M.

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle to their use, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory controlled conditions or within clear waters, the target objects are often the minority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups. PMID:26010260

  2. A Distributed Processing Approach to Payroll Time Reporting for a Large School District.

    ERIC Educational Resources Information Center

    Freeman, Raoul J.

    1983-01-01

    Describes a system for payroll reporting from geographically disparate locations in which data is entered, edited, and verified locally on minicomputers and then uploaded to a central computer for the standard payroll process. Communications and hardware, time-reporting software, data input techniques, system implementation, and its advantages are…

  3. Promoting Student-Centered Active Learning in Lectures with a Personal Response System

    ERIC Educational Resources Information Center

    Gauci, Sally A.; Dantas, Arianne M.; Williams, David A.; Kemm, Robert E.

    2009-01-01

    We investigated whether an active learning approach, facilitated by a personal response system, would lead to improved student engagement and learning outcomes in large-group physiology lectures for undergraduate science students. We focused on encouraging students' active learning in lectures, whereas previous studies have made more use of…

  4. Gender Differences in Student Performance in Large Lecture Classrooms Using Personal Response Systems ("Clickers") with Narrative Case Studies

    ERIC Educational Resources Information Center

    Kang, Hosun; Lundeberg, Mary; Wolter, Bjorn; delMas, Robert; Herreid, Clyde F.

    2012-01-01

    This study investigated gender differences in science learning between two pedagogical approaches: traditional lecture and narrative case studies using personal response systems ("clickers"). Thirteen instructors of introductory biology classes at 12 different institutions across the USA and Canada used two types of pedagogy (Clicker…

  5. Overcoming Barriers to Rural Children's Mental Health: An Interconnected Systems Public Health Model

    ERIC Educational Resources Information Center

    Huber, Brenda J.; Austen, Julie M.; Tobin, Renée M.; Meyers, Adena B.; Shelvin, Kristal H.; Wells, Michael

    2016-01-01

    A large, Midwestern county implemented a four-tiered public health model of children's mental health with an interconnected systems approach involving education, health care, juvenile justice and community mental health sectors. The community sought to promote protective factors in the lives of all youth, while improving the capacity,…

  6. Analysis of Student Responses to Peer-Instruction Conceptual Questions Answered Using an Electronic Response System: Trends by Gender and Ethnicity

    ERIC Educational Resources Information Center

    Steer, David; McConnell, David; Gray, Kyle; Kortz, Karen; Liang, Xin

    2009-01-01

    This descriptive study investigated students' answers to geoscience conceptual questions answered using electronic personal response systems. Answer patterns were examined to evaluate the peer-instruction pedagogical approach in a large general education classroom setting. (Contains 3 figures and 2 tables.)

  7. Development of Systematic Approaches for Calibration of Subsurface Transport Models Using Hard and Soft Data on System Characteristics and Behavior

    DTIC Science & Technology

    2011-02-02

    ...are qualitative in nature or are collected at discrete points or localized areas in the system. The qualitative data include geology, large-scale stratigraphy and…

  8. A comparison of approaches for estimating bottom-sediment mass in large reservoirs

    USGS Publications Warehouse

    Juracek, Kyle E.

    2006-01-01

    Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for the estimation of loads and yields requires a determination of the total mass of sediment deposited in a reservoir. This method involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time-consuming and less costly alternative to the partition approach.
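
    The volume-to-mass conversion being compared is simple to state in code: the partition approach converts zone by zone with local bulk densities, while the mean approach applies one reservoir-wide density. All numbers below are invented for illustration:

    ```python
    # Partition vs. mean approaches to reservoir bottom-sediment mass.
    import numpy as np

    volume = np.array([1.2e6, 3.4e6, 2.1e6, 0.8e6])   # sediment volume per zone [m^3]
    rho = np.array([720.0, 810.0, 650.0, 900.0])      # dry bulk density per zone [kg/m^3]

    mass_partition = np.sum(volume * rho)             # zone-by-zone conversion
    mass_mean = volume.sum() * rho.mean()             # one mean bulk density

    print(f"partition: {mass_partition:.3e} kg")
    print(f"mean:      {mass_mean:.3e} kg "
          f"({100 * (mass_mean / mass_partition - 1):+.1f}% difference)")
    ```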

  9. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.

  10. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  11. Computer-assisted education and interdisciplinary breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Whatmough, Pamela; Gale, Alastair G.; Wilson, A. R. M.

    1996-04-01

    The diagnosis of breast disease for screening or symptomatic women is largely arrived at by a multi-disciplinary team. We report work on the development and assessment of an inter-disciplinary computer based learning system to support the diagnosis of this disease. The diagnostic process is first modelled from different viewpoints and then appropriate knowledge structures pertinent to the domains of radiologist, pathologist and surgeon are depicted. Initially, the underlying inter-relationships of the mammographic diagnostic approach were detailed; this aspect is largely what is considered here. Ultimately a system is envisaged which will link these specialties and act as a diagnostic aid as well as a multi-media educational system.

  12. A Portable Computer System for Auditing Quality of Ambulatory Care

    PubMed Central

    McCoy, J. Michael; Dunn, Earl V.; Borgiel, Alexander E.

    1987-01-01

    Prior efforts to effectively and efficiently audit quality of ambulatory care based on comprehensive process criteria have been limited largely by the complexity and cost of data abstraction and management. Over the years, several demonstration projects have generated large sets of process criteria and mapping systems for evaluating quality of care, but these paper-based approaches have been impractical to implement on a routine basis. Recognizing that portable microcomputers could solve many of the technical problems in abstracting data from medical records, we built upon previously described criteria and developed a microcomputer-based abstracting system that facilitates reliable and cost-effective data abstraction.

  13. LinkWinds: An Approach to Visual Data Analysis

    NASA Technical Reports Server (NTRS)

    Jacobson, Allan S.

    1992-01-01

    The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.

  14. Abasy Atlas: a comprehensive inventory of systems, global network properties and systems-level elements across bacteria.

    PubMed

    Ibarra-Arellano, Miguel A; Campos-González, Adrián I; Treviño-Quintanilla, Luis G; Tauch, Andreas; Freyre-González, Julio A

    2016-01-01

    The availability of databases electronically encoding curated regulatory networks and of high-throughput technologies and methods to discover regulatory interactions provides an invaluable source of data to understand the principles underpinning the organization and evolution of these networks responsible for cellular regulation. Nevertheless, data in these sources never go beyond the regulon level, despite the fact that regulatory networks are complex hierarchical-modular structures that still challenge our understanding. This creates the need for an inventory of systems across a large range of organisms, a key step in rendering comparative systems biology approaches feasible. In this work, we take the first step towards a global understanding of regulatory network organization by making a cartography of the functional architectures of diverse bacteria. Abasy (Across-BActeria SYstems) Atlas provides a comprehensive inventory of annotated functional systems, global network properties and systems-level elements (global regulators, modular genes shaping functional systems, basal machinery genes and intermodular genes) predicted by the natural decomposition approach for reconstructed and meta-curated regulatory networks across a large range of bacteria, including pathogenically and biotechnologically relevant organisms. The meta-curation of regulatory datasets provides the most complete and reliable set of regulatory interactions currently available, which can even be projected into subsets by considering the force or weight of evidence supporting them or the systems that they belong to. Besides, Abasy Atlas provides data enabling large-scale comparative systems biology studies aimed at understanding the common principles and particular lifestyle adaptations of systems across bacteria. Abasy Atlas contains systems and system-level elements for 50 regulatory networks comprising 78 649 regulatory interactions covering 42 bacteria in nine taxa, containing 3708 regulons and 1776 systems. All this brings together a large corpus of data that will surely inspire studies to generate hypotheses regarding the principles governing the evolution and organization of systems and the functional architectures controlling them. Database URL: http://abasy.ccg.unam.mx. © The Author(s) 2016. Published by Oxford University Press.

  15. Structural stability of nonlinear population dynamics.

    PubMed

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. Focusing on the classic Lotka-Volterra dynamics, because of the linearity of the functional response, it has been possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, the extent to which this linear approach can be generalized to other population dynamics models is unclear. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, which is called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.
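
    For the Lotka-Volterra case that anchors the structural approach, the central computation is short: with dynamics dx_i/dt = x_i(r_i + Σ_j A_ij x_j), the equilibrium is x* = −A⁻¹r, and structural stability can be probed by asking how often randomly perturbed growth rates keep that equilibrium feasible (positive). The community matrix and perturbation scale below are illustrative:

    ```python
    # Feasibility of the Lotka-Volterra equilibrium under perturbed growth rates.
    import numpy as np

    rng = np.random.default_rng(5)
    A = np.array([[-1.0, -0.3, -0.1],
                  [-0.2, -1.0, -0.4],
                  [-0.1, -0.3, -1.0]])    # competitive interaction matrix
    r = np.array([1.0, 0.8, 0.9])         # intrinsic growth rates

    x_star = -np.linalg.solve(A, r)
    print("equilibrium:", x_star, "feasible:", bool(np.all(x_star > 0)))

    # Fraction of randomly perturbed environments that stay feasible:
    r_pert = r + rng.normal(0, 0.3, (10000, 3))
    feasible = np.all(-np.linalg.solve(A, r_pert.T).T > 0, axis=1)
    print("feasible fraction under perturbation:", feasible.mean())
    ```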

  16. Structural stability of nonlinear population dynamics

    NASA Astrophysics Data System (ADS)

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. Focusing on the classic Lotka-Volterra dynamics, because of the linearity of the functional response, it has been possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, the extent to which this linear approach can be generalized to other population dynamics models is unclear. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, which is called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.

  17. Experiments with arbitrary networks in time-multiplexed delay systems

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Schmadel, Don C.; Murphy, Thomas E.; Roy, Rajarshi

    2017-12-01

    We report a new experimental approach using an optoelectronic feedback loop to investigate the dynamics of oscillators coupled on large complex networks with arbitrary topology. Our implementation is based on a single optoelectronic feedback loop with time delays. We use the space-time interpretation of systems with time delay to create large networks of coupled maps. Others have performed similar experiments using high-pass filters to implement the coupling; this restricts the network topology to the coupling of only a few nearest neighbors. In our experiment, the time delays and coupling are implemented on a field-programmable gate array, allowing the creation of networks with arbitrary coupling topology. This system has many advantages: the network nodes are truly identical, the network is easily reconfigurable, and the network dynamics occur at high speeds. We use this system to study cluster synchronization and chimera states in both small and large networks of different topologies.
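
    A software analogue of the experiment, with logistic maps standing in for the optoelectronic nodes: identical maps coupled through an arbitrary, row-normalized adjacency matrix, as the FPGA implementation permits. The coupling strength and random topology are illustrative choices:

    ```python
    # Network of identical logistic maps with arbitrary coupling topology.
    import numpy as np

    rng = np.random.default_rng(6)
    N, steps, eps, a = 32, 1000, 0.35, 3.9
    A = (rng.random((N, N)) < 0.2).astype(float)       # arbitrary adjacency
    np.fill_diagonal(A, 0)
    A /= np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalized coupling

    f = lambda x: a * x * (1 - x)                      # local map
    x = rng.random(N)
    for _ in range(steps):
        x = (1 - eps) * f(x) + eps * A @ f(x)          # diffusive network coupling

    print("spread across nodes after transient:", x.std())
    ```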

  18. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day, and systems and PACS based on DICOM images have become common. This growth reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as a basic platform for the storage of DICOM images. The case study compares an ordinary database, HDF5 and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that retrieving large images from NetCDF has a higher latency than the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.
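
    The comparison can be reproduced in rough form with the netCDF4 and h5py Python bindings: write one large image-like array to each format and time the round trip. The file names, array size and uint8 payload are arbitrary stand-ins for DICOM pixel data, and absolute timings will depend on the machine:

    ```python
    # Round-trip latency of one large image-like array in NetCDF vs. HDF5.
    import time
    import numpy as np
    import netCDF4
    import h5py

    img = np.random.randint(0, 255, (4096, 4096), dtype=np.uint8)

    t0 = time.perf_counter()
    with netCDF4.Dataset("test.nc", "w") as ds:
        ds.createDimension("row", img.shape[0])
        ds.createDimension("col", img.shape[1])
        ds.createVariable("img", "u1", ("row", "col"))[:] = img
    with netCDF4.Dataset("test.nc") as ds:
        _ = ds["img"][:]
    print("NetCDF round trip:", time.perf_counter() - t0, "s")

    t0 = time.perf_counter()
    with h5py.File("test.h5", "w") as f:
        f.create_dataset("img", data=img)
    with h5py.File("test.h5") as f:
        _ = f["img"][:]
    print("HDF5 round trip:", time.perf_counter() - t0, "s")
    ```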

  19. Improving the distinguishable cluster results: spin-component scaling

    NASA Astrophysics Data System (ADS)

    Kats, Daniel

    2018-06-01

    The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.

  20. Space construction base control system

    NASA Technical Reports Server (NTRS)

    Kaczynski, R. F.

    1979-01-01

    Several approaches for an attitude control system are studied and developed for a large space construction base that is structurally flexible. Digital simulations were obtained using the following techniques: (1) the multivariable Nyquist array method combined with closed loop pole allocation, (2) the linear quadratic regulator method. Equations for the three-axis simulation using the multilevel control method were generated and are presented. Several alternate control approaches are also described. A technique is demonstrated for obtaining the dynamic structural properties of a vehicle which is constructed of two or more submodules of known dynamic characteristics.
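
    The second technique listed, the linear quadratic regulator, can be sketched on a toy flexible structure: solve the continuous-time algebraic Riccati equation and read off the optimal state-feedback gain. The two-mode plant below is an invented illustration, not the construction-base model:

    ```python
    # LQR for a lightly damped two-mode structure, x = [q1, q1', q2, q2'].
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [-1.0, -0.01, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [0.0, 0.0, -4.0, -0.02]])
    B = np.array([[0.0], [1.0], [0.0], [1.0]])   # one actuator driving both modes
    Q, R = np.eye(4), np.array([[1.0]])          # state and control weights

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)              # optimal gain, u = -K x
    print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
    ```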

  1. Applications of the CRISPR-Cas9 system in cancer biology

    PubMed Central

    Sánchez-Rivera, Francisco J.; Jacks, Tyler

    2015-01-01

    The prokaryotic type II clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 system is rapidly revolutionizing the field of genetic engineering, allowing researchers to alter the genomes of a large variety of organisms with relative ease. Experimental approaches based on this versatile technology have the potential to transform the field of cancer genetics. Here we review current approaches based on CRISPR-Cas9 for functional studies of cancer genes, with emphasis on its applicability for the development of the next-generation models of human cancer. PMID:26040603

  2. An Analysis of Information Technology Adoption by IRBs of Large Academic Medical Centers in the United States.

    PubMed

    He, Shan; Botkin, Jeffrey R; Hurdle, John F

    2015-02-01

    The clinical research landscape has changed dramatically in recent years in terms of both volume and complexity. This poses new challenges for Institutional Review Boards' (IRBs) review efficiency and quality, especially at large academic medical centers. This article discusses the technical facets of IRB modernization. We analyzed the information technology used by IRBs in large academic institutions across the United States. We found that large academic medical centers have a high electronic IRB adoption rate; however, the capabilities of electronic IRB systems vary greatly. We discuss potential use-cases of a fully exploited electronic IRB system that promise to streamline the clinical research work flow. The key to that approach is a structured and standardized information model for the IRB application. © The Author(s) 2014.

  3. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by forming a Boolean equivalent of the original matrix, which yields smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
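
    The reduction can be demonstrated on a Boolean sparsity pattern: when the pattern decouples into connected components, the eigenvalues of the full matrix are the union of the eigenvalues of its diagonal blocks, so each block can be solved separately. The block-diagonal test matrix is a toy stand-in for an LSS state matrix:

    ```python
    # Boolean pattern -> connected components -> per-block eigenvalue problems.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(7)
    blocks = [rng.normal(size=(k, k)) for k in (4, 6, 6)]
    M = np.zeros((16, 16))
    i = 0
    for B in blocks:
        k = B.shape[0]
        M[i:i + k, i:i + k] = B
        i += k

    pattern = csr_matrix((np.abs(M) > 0).astype(int))   # Boolean equivalent
    n_comp, labels = connected_components(pattern, directed=False)
    eigs = np.concatenate([np.linalg.eigvals(M[np.ix_(labels == c, labels == c)])
                           for c in range(n_comp)])

    assert np.allclose(np.sort_complex(eigs), np.sort_complex(np.linalg.eigvals(M)))
    print(f"{n_comp} blocks reproduce the eigenvalues of the full 16x16 problem")
    ```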

  4. Fuzzy Document Clustering Approach using WordNet Lexical Categories

    NASA Astrophysics Data System (ADS)

    Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.

    Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly, mainly because of the strong need to analyse the huge amount of textual data that reside on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the Fuzzy c-Means algorithm. Experiments are performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering performs strongly: the Fuzzy c-Means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running-time efficiency.
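
    A compact fuzzy c-means sketch of the clustering step, with toy feature vectors standing in for the WordNet-category document representations; the fuzzifier m = 2 is the conventional choice:

    ```python
    # Fuzzy c-means: soft memberships U, weighted centroids, iterate to converge.
    import numpy as np

    rng = np.random.default_rng(8)
    X = np.vstack([rng.normal(0, 0.3, (20, 5)) + 3 * e
                   for e in np.eye(5)[:3]])          # 3 loose "document" groups

    def fcm(X, c=3, m=2.0, iters=100):
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
            inv = d ** (-2 / (m - 1))
            U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
        return U, centers

    U, centers = fcm(X)
    print("hard labels:", U.argmax(axis=1))
    ```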

  5. Urban Security Initiative: Earthquake impacts on the urban ``system of systems``

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheshwari, S.; Jones, E.; Rasmussen, S.

    1999-06-01

    This paper is a discussion of how to address the problems of disasters in a large city, a project titled Urban Security Initiative undertaken by the Los Alamos National Laboratory. The paper first discusses the need to address the problems of disasters in large cities and then provides a framework that is suitable to address this problem. The paper then provides an overview of the module of the project that deals with assessment of earthquake damage on urban infrastructure in large cities and an internet-based approach for consensus building leading to better coordination in the post-disaster period. Finally, the paper discusses the future direction of the project.

  6. Shape determination and control for large space structures

    NASA Technical Reports Server (NTRS)

    Weeks, C. J.

    1981-01-01

    An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or control forms. Results are illustrated by simulations in the one-dimensional case with a flexible beam model, and in the multidimensional case with a finite-element model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or the associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.

  7. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid-body modes orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control law synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.

  8. Interactive large-group teaching in a dermatology course.

    PubMed

    Ochsendorf, F R; Boehncke, W-H; Sommerlad, M; Kaufmann, R

    2006-12-01

    This is a prospective study to determine whether an interactive large-group case-based teaching approach combined with small-group bedside teaching improves student satisfaction and learning outcome in a practical dermatology course. During two consecutive terms, a rotating system of large-group interactive case-study-method teaching with two tutors (one content expert, one process facilitator) and bedside teaching with randomly appointed tutors was evaluated with a nine-item questionnaire and a multiple-choice test performed at the beginning and the end of the course (n = 204/231 students evaluable). The results of three different didactic approaches utilized over the prior year served as a control. The interactive course was rated significantly better (p < 0.0001) than the standard course with regard to all items. The aggregate mark given by the students for the whole course was 1.58 +/- 0.61 (mean +/- SD; range 1 (good) to 5 (poor)). This was significantly better than the standard course (p < 0.0001) and not different from small-group teaching approaches. The mean test results in the final examination improved significantly (p < 0.01). The combination of large-group interactive teaching and small-group bedside teaching was well accepted, improved the learning outcome, was rated as good as a small-group didactic approach, and needed fewer resources in terms of personnel.

  9. Stepwise approach to myopathy in systemic disease.

    PubMed

    Chawla, Jasvinder

    2011-01-01

    Muscle diseases constitute a large variety of both acquired and hereditary disorders. Myopathies in systemic disease result from several different disease processes including endocrine, inflammatory, paraneoplastic, infectious, drug- and toxin-induced, critical illness myopathy, metabolic, and myopathies with other systemic disorders. Patients with systemic myopathies often present acutely or subacutely. On the other hand, familial myopathies or dystrophies generally present in a chronic fashion, with the exception of metabolic myopathies, whose symptoms can on occasion be precipitated acutely. Most of the inflammatory myopathies can have a chance association with malignant lesions; the incidence appears to be specifically increased only in patients with dermatomyositis. In dealing with myopathies associated with systemic illnesses, the focus will be on the acquired causes. Management is beyond the scope of this chapter. Prognosis depends upon the underlying cause and is, most of the time, good. In approaching a patient with suspected myopathy from systemic disease, a stepwise approach is utilized.

  10. Real-Time Smart Grids Control for Preventing Cascading Failures and Blackout using Neural Networks: Experimental Approach for N-1-1 Contingency

    NASA Astrophysics Data System (ADS)

    Zarrabian, Sina; Belkacemi, Rabie; Babalola, Adeniyi A.

    2016-12-01

    In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANN) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after N-1-1 contingency condition in real-time. The fundamental contribution of this research is to deploy the machine learning concept for preventing blackout at early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission line outage after N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system by simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control by preventing blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges. The ANN approach showed more accurate and robust response than the MAS algorithm.

  11. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  12. Towards a comprehensive understanding of emerging dynamics and function of pancreatic islets: A complex network approach. Comment on "Network science of biological systems at different scales: A review" by Gosak et al.

    NASA Astrophysics Data System (ADS)

    Loppini, Alessandro

    2018-03-01

    Complex network theory represents a comprehensive mathematical framework to investigate biological systems, ranging from sub-cellular and cellular scales up to large-scale networks describing species interactions and ecological systems. In their exhaustive and comprehensive work [1], Gosak et al. discuss several scenarios in which the network approach was able to uncover general properties and underlying mechanisms of cells organization and regulation, tissue functions and cell/tissue failure in pathology, by the study of chemical reaction networks, structural networks and functional connectivities.

  13. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.

  14. Automatic Fastening Large Structures: a New Approach

    NASA Technical Reports Server (NTRS)

    Lumley, D. F.

    1985-01-01

    The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid-1985. The automatic riveting system incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts-program re-synchronization and part edge margin control; and an automatic rivet selection/handling system.

  15. Toward a Systems Approach to Enteric Pathogen Transmission: From Individual Independence to Community Interdependence

    PubMed Central

    Eisenberg, Joseph N.S.; Trostle, James; Sorensen, Reed J.D.; Shields, Katherine F.

    2012-01-01

    Diarrheal disease is still a major cause of mortality and morbidity worldwide; thus a large body of research has been produced describing its risks. We review more than four decades of literature on diarrheal disease epidemiology. These studies detail a progression in the conceptual understanding of transmission of enteric pathogens and demonstrate that diarrheal disease is caused by many interdependent pathways. However, arguments by diarrheal disease researchers in favor of attending to interaction and interdependencies have only recently yielded more formal systems-level approaches. Therefore, interdependence has not yet been highlighted in significant new research initiatives or policy decisions. We argue for a systems-level framework that will contextualize transmission and inform prevention and control efforts so that they can integrate transmission pathways. These systems approaches should be employed to account for community effects (i.e., interactions among individuals and/or households). PMID:22224881

  16. A mesh-free approach to acoustic scattering from multiple spheres nested inside a large sphere by using diagonal translation operators.

    PubMed

    Hesford, Andrew J; Astheimer, Jeffrey P; Greengard, Leslie F; Waag, Robert C

    2010-02-01

    A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method.

  17. A mesh-free approach to acoustic scattering from multiple spheres nested inside a large sphere by using diagonal translation operators

    PubMed Central

    Hesford, Andrew J.; Astheimer, Jeffrey P.; Greengard, Leslie F.; Waag, Robert C.

    2010-01-01

    A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method. PMID:20136208

  18. School Finance: A Primer. A Practical Guide to the Structural Components of, Alternative Approaches to, and Policy Questions about State School Finance Systems.

    ERIC Educational Resources Information Center

    Augenblick, John; And Others

    Although school funding structures are similar in many ways across the states, no two states have school finance systems that are precisely the same. School finance systems which are used to achieve multiple objectives, must consider characteristics of numerous school districts, distribute large amounts of money, and have developed incrementally…

  19. Integrative Genomics and Computational Systems Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Huang, Yufei; Zhang, Bing

    The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still young. There is a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.

  20. From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.

    PubMed

    Poldrack, Russell A; Yarkoni, Tal

    2016-01-01

    A major goal of cognitive neuroscience is to delineate how brain systems give rise to mental function. Here we review the increasingly large role informatics-driven approaches are playing in such efforts. We begin by reviewing a number of challenges conventional neuroimaging approaches face in trying to delineate brain-cognition mappings--for example, the difficulty in establishing the specificity of postulated associations. Next, we demonstrate how these limitations can potentially be overcome using complementary approaches that emphasize large-scale analysis--including meta-analytic methods that synthesize hundreds or thousands of studies at a time; latent-variable approaches that seek to extract structure from data in a bottom-up manner; and predictive modeling approaches capable of quantitatively inferring mental states from patterns of brain activity. We highlight the underappreciated but critical role for formal cognitive ontologies in helping to clarify, refine, and test theories of brain and cognitive function. Finally, we conclude with a speculative discussion of what future informatics developments may hold for cognitive neuroscience.

  1. From brain maps to cognitive ontologies: informatics and the search for mental structure

    PubMed Central

    Poldrack, Russell A.; Yarkoni, Tal

    2015-01-01

    A major goal of cognitive neuroscience is to delineate how brain systems give rise to mental function. Here we review the increasingly large role informatics-driven approaches are playing in such efforts. We begin by reviewing a number of challenges conventional neuroimaging approaches face in trying to delineate brain-cognition mappings—for example, the difficulty in establishing the specificity of postulated associations. Next, we demonstrate how these limitations can potentially be overcome using complementary approaches that emphasize large-scale analysis—including meta-analytic methods that synthesize hundreds or thousands of studies at a time; latent-variable approaches that seek to extract structure from data in a bottom-up manner; and predictive modeling approaches capable of quantitatively inferring mental states from patterns of brain activity. We highlight the underappreciated but critical role for formal cognitive ontologies in helping to clarify, refine, and test theories of brain and cognitive function. Finally, we conclude with a speculative discussion of what future informatics developments may hold for cognitive neuroscience. PMID:26393866

  2. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
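    As a hedged illustration of the general scheme (sign-based quantization of a descriptor into a bit-vector, plus an inverted file keyed by the code), the sketch below uses random projections in place of the paper's learned quantizer; identifiers such as index_box are invented.

    # Sketch: quantize a 128-D descriptor to a 16-bit code via signs of
    # random projections, then index (code -> box IDs) in an inverted file.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    P = rng.normal(size=(16, 128))                # projection matrix: 128-D -> 16 bits

    def quantize(desc):
        bits = (P @ desc) > 0
        return sum(int(b) << i for i, b in enumerate(bits))

    inverted = defaultdict(list)                  # code -> list of box IDs

    def index_box(box_id, descriptors):
        for d in descriptors:
            inverted[quantize(d)].append(box_id)

    def query(desc):
        return inverted.get(quantize(desc), [])  # exact-code lookup

    # Index two boxes, then query with a slightly noisy copy of one descriptor.
    d1, d2 = rng.normal(size=128), rng.normal(size=128)
    index_box("img7_box3", [d1])
    index_box("img9_box1", [d2])
    print(query(d1 + 0.01 * rng.normal(size=128)))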

  3. Exploring Complex Systems Aspects of Blackout Risk and Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, David E; Carreras, Benjamin A; Lynch, Vickie E

    2011-01-01

    Electric power transmission systems are a key infrastructure, and blackouts of these systems have major consequences for the economy and national security. Analyses of blackout data suggest that blackout size distributions have a power law form over much of their range. This result is an indication that blackouts behave as a complex dynamical system. We use a simulation of an upgrading power transmission system to investigate how these complex system dynamics impact the assessment and mitigation of blackout risk. The mitigation of failures in complex systems needs to be approached with care. The mitigation efforts can move the system to a new dynamic equilibrium while remaining near criticality and preserving the power law region. Thus, while the absolute frequency of blackouts of all sizes may be reduced, the underlying forces can still cause the relative frequency of large blackouts to small blackouts to remain the same. Moreover, in some cases, efforts to mitigate small blackouts can even increase the frequency of large blackouts. This result occurs because the large and small blackouts are not mutually independent, but are strongly coupled by the complex dynamics.

  4. Knowledge base rule partitioning design for CLIPS

    NASA Technical Reports Server (NTRS)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project (ADP-2302) main objective is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impacts during load and unload cycles. The second objective is to produce highly reliable intelligent systems, which requires simple and automated approaches to the KB verification and validation task. Partitioning the KB reduces overall rule-interaction complexity. Reduced interaction simplifies the necessary V&V testing by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests for real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
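    A minimal sketch of the 'blocks' idea follows, assuming a phase-keyed clustering of rules and facts; the rule engine itself is abstracted to print statements, so this is only the load/unload bookkeeping, not CLIPS code.

    # Sketch: swap blocks of rules/facts in and out as the phase changes,
    # so the active rule base stays small at any given time.
    class PartitionedKB:
        def __init__(self, blocks):
            self.blocks = blocks                  # phase -> list of rules/facts
            self.active = set()

        def switch_phase(self, phase):
            """Unload everything not needed by `phase`, then load its block."""
            wanted = set(self.blocks.get(phase, []))
            for item in self.active - wanted:
                print("retract:", item)           # stand-in for engine unload
            for item in wanted - self.active:
                print("assert: ", item)           # stand-in for engine load
            self.active = wanted

    kb = PartitionedKB({
        "prelaunch": ["check_fuel_rule", "valve_state_facts"],
        "ascent":    ["engine_monitor_rule", "abort_rule"],
    })
    kb.switch_phase("prelaunch")
    kb.switch_phase("ascent")                     # prelaunch block is stripped out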

  5. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wavefront processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
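    The dependency pattern that defines wavefront processing can be shown in a few lines. This toy sketch sweeps a 2-D grid along anti-diagonals; cells on one diagonal are mutually independent, which is what a hierarchical implementation would exploit by mapping them to cores sharing fast on-chip links (the stencil here is invented).

    # Toy wavefront sweep: cell (i, j) depends on its up and left neighbours,
    # so work proceeds one anti-diagonal at a time.
    import numpy as np

    n = 5
    val = np.zeros((n, n))
    for d in range(2 * n - 1):                   # one anti-diagonal per step
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        for i, j in cells:                       # independent within a diagonal
            up   = val[i - 1, j] if i > 0 else 0.0
            left = val[i, j - 1] if j > 0 else 0.0
            val[i, j] = up + left + 1.0          # toy stencil with the dependency
    print(val)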

  6. Massive Multi-Agent Systems Control

    NASA Technical Reports Server (NTRS)

    Campagne, Jean-Charles; Gardon, Alain; Collomb, Etienne; Nishida, Toyoaki

    2004-01-01

    In order to build massive multi-agent systems, considered as complex and dynamic systems, one needs a method to analyze and control the system. We suggest an approach using morphology to represent and control the state of large organizations composed of a great number of light software agents. Morphology is understood as representing the state of the multi-agent system as shapes in an abstract geometrical space, a notion close to that of phase space in physics.

  7. Array-type miniature interferometer as the core optical microsystem of an optical coherence tomography device for tissue inspection

    NASA Astrophysics Data System (ADS)

    Passilly, Nicolas; Perrin, Stéphane; Lullin, Justine; Albero, Jorge; Bargiel, Sylwester; Froehly, Luc; Gorecki, Christophe; Krauter, Johann; Osten, Wolfgang; Wang, Wei-Shan; Wiemer, Maik

    2016-04-01

    Some of the critical limitations for widespread use in medical applications of optical devices, such as confocal or optical coherence tomography (OCT) systems, are related to their cost and large size. Indeed, although quite efficient systems are available on the market, e.g. in dermatology, they equip only a few hospitals and hence, are far from being used as an early detection tool, for instance in screening of patients for early detection of cancers. In this framework, the VIAMOS project aims at proposing a concept of miniaturized, batch-fabricated and lower-cost, OCT system dedicated to non-invasive skin inspection. In order to image a large skin area, the system is based on a full-field approach. Moreover, since it relies on micro-fabricated devices whose fields of view are limited, 16 small interferometers are arranged in a dense array to perform multi-channel simultaneous imaging. Gaps between each channel are then filled by scanning of the system followed by stitching. This approach allows imaging a large area without the need of large optics. It also avoids the use of very fast and often expensive laser sources, since instead of a single point detector, almost 250 thousands pixels are used simultaneously. The architecture is then based on an array of Mirau interferometers which are interesting for their vertical arrangement compatible with vertical assembly at the wafer-level. Each array is consequently a local part of a stack of seven wafers. This stack includes a glass lens doublet, an out-of-plane actuated micro-mirror for phase shifting, a spacer and a planar beam-splitter. Consequently, different materials, such as silicon and glass, are bonded together and well-aligned thanks to lithographic-based fabrication processes.

  8. System engineering of the Atacama Large Millimeter/submillimeter Array

    NASA Astrophysics Data System (ADS)

    Bhatia, Ravinder; Marti, Javier; Sugimoto, Masahiro; Sramek, Richard; Miccolis, Maurizio; Morita, Koh-Ichiro; Arancibia, Demián.; Araya, Andrea; Asayama, Shin'ichiro; Barkats, Denis; Brito, Rodrigo; Brundage, William; Grammer, Wes; Haupt, Christoph; Kurlandczyk, Herve; Mizuno, Norikazu; Napier, Peter; Pizarro, Eduardo; Saini, Kamaljeet; Stahlman, Gretchen; Verzichelli, Gianluca; Whyborn, Nick; Yagoubov, Pavel

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) will be composed of 66 high precision antennae located at 5000 meters altitude in northern Chile. This paper will present the methodology, tools and processes adopted to system engineer a project of high technical complexity, by system engineering teams that are remotely located and from different cultures, and in accordance with a demanding schedule and within tight financial constraints. The technical and organizational complexity of ALMA requires a disciplined approach to the definition, implementation and verification of the ALMA requirements. During the development phase, System Engineering chairs all technical reviews and facilitates the resolution of technical conflicts. We have developed analysis tools to analyze the system performance, incorporating key parameters that contribute to the ultimate performance, and are modeled using best estimates and/or measured values obtained during test campaigns. Strict tracking and control of the technical budgets ensures that the different parts of the system can operate together as a whole within ALMA boundary conditions. System Engineering is responsible for acceptances of the thousands of hardware items delivered to Chile, and also supports the software acceptance process. In addition, System Engineering leads the troubleshooting efforts during testing phases of the construction project. Finally, the team is conducting System level verification and diagnostics activities to assess the overall performance of the observatory. This paper will also share lessons learned from these system engineering and verification approaches.

  9. A Framework of Working Across Disciplines in Early Design and R&D of Large Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.

    2015-01-01

    This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.

  10. On science versus engineering in hydrological modelling

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke

    2017-04-01

    It is always stressed that hydrological modelling is very important: to prevent floods, to mitigate droughts, to ensure food production or nature conservation. All very true, but I believe that focussing so much on the application of our knowledge (which I call 'the engineering approach') does not stimulate thorough system understanding (which I call 'the scientific approach'). In many studies, science and engineering approaches are mixed, which results in large uncertainty, e.g. due to a lack of system understanding. To what extent engineering and science approaches are mixed depends on the Philosophy of Science of the researcher; verificationism seems to be more closely related to engineering than falsificationism or Bayesianism. In order to grow our scientific knowledge, which means increasing our understanding of the system, we need to be more critical towards the models that we use, but also recognize all the processes that influence the hydrological cycle. In an era called 'The Anthropocene', the influence of humans on the water system can no longer be neglected, and if we choose a scientific approach we have to account for human-induced processes. Summarizing, I believe that we have to account for human impact on the hydrological system, but we have to resist the temptation to directly quantify the hydrological impact on the human system.

  11. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

    An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one-by-one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small-size population distributed on parallel, synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogenics diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
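    For illustration, a serial micro-genetic algorithm sketch follows: a very small population evolves for a few generations with elitism, then all non-elite members are re-randomized, giving the exploitation/exploration mix mentioned above. The bit-mask fitness is a stand-in for a real fault-diagnosis objective, and nothing here reflects the distributed CERN implementation.

    # Micro-GA sketch: tiny population, elitism, crossover only, and periodic
    # restarts that re-randomize everyone except the elite.
    import random

    def fitness(bits):                            # toy objective: all-ones fault mask
        return sum(bits)

    def micro_ga(n_bits=12, pop_size=5, restarts=40, gens=8, seed=1):
        random.seed(seed)
        new = lambda: [random.randint(0, 1) for _ in range(n_bits)]
        elite = new()
        for _ in range(restarts):
            pop = [elite] + [new() for _ in range(pop_size - 1)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:2]
                children = []
                for _ in range(pop_size - 1):
                    cut = random.randrange(1, n_bits)    # single-point crossover
                    children.append(parents[0][:cut] + parents[1][cut:])
                pop = [pop[0]] + children         # elitism: keep the best
            elite = max(pop, key=fitness)
        return elite

    print(micro_ga())                             # best fault mask found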

  12. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by broad communities, far in excess of the raw instrument outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying 'black boxes' so that they are productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, software interconnected systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware to the system software stacks and libraries and the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the assurance that previous outcomes are still relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  13. A scientific operations plan for the large space telescope. [ground support system design

    NASA Technical Reports Server (NTRS)

    West, D. K.

    1977-01-01

    The paper describes an LST ground system which is compatible with the operational requirements of the LST. The goal of the approach is to minimize the cost of post launch operations without seriously compromising the quality and total throughput of LST science. Attention is given to cost constraints and guidelines, the telemetry operations processing systems (TELOPS), the image processing facility, ground system planning and data flow, and scientific interfaces.

  14. An adaptive learning control system for large flexible structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.

    1985-01-01

    The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.
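    As a minimal illustration of least-squares identification of modal parameters, the sketch below fits the coefficients of a single-mode equation x'' = a x + b x' to a sampled free response and recovers frequency and damping; the signal and fitting scheme are illustrative, not the procedure applied to the NASA Langley beam data.

    # Least-squares modal identification sketch: from x'' = a x + b x',
    # omega = sqrt(-a) and zeta = -b / (2 omega).
    import numpy as np

    dt = 0.01
    t = np.arange(0, 10, dt)
    omega_true, zeta_true = 2.0, 0.05             # treated as unknown below
    x = np.exp(-zeta_true * omega_true * t) \
        * np.cos(omega_true * np.sqrt(1 - zeta_true**2) * t)

    xd  = np.gradient(x, dt)                      # numerical derivatives
    xdd = np.gradient(xd, dt)
    A = np.column_stack([x, xd])
    (a, b), *_ = np.linalg.lstsq(A, xdd, rcond=None)

    omega = np.sqrt(-a)
    zeta = -b / (2 * omega)
    print(f"omega ~ {omega:.3f}, zeta ~ {zeta:.4f}")   # close to 2.0 and 0.05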

  15. A flight evaluation of VTOL jet transport under visual and simulated instrument conditions

    NASA Technical Reports Server (NTRS)

    Holzhauser, C. A.; Morello, S. A.; Innis, R. C.; Patton, J. M., Jr.

    1972-01-01

    A flight investigation was performed with the Dornier DO-31 VTOL to evaluate the performance, handling qualities, and operating characteristics that are considered to be important in the operation of a commercial VTOL transport in the terminal area. The DO-31, a 20,000 kilogram transport, has a mixed jet propulsion system; the main engines have nozzles that deflect from a cruise to a hover position, and vertical lift engines are operated below 170 knots. The VTOL mode incorporates pitch and roll attitude and yaw rate stabilization. The tests concentrated on the transition, approach, and vertical landing. The mixed jet propulsion system provided a large usable performance envelope that enabled simulated IFR approaches to be made on 7 deg and 12 deg glide slopes. In these approaches, management of thrust magnitude and direction was a primary problem, and some form of integrating the controls will be necessary. The handling qualities evaluation pointed out the need for additional research to define flight path criteria. The aircraft had satisfactory control and stability in hover out of ground effect. The recirculation effects in vertical landing were large below 15 meters.

  16. Effects of wildfire on catchment runoff response: a modeling approach to detect changes in snow-dominated forested catchments

    Treesearch

    Jan Seibert; Jeffrey J. McDonnell; Richard D. Woodsmith

    2010-01-01

    Wildfire is an important disturbance affecting hydrological processes through alteration of vegetation cover and soil characteristics. The effects of fire on hydrological systems at the catchment scale are not well known, largely because site specific data from both before and after wildfire are rare. In this study a modelling approach was employed for change detection...

  17. Development of an Automated Impact Hammer for Modal Analysis of Structures

    DTIC Science & Technology

    2012-02-01

    (Report table-of-contents fragments omitted; recoverable abstract fragments follow.) The system uses distributed Fibre Bragg Gratings (FBGs) in optical fibres, integrated with an FBG interrogation system. The modified approach to SIDER has been given the name iSIDER, or inverse SIDER, to reflect the fact that the response is measured at many locations using a large array of surface-mounted FBG strain sensors [2]. FBGs are ideally suited to the roving response approach.

  18. Determination of aerodynamic sensitivity coefficients for wings in transonic flow

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.; El-Banna, Hesham M.

    1992-01-01

    The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.
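    A small worked sketch of the quasianalytical idea follows, under invented toy equations: once the state u solving the residual system R(u, a) = 0 is known, the sensitivity du/da follows from a single linear solve of (dR/du)(du/da) = -(dR/da), rather than from re-solving the flow equations at a perturbed a.

    # Quasianalytical sensitivity sketch on a toy 2-equation residual system.
    import numpy as np

    def R(u, a):                                   # toy "flow" residual
        return np.array([u[0]**2 + a * u[1] - 4.0,
                         u[0] + u[1]**3 - a])

    def dR_du(u, a):                               # Jacobian w.r.t. the state
        return np.array([[2 * u[0], a],
                         [1.0, 3 * u[1]**2]])

    def dR_da(u, a):                               # partial w.r.t. the parameter
        return np.array([u[1], -1.0])

    a = 2.0
    u = np.array([1.5, 0.5])
    for _ in range(30):                            # Newton solve of R(u, a) = 0
        u -= np.linalg.solve(dR_du(u, a), R(u, a))

    du_da = np.linalg.solve(dR_du(u, a), -dR_da(u, a))   # one extra linear solve
    print(u, du_da)                                # state and its sensitivity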

  19. The Ovine Cerebral Venous System: Comparative Anatomy, Visualization, and Implications for Translational Research

    PubMed Central

    Nitzsche, Björn; Lobsien, Donald; Seeger, Johannes; Schneider, Holm; Boltze, Johannes

    2014-01-01

    Cerebrovascular diseases are significant causes of death and disability in humans. Improvements in diagnostic and therapeutic approaches strongly rely on adequate gyrencephalic, large animal models being demanded for translational research. Ovine stroke models may represent a promising approach but are currently limited by insufficient knowledge regarding the venous system of the cerebral angioarchitecture. The present study was intended to provide a comprehensive anatomical analysis of the intracranial venous system in sheep as a reliable basis for the interpretation of experimental results in such ovine models. We used corrosion casts as well as contrast-enhanced magnetic resonance venography to scrutinize blood drainage from the brain. This combined approach yielded detailed and, to some extent, novel findings. In particular, we provide evidence for chordae Willisii and lateral venous lacunae, and report on connections between the dorsal and ventral sinuses in this species. For the first time, we also describe venous confluences in the deep cerebral venous system and an ‘anterior condylar confluent’ as seen in humans. This report provides a detailed reference for the interpretation of venous diagnostic imaging findings in sheep, including an assessment of structure detectability by in vivo (imaging) versus ex vivo (corrosion cast) visualization methods. Moreover, it features a comprehensive interspecies-comparison of the venous cerebral angioarchitecture in man, rodents, canines and sheep as a relevant large animal model species, and describes possible implications for translational cerebrovascular research. PMID:24736654

  20. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach.

    PubMed

    Plehn, Thomas; May, Volkhard

    2017-01-21

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.
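    A heavily simplified sketch of a linear stochastic Schrödinger propagation is given below for a two-level system with pure dephasing (a toy model, not the FMO or sexiphenyl/ZnO systems studied in the paper). Individual trajectory norms drift, which is the numerical issue highlighted above; only the ensemble average is physical.

    # Euler-Maruyama propagation of a linear stochastic Schroedinger equation:
    # dpsi = (-iH - L^dag L / 2) psi dt + L psi dW, averaged over trajectories.
    import numpy as np

    rng = np.random.default_rng(0)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H = 0.5 * sx                                   # toy two-level Hamiltonian
    L = np.sqrt(0.1) * sz                          # pure-dephasing coupling
    dt, steps, trajs = 0.01, 1000, 100

    pop = np.zeros(steps)                          # ensemble-averaged population
    for _ in range(trajs):
        psi = np.array([1.0, 0.0], dtype=complex)
        for k in range(steps):
            dW = rng.normal(0.0, np.sqrt(dt))      # Wiener increment
            drift = (-1j * H - 0.5 * L.conj().T @ L) @ psi
            psi = psi + drift * dt + (L @ psi) * dW   # Euler-Maruyama step
            pop[k] += abs(psi[0]) ** 2 / trajs     # norm drifts per trajectory;
                                                   # only the average stays ~1
    print(pop[::200].round(3))                     # damped Rabi-like oscillation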

  1. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth observing missions which employ spaceborne combined radar and radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system, and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods, but each approach had its shortcomings, as will be detailed in this paper.

  2. Spacewatch Survey of the Solar System

    NASA Technical Reports Server (NTRS)

    McMillan, Robert S.

    2000-01-01

    The purpose of the Spacewatch project is to explore the various populations of small objects throughout the solar system. Statistics on all classes of small bodies are needed to infer their physical and dynamical evolution. More Earth Approachers need to be found to assess the impact hazard. (We have adopted the term "Earth Approacher", EA, to include all those asteroids, nuclei of extinct short period comets, and short period comets that can approach close to Earth. The adjective "near" carries potential confusion, as we have found in communicating with the media, since it suggests that the objects are always near Earth, following it like a cloud.) Persistent and voluminous accumulation of astrometry of incidentally observed main belt asteroids (MBAs) will eventually permit the Minor Planet Center (MPC) to determine the orbits of large numbers (tens of thousands) of asteroids. Such a large body of information will ultimately allow better resolution of orbit classes and the determination of luminosity functions of the various classes. Comet and asteroid recoveries are essential services to planetary astronomy. Statistics of objects in the outer solar system (Centaurs, scattered-disk objects, and Trans-Neptunian Objects; TNOs) will ultimately tell part of the story of solar system evolution. Spacewatch led the development of sky surveying by electronic means and has acted as a responsible interface to the media and general public on this discipline and on the issue of the hazard from impacts by asteroids and comets.

  3. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach

    NASA Astrophysics Data System (ADS)

    Plehn, Thomas; May, Volkhard

    2017-01-01

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  4. Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.

    PubMed

    Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal

    2016-12-01

    In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to the popularity of soft computing approaches, they have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results than traditional approaches. Soft computing approaches have the ability to adapt to the problem domain. Another aspect is a good balance between the exploration and exploitation processes. These aspects make soft computing approaches powerful, reliable and efficient, and thus suitable and competent for healthcare data. The first objective of this review paper is to identify the various soft computing approaches used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data, among them particle swarm optimization, genetic algorithms, artificial neural networks, and support vector machines. A detailed discussion of these approaches is presented in the literature section. This work summarizes the soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized into five different categories based on methodology: classification-model-based systems, expert systems, fuzzy and neuro-fuzzy systems, rule-based systems, and case-based systems. Many techniques are discussed within these categories, and all discussed techniques are also summarized in tables. This work also reports the accuracy rates of the soft computing techniques, with tabular information provided for each category including author details, technique, disease, and utility/accuracy.

  5. A Modular Approach To Developing A Large Deployable Reflector

    NASA Astrophysics Data System (ADS)

    Pittman, R.; Leidich, C.; Mascy, F.; Swenson, B.

    1984-01-01

    NASA is currently studying the feasibility of developing a Large Deployable Reflector (LDR) astronomical facility to perform astrophysical studies of the infrared and submillimeter portion of the spectrum in the mid-1990s. The LDR concept was recommended by the Astronomy Survey Committee of the National Academy of Sciences as one of two space-based projects to be started this decade. The current baseline calls for a 20 m (65.6 ft) aperture telescope diffraction limited at 30 μm and automatically deployed from a single Shuttle launch. The volume, performance, and single-launch constraints place great demands on the technology and place LDR beyond the state of the art in certain areas such as lightweight reflector segments. The advent of the Shuttle is opening up many new options and capabilities for producing large space systems. Until now, LDR has always been conceived as an integrated system, deployed autonomously in a single launch. This paper will look at a combination of automatic deployment and on-orbit assembly that may reduce the technological complexity and cost of the LDR system. Many technological tools are now in use or under study that will greatly enhance our capabilities to do assembly in space. Two Shuttle volume budget scenarios will be examined to assess the potential of these tools to reduce the LDR system complexity. Further study will be required to reach the optimal combination of deployment and assembly, since in most cases the capabilities of these new tools have not been demonstrated. In order to take maximum advantage of these concepts, the design of LDR must be flexible and allow one subsystem to be modified without adversely affecting the entire system. One method of achieving this flexibility is to use a modular design approach in which the major subsystems are physically separated during launch and assembled on orbit. A modular design approach facilitates this flexibility but requires that the subsystems be interfaced in a simple, straightforward, and controlled manner. NASA is currently defining a technology development plan for LDR which will identify the technology advances that are required. The modular approach offers the flexibility to easily incorporate these new advances into the design.

  6. What will the future of cloud-based astronomical data processing look like?

    NASA Astrophysics Data System (ADS)

    Green, Andrew W.; Mannering, Elizabeth; Harischandra, Lloyd; Vuong, Minh; O'Toole, Simon; Sealey, Katrina; Hopkins, Andrew M.

    2017-06-01

    Astronomy is rapidly approaching an impasse: very large datasets require remote or cloud-based parallel processing, yet many astronomers still try to download the data and develop serial code locally. Astronomers understand the need for change, but the hurdles remain high. We are developing a data archive designed from the ground up to simplify and encourage cloud-based parallel processing. While the volume of data we host remains modest by some standards, it is still large enough that download and processing times are measured in days and even weeks. We plan to implement a Python-based, notebook-like interface that automatically parallelises execution. Our goal is to provide an interface sufficiently familiar and user-friendly that it encourages the astronomer to run their analysis on our system in the cloud: astroinformatics as a service. We describe how our system addresses the approaching impasse in astronomy using the SAMI Galaxy Survey as an example.
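    A toy sketch of what 'automatically parallelised' execution could look like from the user's side, using only the Python standard library; analyze_galaxy and parallel_map are invented names, not the archive's actual interface.

    # Sketch: a user-facing map() that fans per-object work out over processes,
    # so notebook code written as a plain loop runs in parallel server-side.
    from concurrent.futures import ProcessPoolExecutor

    def analyze_galaxy(galaxy_id):
        # stand-in for a per-object reduction (e.g. one survey datacube)
        return galaxy_id, sum(i * i for i in range(10_000)) % 97

    def parallel_map(fn, items, workers=4):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(fn, items))

    if __name__ == "__main__":
        results = parallel_map(analyze_galaxy, range(16))
        print(results[:4])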

  7. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing extremely demanding computation, storage, management and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements in the form of tailored High Performance Computing (HPC) facilities require a greener approach towards the Information and Communications Technologies (ICT) adopted for the data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and use of electricity at system level are paramount to avoid future inefficiencies and higher costs and to enable fulfillment of the Key Science Cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.

  8. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    PubMed

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, Nudged Elastic Band, and umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.
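
    Of the search algorithms listed, MC with minimization is the simplest to sketch. The toy loop below illustrates the general technique (basin hopping between local minima with a Metropolis acceptance test) on a one-dimensional double-well potential; it is a generic illustration, not CAST's implementation.

    ```python
    # Generic Monte Carlo-with-minimization loop on a toy 1-D potential.
    import math
    import random

    def energy(x):
        # Toy double-well potential standing in for a molecular force field.
        return (x**2 - 1.0)**2 + 0.3 * x

    def minimize(x, step=1e-3, iters=2000):
        # Crude gradient descent using a numerical derivative.
        for _ in range(iters):
            grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
            x -= step * grad
        return x

    def mc_minimization(x0=0.0, kT=0.5, moves=200):
        x_cur = minimize(x0)
        e_cur = energy(x_cur)
        best = (x_cur, e_cur)
        for _ in range(moves):
            # Random displacement followed by local minimization.
            x_new = minimize(x_cur + random.uniform(-0.8, 0.8))
            e_new = energy(x_new)
            # Metropolis acceptance test on the minimized energies.
            if e_new < e_cur or random.random() < math.exp((e_cur - e_new) / kT):
                x_cur, e_cur = x_new, e_new
                best = min(best, (x_cur, e_cur), key=lambda t: t[1])
        return best

    print(mc_minimization())   # (position, energy) of the best minimum found
    ```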

  9. Modeling of Kerena Emergency Condenser

    NASA Astrophysics Data System (ADS)

    Bryk, Rafał; Schmidt, Holger; Mull, Thomas; Wagner, Thomas; Ganzmann, Ingo; Herbst, Oliver

    2017-12-01

    KERENA is an innovative boiling water reactor concept equipped with several passive safety systems. For the experimental verification of the performance of these systems and for code validation, the Integral Test Stand Karlstein (INKA) was built in Karlstein, Germany. The emergency condenser (EC) system transfers heat from the reactor pressure vessel (RPV) to the core flooding pool in case of a water level decrease in the RPV. The EC is composed of a large number of slightly inclined tubes. During accident conditions, steam enters the tubes and condenses due to the contact of the tubes with the cold water on the secondary side. The condensed water then flows back to the RPV under gravity. In this paper, two approaches to modeling condensation in slightly inclined tubes are compared and verified against experiments. The first approach is based on a flow-regime map: depending on the regime, the heat transfer coefficient is calculated according to a specific semi-empirical correlation. The second approach uses a general, fully empirical correlation. The models are developed using the object-oriented Modelica language and the open-source OpenModelica environment. The results are compared with data obtained during a large-scale integral test at INKA simulating a loss-of-coolant accident. The comparison shows good agreement. Due to the modularity of the models, both may be used in future systems incorporating condensation in horizontal or slightly inclined tubes. Depending on their preferences, modellers may choose the single-correlation approach or the more sophisticated model composed of several exchangeable semi-empirical correlations.
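
    The structural difference between the two strategies can be sketched as follows. All correlation bodies here are invented placeholders; the paper's actual correlations are semi-empirical and implemented in Modelica, not Python.

    ```python
    # Structural sketch of the two condensation-modelling strategies compared
    # in the paper: regime-dependent correlations versus one general
    # correlation. Every numeric expression below is a hypothetical stand-in.
    def htc_stratified(g_flux, quality):
        return 4000.0 + 500.0 * quality           # placeholder, W/(m^2 K)

    def htc_annular(g_flux, quality):
        return 6000.0 + 0.05 * g_flux * quality   # placeholder, W/(m^2 K)

    def htc_regime_map(g_flux, quality):
        # First approach: pick a correlation from a (much simplified) map.
        regime = "annular" if g_flux * quality > 50.0 else "stratified"
        return htc_annular(g_flux, quality) if regime == "annular" \
            else htc_stratified(g_flux, quality)

    def htc_general(g_flux, quality):
        # Second approach: one fully empirical correlation for all regimes.
        return 5000.0 * (1.0 + quality) * (g_flux / 100.0) ** 0.3

    for G, x in [(20.0, 0.2), (200.0, 0.8)]:   # mass flux, steam quality
        print(G, x, htc_regime_map(G, x), round(htc_general(G, x), 1))
    ```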

  10. Simulation model of a gear synchronisation unit for application in a real-time HiL environment

    NASA Astrophysics Data System (ADS)

    Kirchner, Markus; Eberhard, Peter

    2017-05-01

    Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real time while still being more realistic than a purely rigid multibody model. The model is therefore even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach: the multibody system approach is suitable for describing the large motions, while the multiscale approach, which also uses the finite-element method, is suitable for analysing the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results at the time-step size of the HiL test rig. To meet this specification, a particularly adapted multirate method for the synchronisation model is shown. Results of the real-time capable synchronisation model are checked for plausibility against test-rig measurements. The simulation model is then also used in the HiL test rig for a transmission control unit.
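
    The multirate idea can be sketched as sub-cycling the fast contact dynamics inside each fixed HiL frame. The spring-damper model and step sizes below are illustrative assumptions, not the paper's synchronisation model.

    ```python
    # Minimal multirate sketch: the stiff (fast) dynamics are sub-stepped
    # inside each larger fixed time-step of the HiL test rig.
    def fast_dynamics(x, v, dt):
        # Stiff spring-damper standing in for the synchroniser contact model.
        k, c = 5.0e4, 50.0
        a = -k * x - c * v
        return x + dt * v, v + dt * a

    def hil_step(x, v, dt_hil=1e-3, n_sub=20):
        # Sub-cycle the fast model n_sub times within one HiL time-step.
        dt_fast = dt_hil / n_sub
        for _ in range(n_sub):
            x, v = fast_dynamics(x, v, dt_fast)
        return x, v

    x, v = 1e-3, 0.0
    for _ in range(10):            # ten HiL frames of 1 ms each
        x, v = hil_step(x, v)
    print(x, v)
    ```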

  11. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using past experience. It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a high-dimensional parameter space and at the same time explore new regions of that space. This self-organizing scheduling system may offer a solution for the effective use of resources for off-line data processing jobs in future HEP experiments.

  12. Web-Based Personalised System of Instruction: An Effective Approach for Diverse Cohorts with Virtual Learning Environments?

    ERIC Educational Resources Information Center

    Rae, Andrew; Samuels, Peter

    2011-01-01

    The Personalised System of Instruction is a form of mastery learning which, though it has been proven to be educationally effective, has never seriously challenged the dominant lecture-tutorial teaching method in higher education and has largely fallen into disuse. An information and communications technology assisted version of the Personalised…

  13. The Predictive Validity of a Gender-Responsive Needs Assessment: An Exploratory Study

    ERIC Educational Resources Information Center

    Salisbury, Emily J.; Van Voorhis, Patricia; Spiropoulos, Georgia V.

    2009-01-01

    Risk assessment and classification systems for women have been largely derived from male-based systems. As a result, many of the needs unique to women are not formally assessed or treated. Emerging research advocating a gender-responsive approach to the supervision and treatment of women offenders suggests that needs such as abuse, mental health,…

  14. Moving Knowledge Around: Strategies for Fostering Equity within Educational Systems

    ERIC Educational Resources Information Center

    Ainscow, Mel

    2012-01-01

    This paper describes and analyses the work of a large scale improvement project in England in order to find more effective ways of fostering equity within education systems. The project involved an approach based on an analysis of local context, and used processes of networking and collaboration in order to make better use of available expertise.…

  15. Solar power satellite: System definition study. Part 1, volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A study of the solar power satellite system, which represents a means of tapping baseload electric utility power from the sun on a large scale, was summarized. Study objectives, approach, and planning are presented along with an energy conversion evaluation. Basic requirements were considered in regard to space transportation, construction, and maintainability.

  16. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. In hybrid systems, however, this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current system mode. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
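
    A toy illustration of qualitative fault isolation by signature matching: each fault predicts qualitative deviations (+, -, 0) of a subset of residuals, and candidates inconsistent with the observations are pruned. The fault and residual names are hypothetical, not those of the paper's circuit.

    ```python
    # Qualitative fault isolation sketch: prune faults whose predicted
    # residual deviations disagree with the observed deviations.
    SIGNATURES = {
        "R1_increase": {"res_voltage": "+", "res_current": "-"},
        "C1_leak":     {"res_voltage": "-", "res_current": "0"},
        "L1_short":    {"res_current": "+"},
    }

    def isolate(observed):
        # Keep faults whose predictions agree with every observed deviation;
        # residuals a fault does not constrain are treated as "don't care".
        candidates = []
        for fault, sig in SIGNATURES.items():
            if all(sig.get(r, dev) == dev for r, dev in observed.items()):
                candidates.append(fault)
        return candidates

    print(isolate({"res_voltage": "+", "res_current": "-"}))  # ['R1_increase']
    print(isolate({"res_current": "+"}))                      # ['L1_short']
    ```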

  17. Model-based Executive Control through Reactive Planning for Autonomous Rovers

    NASA Technical Reports Server (NTRS)

    Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola

    2004-01-01

    This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.

  18. Accounting for intra-molecular vibrational modes in open quantum system description of molecular systems.

    PubMed

    Roden, Jan; Strunz, Walter T; Whaley, K Birgitta; Eisfeld, Alexander

    2012-11-28

    Electronic-vibrational dynamics in molecular systems that interact with an environment involve a large number of degrees of freedom and are therefore often described by means of open quantum system approaches. A popular approach is to include only the electronic degrees of freedom into the system part and to couple these to a non-Markovian bath of harmonic vibrational modes that is characterized by a spectral density. Since this bath represents both intra-molecular and external vibrations, it is important to understand how to construct a spectral density that accounts for intra-molecular vibrational modes that couple further to other modes. Here, we address this problem by explicitly incorporating an intra-molecular vibrational mode together with the electronic degrees of freedom into the system part and using the Fano theory for a resonance coupled to a continuum to derive an "effective" bath spectral density, which describes the contribution of intra-molecular modes. We compare this effective model for the intra-molecular mode with the method of pseudomodes, a widely used approach in simulation of non-Markovian dynamics. We clarify the difference between these two approaches and demonstrate that the respective resulting dynamics and optical spectra can be very different.
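
    Schematically, such an effective spectral density takes a Lorentzian-like resonance form. The expression below is the generic damped-mode result, given here as an illustration rather than a quotation of the paper's derivation.

    ```latex
    % Schematic effective bath spectral density for an intra-molecular mode of
    % frequency $\Omega$, coupled with strength $\lambda$ to the electronic
    % system and damped at rate $\Gamma$ by the residual modes (illustrative
    % form, up to normalization):
    \begin{equation*}
      J_{\mathrm{eff}}(\omega) \;\propto\;
      \frac{\lambda^{2}\,\Gamma\,\omega}
           {\left(\omega^{2}-\Omega^{2}\right)^{2} + \Gamma^{2}\omega^{2}} .
    \end{equation*}
    ```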

  19. Second-order relativistic corrections for the S(L=0) states in one- and two-electron atomic systems

    NASA Astrophysics Data System (ADS)

    Frolov, A. M.; Mitelut, C. C.; Zhong, Z.

    2005-01-01

    An analytical approach is developed to compute the first-order (∼α²) and second-order (∼α⁴) relativistic corrections in one- and two-electron atomic systems. The approach is based on the reduction of all operators to divergent (singular) and nondivergent (regular) parts. We then show that all the divergent parts from the different matrix elements cancel each other. The remaining expression contains only regular operators, and its expectation value can be computed easily. Analysis of the S(L = 0) states in such systems is of specific interest, since the corresponding operators for these states contain a large number of singularities. For one-electron systems the computed relativistic corrections coincide exactly with the appropriate result that follows from the Taylor expansion of the relativistic (i.e., Dirac) energy. We also discuss an alternative approach that allows one to cancel all singularities by using the so-called operator-compensation technique. This second approach is found to be very effective in applications to more complex systems, such as helium-like atoms and ions, H₂⁺-like ions, and some exotic three-body systems.
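
    For reference, the one-electron benchmark mentioned above is the standard expansion of the Dirac energy in powers of (αZ)²; in atomic units it reads (a textbook result, not quoted from the paper):

    ```latex
    % Expansion of the hydrogen-like Dirac energy for principal quantum number
    % $n$ and total angular momentum $j$ (atomic units):
    \begin{equation*}
      E_{nj} \;=\; -\frac{Z^{2}}{2n^{2}}
      \left[\, 1 \;+\; \frac{(\alpha Z)^{2}}{n^{2}}
      \left(\frac{n}{j+\tfrac12}-\frac34\right)
      \;+\; O\!\left((\alpha Z)^{4}\right)\right],
    \end{equation*}
    % so the alpha^2 term (and the next-order alpha^4 term) provides a fixed
    % benchmark against which the singularity-cancellation scheme is checked.
    ```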

  20. Systems biology coupled with label-free high-throughput detection as a novel approach for diagnosis of chronic obstructive pulmonary disease

    PubMed Central

    Richens, Joanna L; Urbanowicz, Richard A; Lunt, Elizabeth AM; Metcalf, Rebecca; Corne, Jonathan; Fairclough, Lucy; O'Shea, Paul

    2009-01-01

    Chronic obstructive pulmonary disease (COPD) is a treatable and preventable disease state, characterised by progressive airflow limitation that is not fully reversible. Although COPD is primarily a disease of the lungs there is now an appreciation that many of the manifestations of disease are outside the lung, leading to the notion that COPD is a systemic disease. Currently, diagnosis of COPD relies on largely descriptive measures to enable classification, such as symptoms and lung function. Here the limitations of existing diagnostic strategies of COPD are discussed and systems biology approaches to diagnosis that build upon current molecular knowledge of the disease are described. These approaches rely on new 'label-free' sensing technologies, such as high-throughput surface plasmon resonance (SPR), that we also describe. PMID:19386108

  1. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.

  2. Quantifying the benefits of a building retrofit using an integrated system approach: A case study

    DOE PAGES

    Regnier, Cynthia; Sun, Kaiyu; Hong, Tianzhen; ...

    2017-11-11

    Building retrofits provide a large opportunity to significantly reduce energy consumption in the buildings sector. Traditional building retrofits focus on equipment upgrades, often at the end of equipment life or failure, and result in replacement with marginally improved similar technology and limited energy savings. The Integrated System (IS) retrofit approach enables much greater energy savings by leveraging interactive effects between end use systems, enabling downsized or lower energy technologies. This work presents a case study in Hawaii quantifying the benefits of an IS retrofit approach compared to two traditional retrofit approaches: a Standard Practice of upgrading equipment to meet minimum code requirements, and an Improved Practice of upgrading equipment to a higher efficiency. The IS approach showed an energy savings of 84% over existing building energy use, much higher than the traditional approaches of 13% and 33%. The IS retrofit also demonstrated the greatest energy cost savings potential. While the degree of savings realized from the IS approach will vary by building and climate, these findings indicate that savings on the order of 50% and greater are not possible without an IS approach. It is therefore recommended that the IS approach be universally adopted to achieve deep energy savings.

  4. System implications of large radiometric array antennas

    NASA Technical Reports Server (NTRS)

    Levis, C. A.; Lin, H. C.

    1976-01-01

    Current radiometric earth and atmospheric sensing systems in the centimeter wavelength range generally employ a directive antenna connected through a single terminal pair to a Dicke receiver. It is shown that this approach does not lend itself to systems with greatly increased spatial resolution. Signal to noise considerations relating to antenna efficiency force the introduction of active elements at the subarray level; thus, if Dicke switching is to be used, it must be distributed throughout the system. Some possible approaches are suggested. The introduction of active elements at the subarray level is found to ease the design constraints on time delay elements, necessary for bandwidth, and on multiple beam generation, required in order to achieve sufficient integration time with high resolution.

  5. Advanced Docking System With Magnetic Initial Capture

    NASA Technical Reports Server (NTRS)

    Lewis, James L.; Carroll, Monty B.; Morales, Ray; Le, Thang

    2004-01-01

    An advanced docking system is undergoing development to enable softer, safer docking than was possible when using prior docking systems. This system is intended for original use in docking of visiting spacecraft and berthing the Crew Return Vehicle at the International Space Station (ISS). The system could also be adapted to a variety of other uses in outer space and on Earth, including mating submersible vehicles, assembling structures, and robotic berthing/handling of payloads and cargo. Heretofore, two large spacecraft have been docked by causing the spacecraft to approach each other at a speed sufficient to activate capture latches - a procedure that results in large docking loads and is made more difficult because of the speed. The basic design and mode of operation of the present advanced docking system would eliminate the need to rely on speed of approach to activate capture latches, thereby making it possible to reduce approach speed and thus docking loads substantially. The system would comprise an active subsystem on one spacecraft and a passive subsystem on another spacecraft with which the active subsystem will be docked. The passive subsystem would include an extensible ring containing magnetic striker plates and guide petals. The active subsystem would include mating guide petals and electromagnets containing limit switches and would be arranged to mate with the magnetic striker plates and guide petals of the passive assembly. The electromagnets would be carried on (but not rigidly attached to) a structural ring that would be instrumented with load sensors. The outputs of the sensors would be sent, along with position information, as feedback to an electronic control subsystem. The system would also include electromechanical actuators that would extend or retract the ring upon command by the control subsystem.

  6. Escorting commercial aircraft to reduce the MANPAD threat

    NASA Astrophysics Data System (ADS)

    Hock, Nicholas; Richardson, M. A.; Butters, B.; Walmsley, R.; Ayling, R.; Taylor, B.

    2005-11-01

    This paper studies the Man-Portable Air Defence System (MANPADS) threat against large commercial aircraft using flight profile analysis, engagement modelling and simulation. Commercial aircraft not equipped with countermeasures are at risk during approach and departure due to the large areas around airports that would need to be secured to prevent the use of highly portable and concealable MANPADs. A software model (CounterSim) has been developed and was used to simulate an engagement between an SA-7b and a large commercial aircraft. The results of this simulation show that the threat is lessened when an escort fighter aircraft is flown in the 'Centreline Low' position, 25 m rearward of the large aircraft and 15 m lower, similar to the air-to-air refuelling position. In the model, a large aircraft on approach had a 50% chance of being hit or suffering a near miss (within 20 m), whereas when escorted by a countermeasure-equipped F-16 in the 'Centreline Low' position this was reduced to only 14%. Departure is a particularly vulnerable time for large aircraft due to slow climb rates and the inability to fly evasive manoeuvres. The 'Centreline Low' escorted departure greatly reduced the threat, to 16% hit or near miss from 62% for an unescorted heavy aircraft. Overall, the CounterSim modelling showed that escorting a civilian aircraft on approach and departure can reduce the MANPAD threat by a factor of 3 to 4.

  7. The Large Marine Ecosystem Approach for 21st Century Ocean Health and International Sustainable Development

    NASA Astrophysics Data System (ADS)

    Honey, K. T.

    2014-12-01

    The global coastal ocean and watersheds are divided into 66 Large Marine Ecosystems (LMEs), which encompass regions from river basins, estuaries, and coasts to the seaward boundaries of continental shelves and the margins of major currents. Approximately 80% of global fisheries catch comes from LME waters. Ecosystem goods and services from LMEs contribute an estimated US$18-25 trillion annually to the global economy in market and non-market value. The critical importance of these large-scale systems, however, is threatened by human populations and pressures, including climate change. Fortunately, there is pragmatic reason for optimism. Interdisciplinary frameworks exist, such as the Large Marine Ecosystem (LME) approach for adaptive management, that can integrate both nature-centric and human-centric views into ecosystem monitoring, assessment, and adaptive management practices for long-term sustainability. Originally proposed almost 30 years ago, the LME approach rests on five modules: (i) productivity, (ii) fish and fisheries, (iii) pollution and ecosystem health, (iv) socioeconomics, and (v) governance, applied to iterative adaptive management at a large, international scale of 200,000 km² or greater. The Global Environment Facility (GEF), World Bank, and United Nations agencies recognize and support the LME approach, as evidenced by over US$3.15 billion in financial assistance to date for LME projects. The year 2014 is an exciting milestone in LME history, marking 20 years since United Nations and GEF organizations adopted LMEs as a unit for ecosystem-based approaches to management. The LME approach, however, is not perfect, nor is it immutable. Like the adaptive management framework it proposes, the LME approach itself must adapt to new and emerging 21st century technologies, science, and realities, and must further consider socioeconomics and governance. Within the socioeconomics module alone, several trillion-dollar opportunities exist for interdisciplinary integration with best practices in: (i) water-energy nexus infrastructure; (ii) responsible tourism; and (iii) open data innovations.

  8. A new practice-driven approach to develop software in a cyber-physical system environment

    NASA Astrophysics Data System (ADS)

    Jiang, Yiping; Chen, C. L. Philip; Duan, Junwei

    2016-02-01

    Cyber-physical systems (CPS) are an emerging area and cannot work efficiently without proper software handling of the data and business logic. Software and middleware are the soul of the CPS. The software development of CPS is a critical issue because of its complexity in a large-scale realistic system. Furthermore, the object-oriented approach (OOA) often used to develop CPS software needs some improvement to suit the characteristics of CPS. To develop software in a CPS environment, a new systematic approach is proposed in this paper. It comes from practice and has evolved within software companies. It consists of (A) requirement analysis in an event-oriented way, (B) architecture design in a data-oriented way, (C) detailed design and coding in an object-oriented way, and (D) testing in an event-oriented way. It is a new approach based on OOA; the difference is that the proposed approach has different emphases and measures at every stage, which better suits the characteristics of event-driven CPS. In CPS software development, one should focus on the events more than on the functions or objects. A case study of a smart home system is designed to reveal the effectiveness of the approach. It shows that the approach is also easy to apply in practice owing to some simplifications. The running result illustrates the validity of this approach.

  9. Using model based systems engineering for the development of the Large Synoptic Survey Telescope's operational plan

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Willman, Beth; Petravick, Don; Johnson, Margaret; Reil, Kevin; Marshall, Stuart; Thomas, Sandrine; Lotz, Paul; Schumacher, German; Lim, Kian-Tat; Jenness, Tim; Jacoby, Suzanne; Emmons, Ben; Axelrod, Tim

    2016-08-01

    We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they produce self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement, using the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.

  10. The NASA Space Launch System Program Systems Engineering Approach for Affordability

    NASA Technical Reports Server (NTRS)

    Hutt, John J.; Whitehead, Josh; Hanson, John

    2017-01-01

    The National Aeronautics and Space Administration is currently developing the Space Launch System (SLS) to provide the United States with a capability to launch large payloads into low Earth orbit and deep space. One of the development tenets of the SLS Program is affordability. One initiative to enhance affordability is the SLS approach to requirements definition, verification, and system certification. The key aspects of this initiative include: 1) minimizing the number of requirements, 2) eliminating explicit verification requirements, 3) using certified models of subsystem capability in lieu of requirements when appropriate, and 4) certifying capability beyond the minimum required capability. The implementation of each aspect is described and compared to a "typical" systems engineering implementation, including a discussion of relative risk. Examples of each implementation within the SLS Program are provided.

  11. Human factors in technology replacement: a case study in interface design for a public transport monitoring system.

    PubMed

    Harper, J G; Fuller, R; Sweeney, D; Waldmann, T

    1998-04-01

    This paper describes ergonomic issues raised during a project to provide a replacement real-time bus route control system to a large public transport company. Task and system analyses highlighted several deficiencies in the original system architecture, the human-machine interfaces, and the general approach to system management. The eventual live prototype replaced the existing system for a trial evaluation period of several weeks. During this period a number of studies were conducted with the system users in order to measure any improvements the new system, with its ergonomic features, produced over the old. Importantly, the results confirmed that (a) general responsiveness and service quality were improved, and (b) users were more comfortable with the new design. We conclude with a number of caveats which we believe will be useful to any group addressing technology impact in a large organisation.

  12. An approach to addressing governance from a health system framework perspective

    PubMed Central

    2011-01-01

    As countries strive to strengthen their health systems in resource-constrained contexts, policy makers need to know how best to improve the performance of their health systems. To aid these decisions, health system stewards should have a good understanding of how health systems operate in order to govern them appropriately. While a number of frameworks for assessing governance in the health sector have been proposed, their application is often hindered by unrealistic indicators or excessive complexity, resulting in limited empirical work on governance in health systems. This paper reviews contemporary health sector frameworks which have focused on defining and developing indicators to assess governance in the health sector. Based on these, we propose a simplified approach that examines governance within a common health system framework and encourages stewards to take a systematic perspective when assessing governance. Although systems thinking is not unique to health, examples of its application within health systems have been limited. We also provide an example of how this approach could be applied to illuminate areas of governance weakness which are potentially addressable by targeted interventions and policies. This approach builds largely on prior literature, but is original in that it is problem-driven and promotes an outward application taking into consideration the major health system building blocks at various levels, in order to ensure a more complete assessment of a governance issue rather than a simple input-output approach. Based on an assessment of contemporary literature we propose a practical approach which we believe will facilitate a more comprehensive assessment of governance in health systems, leading to the development of governance interventions to strengthen system performance and improve health as a basic human right. PMID:22136318

  13. Segmentation and additive approach: A reliable technique to study noncovalent interactions of large molecules at the surface of single-wall carbon nanotubes.

    PubMed

    Torres, Ana M; Scheiner, Steve; Roy, Ajit K; Garay-Tapia, Andrés M; Bustamante, John; Kar, Tapas

    2016-08-05

    This investigation explores a new protocol, named the Segmentation and Additive approach (SAA), to study the exohedral noncovalent functionalization of single-walled carbon nanotubes with large molecules, such as polymers and biomolecules, by segmenting the entire system into smaller units to reduce computational cost. A key criterion of the segmentation process is the preservation, in the smaller segments, of the molecular structure responsible for stabilization of the entire system. Noncovalent interaction of linoleic acid (LA, C18H32O2), a fatty acid, at the surface of a (10,0) zigzag nanotube is considered for test purposes. Three smaller segmented models were created from the full (10,0)-LA system, and interaction energies were calculated for these models and compared with the full system at different levels of theory, namely ωB97XD and LDA. The success of the SAA is confirmed, as the sum of the segment interaction energies is in very good agreement with the total interaction energy. Besides reducing computational cost, another merit of SAA is an estimate of the contribution of different sections of the large system to the total interaction energy, each of which can be studied in depth using a higher level of theory to estimate several properties of each segment. On the negative side, bulk properties of the entire system, such as the HOMO-LUMO (highest occupied molecular orbital to lowest unoccupied molecular orbital) gap, cannot be estimated by adding results from segment models. © 2016 Wiley Periodicals, Inc.
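
    The additivity at the core of SAA can be stated compactly (notation ours, not the paper's):

    ```latex
    % Interaction energy of the full tube-molecule system approximated by the
    % sum over the N segment models:
    \begin{equation*}
      E_{\mathrm{int}}^{\mathrm{full}} \;\approx\; \sum_{k=1}^{N} E_{\mathrm{int}}^{(k)},
      \qquad
      E_{\mathrm{int}} = E_{\mathrm{complex}} - E_{\mathrm{tube}} - E_{\mathrm{molecule}} .
    \end{equation*}
    ```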

  14. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
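
    Unit selection of this kind is typically cast as a dynamic-programming search minimizing a target cost plus a join cost between consecutive units. The sketch below uses a toy inventory and duration-based costs as stand-ins for the paper's motion-capture units and costs.

    ```python
    # Viterbi-style unit selection sketch: pick one recorded unit per target
    # position, minimizing target cost plus join cost to the previous unit.
    INVENTORY = {
        "AA": [{"id": "AA_1", "dur": 0.09}, {"id": "AA_2", "dur": 0.12}],
        "B":  [{"id": "B_1",  "dur": 0.05}, {"id": "B_2",  "dur": 0.07}],
    }

    def target_cost(unit, want_dur):
        # Mismatch between desired and recorded unit duration (toy cost).
        return abs(unit["dur"] - want_dur)

    def join_cost(prev, unit):
        # Toy smoothness penalty; a real system compares lip-motion frames.
        return 0.0 if prev is None else abs(prev["dur"] - unit["dur"])

    def select_units(targets):
        prev_layer = [(None, 0.0, [])]        # (unit, path cost, chosen ids)
        for phon, want_dur in targets:
            layer = []
            for unit in INVENTORY[phon]:
                cost, seq = min(
                    (c + target_cost(unit, want_dur) + join_cost(p, unit), s)
                    for p, c, s in prev_layer
                )
                layer.append((unit, cost, seq + [unit["id"]]))
            prev_layer = layer
        best = min(prev_layer, key=lambda t: t[1])
        return best[1], best[2]               # total cost, unit id sequence

    print(select_units([("B", 0.06), ("AA", 0.10)]))
    ```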

  15. Inertial Manifold and Large Deviations Approach to Reduced PDE Dynamics

    NASA Astrophysics Data System (ADS)

    Cardin, Franco; Favretti, Marco; Lovison, Alberto

    2017-09-01

    In this paper a certain type of reaction-diffusion equation, similar to the Allen-Cahn equation, is the starting point for setting up a genuine thermodynamic reduction, i.e. one involving a finite number of parameters or collective variables of the initial system. We first perform a finite Lyapunov-Schmidt reduction of the cited reaction-diffusion equation when reformulated as a variational problem. In this way we gain a finite-dimensional ODE description of the initial system which preserves the gradient structure of the original one and which is exact in the static case but only approximate in the dynamic case. Our main concern is how to deal with this approximate reduced description of the initial PDE. To start with, we note that our approximate reduced ODE is similar to the approximate inertial manifold introduced by Temam and coworkers for the Navier-Stokes equations. As a second approach, we take into account the uncertainty (loss of information) introduced by the above-mentioned approximate reduction by considering the stochastic version of the ODE. We study this reduced stochastic system using classical tools from large deviations, viscosity solutions and weak KAM Hamilton-Jacobi theory. In the last part we suggest a possible use of a result of our approach within the comprehensive treatment of non-equilibrium thermodynamics given by Macroscopic Fluctuation Theory.

  16. A wave shaping approach of ferrite inductors exhibiting hysteresis using orthogonal field bias

    NASA Astrophysics Data System (ADS)

    Adly, A. A.; Abd-El-Hafiz, S. K.; Mahgoub, A. O.

    2018-05-01

    Advances in power electronic systems have considerably contributed to a wide spectrum of applications. In most power electronic circuits, inductors play crucial functions. Utilization of ferrite cores becomes a must when large inductances are required. Nevertheless, this results in an additional complexity due to their hysteresis nature. Recently, an efficient approach for modeling vector hysteresis using tri-node Hopfield neural networks (HNNs) has been introduced. This paper presents a wave shaping approach using hollow cylindrical ferrite core inductors having axial and toroidal windings. The approach investigates the possibility of tuning the inductor permeability to minimize circuit harmonics. Details of the approach are given in the paper.

  17. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into the elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model, trace requirements to deliverables (source code, documentation, etc.), and implement a flexible, use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.

  18. Functional renormalization group approach to SU(N ) Heisenberg models: Real-space renormalization group at arbitrary N

    NASA Astrophysics Data System (ADS)

    Buessen, Finn Lasse; Roscher, Dietrich; Diehl, Sebastian; Trebst, Simon

    2018-02-01

    The pseudofermion functional renormalization group (pf-FRG) is one of the few numerical approaches that has been demonstrated to quantitatively determine the ordering tendencies of frustrated quantum magnets in two and three spatial dimensions. The approach, however, relies on a number of presumptions and approximations, in particular the choice of pseudofermion decomposition and the truncation of an infinite number of flow equations to a finite set. Here we generalize the pf-FRG approach to SU(N) spin systems with arbitrary N and demonstrate that the scheme becomes exact in the large-N limit. Numerically solving the generalized real-space renormalization group equations for arbitrary N, we can make a stringent connection between the physically most significant case of SU(2) spins and more accessible SU(N) models. In a case study of the square-lattice SU(N) Heisenberg antiferromagnet, we explicitly demonstrate that the generalized pf-FRG approach is capable of identifying the instability indicating the transition into a staggered flux spin liquid ground state in these models for large, but finite, values of N. In a companion paper [Roscher et al., Phys. Rev. B 97, 064416 (2018), 10.1103/PhysRevB.97.064416] we formulate a momentum-space pf-FRG approach for SU(N) spin models that allows us to explicitly study the large-N limit and access the low-temperature spin liquid phase.

  19. Operative planning of functional sessions for multisatellite observation and communication systems

    NASA Astrophysics Data System (ADS)

    Darnopykh, Valeriy V.; Malyshev, Veniamin V.

    2012-04-01

    An important aspect of modern satellite observation and communication systems is the control of their functional processes. Functional sessions proceed under conditions of restricted technical capability, large amounts of information to be processed by the on-board equipment, the unequal practical value of the received information, the intentions of system management and operators, the interests of customers, and other factors. A large number of spacecraft (SC) in the orbital constellation is also one of the most important factors affecting the functional process. Moreover, some modern satellite system projects are multifunctional, that is, they mix observation and communication operations, so the functioning of the SC on-board equipment must be accurately coordinated. The problem of operative planning for these systems, which directly affects system efficiency, is therefore both complex and topical. A methodical approach and software package for operative planning of functional processes for satellite observation and communication systems, including multifunctional projects, are considered in the paper. The basic scheme of this approach consists of four main stages: stage 1, modeling of SC orbital kinematics and dynamics; stage 2, modeling of system functional processes with all kinds of restrictions and criterion function values; stage 3, solving the optimization tasks with applicable numerical algorithms and constructing optimal (or near-optimal) plans; and stage 4, repeated plan optimization (different variants) and analysis. This scheme is the result of the authors' practical research over the last 15 years on operative planning, both for single SC of various kinds and for satellite systems with different orbital constellation structures. The research has helped to unify the procedure of operative planning, to formulate basic principles and approaches for solving it, and to develop a special software package. The main aspects of the proposed approach are illustrated in the paper, and the results of calculations for applied planning problems are presented. The objects of research in these problems are the CBERS observation system projects (1-3 SC) and the Iridium global communication system project (66 SC).

  20. A preliminary look at control augmented dynamic response of structures

    NASA Technical Reports Server (NTRS)

    Ryan, R. S.; Jewell, R. E.

    1983-01-01

    The augmentation of structural characteristics (mass, damping, and stiffness) through the use of control theory, in lieu of structural redesign or augmentation, is reported. Treatment of the standard single-degree-of-freedom system is followed by a treatment of the same system using control augmentation. The analysis is then extended to elastic structures using single- and multi-sensor approaches, and concludes with a brief discussion of potential applications to large orbiting space structures.

  1. DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites

    DTIC Science & Technology

    2014-10-01

    considering new approaches. According to Air Force Space Command, U.S. space systems face intentional and unintentional threats, which have increased ... life cycle costs. • Demand for more satellites may stimulate new entrants and competition to lower acquisition costs. • Smaller, less complex ... Fiscal constraints and growing threats to space systems have led DOD to consider alternatives for acquiring space-based capabilities, including

  2. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
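
    The dependency structure that defines wave-front processing can be sketched in a few lines: on a 2D grid, each cell depends on its upstream neighbours, so all cells on one anti-diagonal are mutually independent and form the parallel work of one step. The grid update rule below is an arbitrary stand-in for the application kernel; the hierarchical variant in the paper additionally groups diagonals so that slower off-chip communication happens less often.

    ```python
    # Wave-front sweep sketch: cell (i, j) needs (i-1, j) and (i, j-1), so
    # anti-diagonals are processed in order and are each internally parallel.
    import numpy as np

    def wavefront_sweep(n):
        grid = np.zeros((n, n))
        grid[0, :] = 1.0      # boundary data
        grid[:, 0] = 1.0
        for d in range(2, 2 * n - 1):   # anti-diagonals in dependency order
            for i in range(max(1, d - n + 1), min(d, n)):
                j = d - i
                # Each (i, j) on diagonal d is independent: a parallel slot.
                grid[i, j] = 0.5 * (grid[i - 1, j] + grid[i, j - 1])
        return grid

    print(wavefront_sweep(5))
    ```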

  3. Redesign of the Stabilized Pitch Control System of a Semi-Active Terminal Homing Missile System.

    DTIC Science & Technology

    1979-04-20

    ... AIEE Trans. Application and Industry, pp. 65-77, May 1961. [3] L. S. Shieh, "An Algebraic Approach to System Identification and Compensator Design ... "A Quick Method for Estimating Closed-Loop Poles of Control Systems," Trans. AIEE, Applications and Industry, Vol. 76, pp. 80-87, May 1957. [10] C ... Mathematical and Statistical Library). [16] C. J. Huang and L. S. Shieh, "Modeling Large Dynamical Systems with Industrial Specifications," Int. J

  4. Conservation of design knowledge. [of large complex spaceborne systems

    NASA Technical Reports Server (NTRS)

    Sivard, Cecilia; Zweben, Monte; Cannon, David; Lakin, Fred; Leifer, Larry

    1989-01-01

    This paper presents an approach for acquiring knowledge about a design during the design process. The objective is to increase the efficiency of the lifecycle management of a space-borne system by providing operational models of the system's structure and behavior, as well as the design rationale, to human and automated operators. A design knowledge acquisition system is under development that compares how two alternative design versions meet the system requirements as a means for automatically capturing rationale for design changes.

  5. WAATS: A computer program for Weights Analysis of Advanced Transportation Systems

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.

    1974-01-01

    A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed, and sufficient data are presented to estimate weights for a large spectrum of flight vehicles, including horizontal and vertical takeoff aircraft, boosters and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems), embracing the techniques discussed, has been written, and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration) system.
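
    Historical weight estimation of this kind usually reduces to fitting power-law weight estimating relationships to data from past vehicles and applying them to a new design. A sketch under invented data (our illustration, not WAATS itself):

    ```python
    # Fit a power-law weight estimating relationship W = a * x**b to
    # (fabricated) historical data, via least squares in log-log space.
    import numpy as np

    # Hypothetical history: characteristic size x vs. subsystem weight W.
    x = np.array([10.0, 20.0, 40.0, 80.0])
    W = np.array([120.0, 260.0, 610.0, 1400.0])

    # log W = log a + b log x  ->  linear least squares.
    b, log_a = np.polyfit(np.log(x), np.log(W), 1)
    a = np.exp(log_a)

    print(f"W ~ {a:.1f} * x^{b:.2f}")
    print("predicted weight at x=60:", a * 60.0 ** b)
    ```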

  6. Large area high-speed metrology SPM system.

    PubMed

    Klapetek, P; Valtr, M; Picco, L; Payton, O D; Martinek, J; Yacoot, A; Miles, M

    2015-02-13

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm² regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired, images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.

  8. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
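
    The static dependence analysis mentioned for the triangular solve is commonly realized as level scheduling: unknowns are grouped into levels such that everything within a level depends only on earlier levels and can be solved in parallel. A small sketch (our illustration, not the paper's Sequent implementation):

    ```python
    # Level-scheduled sparse lower-triangular solve sketch for L x = b.
    import numpy as np

    def levels_of(L):
        # Static analysis: level(i) = 1 + max level of the unknowns i needs.
        n = L.shape[0]
        level = np.zeros(n, dtype=int)
        for i in range(n):
            deps = [j for j in range(i) if L[i, j] != 0.0]
            level[i] = 1 + max((level[j] for j in deps), default=-1)
        return level

    def level_scheduled_solve(L, b):
        n = L.shape[0]
        x = np.zeros(n)
        level = levels_of(L)
        for lev in range(level.max() + 1):
            # All i in this level depend only on earlier levels: parallel slot.
            for i in np.where(level == lev)[0]:
                x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    L = np.array([[2.0, 0.0, 0.0, 0.0],
                  [1.0, 3.0, 0.0, 0.0],
                  [0.0, 0.0, 4.0, 0.0],
                  [0.0, 2.0, 1.0, 5.0]])
    b = np.array([2.0, 5.0, 8.0, 11.0])
    print(level_scheduled_solve(L, b), np.linalg.solve(L, b))  # should agree
    ```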

  9. A hybrid algorithm for parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  10. Systems identification technology development for large space systems

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1982-01-01

    A methodology for synthesizing systems identification (both parameter and state estimation) and related control schemes for flexible aerospace structures is developed, with emphasis on the Maypole hoop column antenna as a real-world application. Modeling studies of the Maypole cable hoop membrane type antenna are conducted using a transfer matrix numerical analysis approach. This methodology was chosen as particularly well suited to handling a large number of antenna configurations of a generic type. A dedicated transfer matrix analysis, both by virtue of its specialization and the inherently easy compartmentalization of its formulation and numerical procedures, is significantly more efficient, not only in the computer time required but, more importantly, in the time needed to review and interpret the results.
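
    The transfer matrix idea reduces such chained structural analyses to matrix products: the state vector (e.g., displacement and internal force) at one end of an element is a fixed linear map of the state at the other end, so a chain of elements is analyzed by multiplying element matrices. A toy sketch with an illustrative 2x2 spring element, not the Maypole antenna model:

        import numpy as np

        def chain_transfer(elements):
            # The chain's transfer matrix is the ordered product of its elements.
            T = np.eye(2)
            for Ti in elements:
                T = Ti @ T
            return T

        k = 100.0                                   # toy spring stiffness
        T_spring = np.array([[1.0, 1.0 / k],        # u2 = u1 + F/k
                             [0.0, 1.0]])           # F2 = F1
        print(chain_transfer([T_spring] * 3))       # three identical elements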

  11. Adaptive/learning control of large space structures - System identification techniques. [for multi-configuration flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Montgomery, R. C.

    1980-01-01

    Techniques developed for the control of aircraft under changing operating conditions are used to develop a learning control system structure for a multi-configuration, flexible space vehicle. A configuration identification subsystem that is to be used with a learning algorithm and a memory and control process subsystem is developed. This learning approach achieves adaptive gain adjustments without prestoring large blocks of parameter data and without dither signal inputs, which must be suppressed during operations with which they are incompatible. The Space Shuttle Solar Electric Propulsion (SEP) experiment is used as a sample problem for testing the adaptive/learning control system algorithms.

  12. A human factors approach to range scheduling for satellite control

    NASA Technical Reports Server (NTRS)

    Wright, Cameron H. G.; Aitken, Donald J.

    1991-01-01

    Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.

  13. Contemplative Neuroscience as an Approach to Volitional Consciousness

    NASA Astrophysics Data System (ADS)

    Thompson, Evan

    This chapter presents a methodological approach to volitional consciousness for cognitive neuroscience based on studying the voluntary self-generation and self-regulation of mental states in meditation. Called contemplative neuroscience, this approach views attention, awareness, and emotion regulation as flexible and trainable skills, and works with experimental participants who have undergone training in contemplative practices designed to hone these skills. Drawing from research on the dynamical neural correlates of contemplative mental states and theories of large-scale neural coordination dynamics, I argue for the importance of global system causation in brain activity and present an "interventionist" approach to intentional causation.

  14. System analysis of automated speed enforcement implementation.

    DOT National Transportation Integrated Search

    2016-04-01

    Speeding is a major factor in a large proportion of traffic crashes, injuries, and fatalities in the United States. Automated Speed Enforcement (ASE) is one of many approaches shown to be effective in reducing speeding violations and crashes. However...

  15. Crop water productivity and irrigation management

    USDA-ARS?s Scientific Manuscript database

    Modern irrigation systems offer large increases in crop water productivity compared with rainfed or gravity irrigation, but require different management approaches to achieve this. Flood, sprinkler, low-energy precision application, LEPA, and subsurface drip irrigation methods vary widely in water a...

  16. Evolving from bioinformatics in-the-small to bioinformatics in-the-large.

    PubMed

    Parker, D Stott; Gorlick, Michael M; Lee, Christopher J

    2003-01-01

    We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and they also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
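
    The make analogy in miniature: targets, dependences, and actions form a directed acyclic graph that is evaluated in topological order. A hedged sketch with hypothetical bioinformatics file names, not the authors' system:

        from graphlib import TopologicalSorter

        # target: (dependences, action) -- a make rule in dictionary form
        rules = {
            "alignment.bam": (["reads.fq", "genome.fa"], "run aligner"),
            "variants.vcf":  (["alignment.bam"],         "call variants"),
            "report.html":   (["variants.vcf"],          "render report"),
        }

        def build(rules):
            # Evaluate targets so that every dependence is built first.
            graph = {t: set(deps) for t, (deps, _) in rules.items()}
            for target in TopologicalSorter(graph).static_order():
                if target in rules:                 # plain sources have no rule
                    deps, action = rules[target]
                    print(f"{target}: {action} (needs {', '.join(deps)})")

        build(rules)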

  17. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation when fitting models in a large number of factors. Noise modeling techniques are compared, and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study was developed in collaboration with Allison Engine Company, Rolls-Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.

  18. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.
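
    One way to picture the fault tolerance described: tasks are pulled from a queue, and a task lost to a spot-instance termination is simply re-queued, which is safe when tasks are idempotent. A hypothetical sketch only; HySDS's actual job management is not shown here:

        import queue
        import random

        def run_on_spot(task):
            # Stand-in for a worker that may be reclaimed mid-task.
            if random.random() < 0.2:              # simulated spot termination
                raise InterruptedError("spot instance reclaimed")
            return f"{task} done"

        work = queue.Queue()
        for t in ["granule-1", "granule-2", "granule-3"]:
            work.put(t)

        while not work.empty():
            task = work.get()
            try:
                print(run_on_spot(task))
            except InterruptedError:
                work.put(task)                     # idempotent tasks retry safely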

  19. Constraints and System Primitives in Achieving Multilevel Security in Real Time Distributed System Environment

    DTIC Science & Technology

    1994-04-18

    because they represent a microkernel and a monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating...the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution since monolithic operating...are provided by 3 user-level processes, in contrast to standard UNIX, which has a large monolithic kernel that pro...Distributed Operating

  20. A Benchmark for Comparing Different Approaches for Specifying and Verifying Real-Time Systems

    DTIC Science & Technology

    1993-01-01

    To be considered correct or useful, real-time systems must deliver results within specified time intervals, either without exception or with high...probability. Recently, a large number of formal methods have been invented for specifying and verifying real-time systems. It has been suggested that...these formal methods need to be tested out on actual real-time systems. Such testing will allow the scalability of the methods to be assessed and also

  1. Intelligent, Self-Diagnostic Thermal Protection System for Future Spacecraft

    NASA Technical Reports Server (NTRS)

    Hyers, Robert W.; SanSoucie, Michael P.; Pepyne, David; Hanlon, Alaina B.; Deshmukh, Abhijit

    2005-01-01

    The goal of this project is to provide self-diagnostic capabilities to the thermal protection systems (TPS) of future spacecraft. Self-diagnosis is especially important in thermal protection systems (TPS), where large numbers of parts must survive extreme conditions after weeks or years in space. In-service inspections of these systems are difficult or impossible, yet their reliability must be ensured before atmospheric entry. In fact, TPS represents the greatest risk factor after propulsion for any transatmospheric mission. The concepts and much of the technology would be applicable not only to the Crew Exploration Vehicle (CEV), but also to ablative thermal protection for aerocapture and planetary exploration. Monitoring a thermal protection system on a Shuttle-sized vehicle is a daunting task: there are more than 26,000 components whose integrity must be verified with very low rates of both missed faults and false positives. The large number of monitored components precludes conventional approaches based on centralized data collection over separate wires; a distributed approach is necessary to limit the power, mass, and volume of the health monitoring system. Distributed intelligence with self-diagnosis further improves capability, scalability, robustness, and reliability of the monitoring subsystem. A distributed system of intelligent sensors can provide an assurance of the integrity of the system, diagnosis of faults, and condition-based maintenance, all with provable bounds on errors.

  2. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

    Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, incorporating all of these capabilities. This is accomplished by the use of scheduling agents, which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
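
    The scheduling-agent idea can be pictured as a small object living in the application's run-time environment that applies whatever policy the application needs; the sketch below uses earliest-deadline-first as an illustrative policy (R-Shell's actual interfaces are not reproduced):

        import heapq

        class EDFAgent:
            """A toy scheduling agent applying earliest-deadline-first."""

            def __init__(self):
                self.ready = []                    # (deadline, name) min-heap

            def submit(self, name, deadline):
                heapq.heappush(self.ready, (deadline, name))

            def dispatch(self):
                # Always run the task whose deadline is nearest.
                while self.ready:
                    deadline, name = heapq.heappop(self.ready)
                    print(f"run {name} (deadline {deadline})")

        agent = EDFAgent()
        agent.submit("telemetry", deadline=30)
        agent.submit("attitude", deadline=10)
        agent.dispatch()                           # attitude first, then telemetry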

  3. Highly stable families of soliton molecules in fiber-optic systems

    NASA Astrophysics Data System (ADS)

    Moubissi, A.-B.; Tchofo Dinda, P.; Nse Biyoghe, S.

    2018-04-01

    We develop an efficient approach to the design of families of single solitons and soliton molecules most suited to a given fiber system. The obtained solitonic entities exhibit very high stability, with a robustness which allows them to propagate over thousands of kilometers and to survive collisions with other solitonic entities. Our approach enables the generation of a large number of solitonic entities, including families of single solitons and two-soliton molecules, which can be distinguished sufficiently by their respective profiles or energy levels, and so can be easily identifiable and detectable without ambiguity. We discuss the possible use of such solitonic entities as symbols of a multi-level modulation format in fiber-optic communication systems.

  4. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to modeling geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  5. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    Abstract. The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
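
    The channelized Hotelling observer itself is compact enough to sketch: images are reduced to a few channel outputs, the Hotelling template is formed from the channel means and pooled covariance, and performance is read off the test statistics. A textbook-style sketch with random stand-in channels, not the paper's validated channel set or shuffle estimator:

        import numpy as np

        def cho_auc(signal_imgs, noise_imgs, U):
            vs = signal_imgs @ U                   # channelize: one row per image
            vn = noise_imgs @ U
            S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))        # pooled covariance
            w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))  # Hotelling template
            ts, tn = vs @ w, vn @ w                # decision variables
            # AUC via the Mann-Whitney statistic over all pairs
            return (ts[:, None] > tn[None, :]).mean()

        rng = np.random.default_rng(0)
        U = rng.standard_normal((64, 5))           # stand-in for real channels
        noise = rng.standard_normal((200, 64))
        signal = rng.standard_normal((200, 64)) + 0.5   # crude signal-present set
        print(f"AUC ~ {cho_auc(signal, noise, U):.2f}")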

  6. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  7. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed here is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  8. Scope of Various Random Number Generators in ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are tested experimentally on a heuristic based on an ant system approach for the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is mainly to seek an answer to the controversial issue "which generator is the best in terms of quality of the result (accuracy) as well as cost of producing the result (time/computational complexity) in a probabilistic/statistical sense."
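
    One ant-system tour construction with the random number generator exposed as a parameter, the quantity under test in this record. Parameter values are illustrative, and NumPy's default generator stands in for the quasi-/pseudo-random generators being compared:

        import numpy as np

        def ant_tour(dist, tau, rng, alpha=1.0, beta=2.0):
            # Build one tour: the next city is drawn with probability
            # proportional to pheromone^alpha * (1/distance)^beta.
            n = len(dist)
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = np.array([tau[i, j] ** alpha * (1.0 / dist[i, j]) ** beta
                              for j in cand])
                tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
            return tour

        rng = np.random.default_rng(42)            # swap in another generator here
        n = 6
        xy = rng.random((n, 2))
        dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) + np.eye(n)
        print(ant_tour(dist, np.ones((n, n)), rng))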

  9. Treatment wetlands in decentralised approaches for linking sanitation to energy and food security.

    PubMed

    Langergraber, Guenter; Masi, Fabio

    2018-02-01

    Treatment wetlands (TWs) are engineered systems that mimic the processes in natural wetlands with the purpose of treating contaminated water. Being a simple and robust technology, TWs are applied worldwide to treat various types of water. Besides treated water for reuse, TWs can be used in resources-oriented sanitation systems for recovering nutrients and carbon, as well as for growing biomass for energy production. Additionally, TWs provide a large number of ecosystem services. Integrating green infrastructure into urban developments can thus facilitate circular economy approaches and has positive impacts on environment, economy and health.

  10. C++, object-oriented programming, and astronomical data models

    NASA Technical Reports Server (NTRS)

    Farris, A.

    1992-01-01

    Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
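
    The generalization/specialization relationship described can be sketched in a few lines; Python is used here for brevity (the record's discussion is in C++ terms), with illustrative astronomical class names:

        class Observation:
            """Base class: common metadata, internals hidden behind methods."""

            def __init__(self, target, mjd):
                self._target, self._mjd = target, mjd    # information hiding

            def describe(self):
                return f"{self._target} at MJD {self._mjd}"

        class Spectrum(Observation):               # specialization of Observation
            def __init__(self, target, mjd, wavelengths, flux):
                super().__init__(target, mjd)
                self.wavelengths, self.flux = wavelengths, flux

            def describe(self):                    # overriding refines behavior
                return f"{super().describe()} ({len(self.wavelengths)} channels)"

        print(Spectrum("M31", 60000.5, [500, 600, 700], [1.0, 1.2, 0.9]).describe())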

  11. Integrated control-system design via generalized LQG (GLQG) theory

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.

    1989-01-01

    Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large scale, complex systems, such as high performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory will be reviewed, including both continuous-time and discrete-time (sampled-data) formulations.
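
    The Riccati machinery at the heart of LQG can be exercised in two calls; a minimal LQR sketch of the classical piece, using an illustrative second-order plant rather than the coupled Riccati/Lyapunov system of the GLQG theory itself:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])

        X = solve_continuous_are(A, B, Q, R)       # A'X + XA - XBR^-1B'X + Q = 0
        K = np.linalg.solve(R, B.T @ X)            # optimal feedback u = -Kx
        print("gain K =", K)
        print("closed-loop eigs:", np.linalg.eigvals(A - B @ K))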

  12. System-morphological approach: Another look at morphology research and geomorphological mapping

    NASA Astrophysics Data System (ADS)

    Lastochkin, Alexander N.; Zhirov, Andrey I.; Boltramovich, Sergei F.

    2018-02-01

    A large number of studies require a clear and unambiguous morphological basis. For over thirty years, Russian scientists have been applying a system-morphological approach in Arctic and Antarctic research, ocean-floor investigation, various infrastructure construction projects (oil and gas, sports, etc.), and landscape and environmental studies. This article is a review aimed at introducing this methodological approach to the international scientific community. Details of the methods and techniques can be found in a series of earlier papers published in Russian in 1987-2016. The proposed system-morphological approach includes: 1) partitioning of the Earth surface, i.e. precise identification of linear, point, and areal elements of topography considered as a two-dimensional surface without any geological substance; 2) further identification of larger formations: geomorphological systems and regions; 3) analysis of structural relations and symmetry of topography; and 4) various dynamic (litho- and glaciodynamic, tectonic, etc.) interpretations of the observed morphology. This method can be used to study the morphology of the surface topography as well as of less accessible interfaces such as submarine and subglacial ones.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@yauza.ru; Pylyov, S. S.

    This paper presents the formulation of the problem and the methodical approach for solving large systems of linear differential equations describing nonstationary processes with the use of CUDA technology; this approach is implemented in the ANGEL program. Results for a test problem on transport of radioactive products over loops of a nuclear power plant are given. The possibilities for the use of the ANGEL program for solving various problems that simulate arbitrary nonstationary processes are discussed.
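
    Systems of the kind described, dx/dt = Ax with constant A, admit the closed-form step x(t+h) = exp(Ah)x(t); a small dense sketch of that idea (the ANGEL program's method and its CUDA mapping are not reproduced):

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[-1.0, 0.1],                 # toy decay/transfer matrix
                      [0.0, -0.5]])
        x = np.array([1.0, 0.0])                   # initial inventory
        h = 0.1
        P = expm(A * h)                            # propagator for one step
        for _ in range(10):                        # march to t = 1.0
            x = P @ x
        print(x)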

  14. Modular Ligation Extension of Guide RNA Operons (LEGO) for Multiplexed dCas9 Regulation of Metabolic Pathways in Saccharomyces cerevisiae.

    PubMed

    Deaner, Matthew; Holzman, Allison; Alper, Hal S

    2018-04-16

    Metabolic engineering typically utilizes a suboptimal step-wise gene target optimization approach to parse a highly connected and regulated cellular metabolism. While the endonuclease-null CRISPR/Cas system has enabled gene expression perturbations without genetic modification, it has been mostly limited to small sets of gene targets in eukaryotes due to inefficient methods to assemble and express large sgRNA operons. In this work, we develop a TEF1p-tRNA expression system and demonstrate that the use of tRNAs as splicing elements flanking sgRNAs provides higher efficiency than both Pol III and ribozyme-based expression across a variety of single-sgRNA and multiplexed contexts. Next, we devise and validate a scheme for the modular construction of tRNA-sgRNA (TST) operons using an iterative Type IIs digestion/ligation extension approach, termed CRISPR-Ligation Extension of sgRNA Operons (LEGO). This approach enables facile construction of large TST operons. We demonstrate its utility by constructing a metabolic rewiring prototype for 2,3-butanediol production in two distinct yeast strain backgrounds. These results demonstrate that our approach can act as a surrogate for traditional genetic modification on a much shorter design-cycle timescale.

  15. A new look at the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1994-01-01

    The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As the optimizer, the code NPSOL was used, which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of standard FEM/optimization codes such as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid the ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations; this adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses, although convergence became slower for the 72-bar truss. When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
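
    The Kreisselmeier-Steinhauser aggregation referred to collapses many constraints g_i(x) <= 0 into one smooth conservative surrogate, KS(g) = g_max + ln(sum_i exp(rho (g_i - g_max)))/rho, which approaches max(g) from above as rho grows. A minimal sketch:

        import numpy as np

        def ks(g, rho=50.0):
            # Max-shifted form keeps exp() numerically stable for large rho.
            g = np.asarray(g, dtype=float)
            gmax = g.max()
            return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

        g = [-0.3, -0.05, -0.4]                    # three satisfied constraints
        print(ks(g))                               # slightly above max(g)
        print(ks(g, rho=500.0))                    # tighter toward max(g)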

  16. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems.

    PubMed

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-21

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also check to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
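
    At the fully discrete end of the hybrid spectrum sits Gillespie's stochastic simulation algorithm; a minimal sketch for a gene expression model with negative self-regulation (illustrative rate constants, with production repressed as the copy number grows):

        import numpy as np

        rng = np.random.default_rng(1)
        x, t, t_end = 0, 0.0, 100.0                # protein copy number
        k_prod, k_deg, K = 10.0, 0.1, 20.0

        while t < t_end:
            a1 = k_prod * K / (K + x)              # repressed production
            a2 = k_deg * x                         # first-order degradation
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)         # time to next reaction
            x += 1 if rng.random() < a1 / a0 else -1   # which reaction fired
        print(f"copies at t={t_end}: {x}")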

  17. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-01

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also check to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.

  18. A systematic approach to baseline assessment of nursing documentation and enterprise-wide prioritization for electronic conversion.

    PubMed

    Dykes, Patricia C; Spurr, Cindy; Gallagher, Joan; Li, Qi; Ives Erickson, Jeanette

    2006-01-01

    An important challenge associated with making the transition from paper to electronic documentation systems is achieving consensus regarding priorities for electronic conversion across diverse groups. In our work we focus on applying a systematic approach to evaluating the baseline state of nursing documentation across a large healthcare system and establishing a unified vision for electronic conversion. A review of the current state of nursing documentation across PHS was conducted using structured tools. Data from this assessment was employed to facilitate an evidence-based approach to decision-making regarding conversion to electronic documentation at local and PHS levels. In this paper we present highlights of the assessment process and the outcomes of this multi-site collaboration.

  19. Competitive adsorption in model charged protein mixtures: Equilibrium isotherms and kinetics behavior

    NASA Astrophysics Data System (ADS)

    Fang, F.; Szleifer, I.

    2003-07-01

    The competitive adsorption of proteins of different sizes and charges is studied using a molecular theory. The theory enables the study of charged systems explicitly including the size, shape, and charge distributions of all the molecular species in the mixture; this approach thus goes beyond the commonly used Poisson-Boltzmann approximation. The adsorption isotherms are studied for mixtures of two proteins of different size and charge. The amount of protein adsorbed and the fraction of each protein are calculated as a function of the bulk composition of the solution and the amount of salt in the system. It is found that the total amount of protein adsorbed is a monotonically decreasing function of the fraction of large proteins in the bulk solution and, for fixed protein composition, of the salt concentration. However, the composition of the adsorbed layer is a complicated function of the bulk composition and solution ionic strength. The structure of the adsorbed layer depends upon the bulk composition and salt concentration. In general, multilayers adsorb due to the long-range character of the electrostatic interactions. When the large proteins are in very large excess in the bulk, the structure of the adsorbed multilayer is such that the layer in contact with the surface is composed of a mixture of large and small proteins, while the second and third layers are almost exclusively composed of large proteins. The theory is also generalized to study time-dependent adsorption. The approach is based on a separation of time scales into fast modes for the salt ions and the solvent and slow modes for the proteins. The dynamic equations are written for the slow modes, while the fast ones are obtained from the condition of equilibrium constrained to the distribution of proteins given by the slow modes. Two different processes are presented: adsorption from a homogeneous solution onto a charged surface at low salt concentration with a large excess of the large proteins in bulk, and the kinetics of structural and adsorption changes induced by changing the salt concentration of the bulk solution from low to high. The first process shows a large overshoot of the large proteins on the surface due to their excess in solution, followed by surface replacement by the smaller molecules. The second process shows a very fast desorption of the large proteins followed by adsorption at later stages; this process is found to be driven by large electrostatic repulsions induced by the fast salt ions approaching the surface. The relevance of the theoretical predictions to experimental systems and possible directions for improvement of the theory are discussed.

  20. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.
